Case Study: Supervised Machine Learning Trainer 3607A
Student project at Copenhagen Institute of Interaction Design dealing with the future of work.
In the near future, cities are filled with smart infrastructure such as decentralized security cameras, self-sorting trashcans, and intelligent street lights. But who do you call when smart things break? The future smart city is not a sci-fi dystopia made of glass, concrete, and job-stealing robots. It’s a place much like our own, filled with the banality of everyday life and mundane jobs. Regardless of how you imagine the future smart city, someone needs to get in their white van, take out their ladder, and fix broken things.
We believe that mundane maintenance jobs are not just going to disappear when our cities adopt machine learning driven technology. Behind the algorithms and machines are human decisions and biases. We believe that a new class of blue collar jobs such as photo tagging and data set generation for machine learning algorithms will become prevalent.
The Supervised Machine Learning Trainer 3607A (SMLT-3607A) is a design fiction object aimed at exposing the humanness and mundaneness of the future smart city. Any maintenance person, regardless of familiarity with machine learning, can use the SMLT to interface with abnormally behaving smart infrastructure such as a surveillance camera. The SMLT is an industrial-grade controller that allows a maintenance person to re-train the smart camera by recording new examples in real time. The future maintenance worker will teach the camera what it’s seeing and curate the training data set. They will help the camera learn the difference between people and objects and decide who should be classified as an upstanding citizen or a petty criminal.
The SMLT 3607A was a student project created by Benedict Hübener, Keyur Jain, and James Zhou at Copenhagen Institute of Interaction Design from the course Work Intelligence taught by Simone Rebaudengo, Josh Noble, and Bjorn Karmann.
Why did you create the SMLT-3607A?
Artificial intelligence and automation are having a profound impact on future jobs. However, the conversations in the media around job automation tend to lead to doomsday scenarios. Search online for artificial intelligence, machine learning, and jobs and you will see Terminator-esque photos and predictions. The concept of the “Future Mundane” popularized by Nick Foster is an inspiration for our project. We believe our future is not made of transparent glass and Minority Report style interfaces. Our future will be much more mundane, a place where the old exists alongside the new.
As designers, we have the ability to craft a piece of the future so that people can touch and feel it. We can make “Knotty Objects” that represent and hint at larger systems. We created the SMLT-3607A to shift the conversation around automation and jobs. We want people to talk about how we can co-exist with automation and machine learning, and how we can embrace a future with artificial intelligence.
How did you pick the topic of smart city maintenance?
At the start of the project, we conducted observational research in the city of Copenhagen. We observed and spoke to people at work. We talked to a city maintenance worker, a scaffolding worker, and a meter attendant. We spoke to them about their jobs and the effects of automation. We asked them to show us their tools. The most interesting person we spoke to was Kal, the city maintenance worker. Kal has a whole cart of tools that he wheels around. This inspired us to think about how Kal’s job will be affected by automation, and prompted us to think about the tools of the future city maintenance person.
What is in the future smart city?
In order to create a believable future, we looked at “weak signals” of present cities. We spoke to city planners and visited a street where the City of Copenhagen is testing smart city technologies. We conducted secondary research and looked at papers and articles on smart cities. All of the signals suggest that the future smart city will be filled with cameras and machine learning algorithms.
As a result, we imagine that the future smart city will be filled with surveillance cameras that are context-aware and have machine vision capabilities. Instead of relying on humans to monitor the security footage and identify issues, the surveillance camera can flag behaviors or people. For example, a surveillance camera might be trained to identify cyclists, pedestrians, and so on. Or it might be trained to identify a fist fight or someone who is littering. When the surveillance camera sees something prohibited, it would automatically alert the appropriate authorities. But because machine vision depends on robust datasets, having a well-crafted dataset is important. We imagine the future city maintenance person will be asked to maintain both the infrastructure and the integrity of the dataset.
How does the SMLT-3607A work?
City infrastructure breaks all the time. The smart surveillance camera is no exception. The SMLT-3607A helps the city maintenance person fix and retrain misbehaving surveillance cameras. To use the SMLT-3607A, a city maintenance person plugs the kit into the camera. They monitor the footage and see what the camera is mislabeling. Then the maintenance person selects the class that is being mislabeled. They press the record-example button every time the camera sees the correct person, object, or action. The recorded example is then added to the dataset of the algorithm. Eventually, after many correct examples have been added to the dataset, the surveillance camera will be able to classify correctly.
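The retraining loop described above, select a class, press record on each correct example, accumulate a dataset, can be sketched in a few lines. The names here are hypothetical; the SMLT-3607A is a design-fiction prop, so this is only an illustration of the workflow, not its implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingSet:
    """Labeled examples waiting to be fed back into the camera's model."""
    examples: dict = field(default_factory=dict)  # label -> list of frames

    def record(self, label, frame):
        self.examples.setdefault(label, []).append(frame)

def retrain_session(camera_frames, selected_label, button_pressed):
    """One maintenance session: the worker has selected the mislabeled
    class, and each press of the record-example button adds the current
    frame to the dataset under that class."""
    dataset = TrainingSet()
    for frame in camera_frames:
        if button_pressed(frame):  # worker confirms this frame shows the class
            dataset.record(selected_label, frame)
    return dataset

# Example session: the worker records two of three frames as "cyclist".
session = retrain_session(["frame1", "frame2", "frame3"], "cyclist",
                          lambda frame: frame != "frame2")
```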
The Design of the SMLT-3607A
The interface and operation of the SMLT-3607A are deliberately simple and repetitive. We believe that in the future, machine learning technology is going to be normalized. The repetitive task of dataset capturing and labeling is going to be a new blue collar job. The SMLT-3607A will be a tool that would allow anyone to interface and work with machine learning technology.
At the moment, the SMLT-3607A exists as a prop with an embedded Arduino. The screen is an iPhone controlled by a laptop. To continue with the project, we would hook up all of the buttons and knobs to a Wi-Fi-enabled Arduino. The Arduino would send all of the input signals to the machine learning software Wekinator. Wekinator gives us access to machine learning capabilities and can send its output to a Java or Processing sketch.
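Wekinator receives its inputs as OSC messages, by default on UDP port 6448 at the address "/wek/inputs". A minimal sketch of what the controller would send, here in Python rather than on the Arduino itself, might look like this; the three knob readings are made-up values for illustration:

```python
import socket
import struct

def osc_message(address, floats):
    """Pack a minimal OSC message: address pattern, type-tag string,
    and big-endian float arguments."""
    def pad(raw):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return raw + b"\x00" * (4 - len(raw) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for value in floats:
        msg += struct.pack(">f", value)
    return msg

# Send three pretend knob readings to Wekinator's default input port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/wek/inputs", [0.4, 0.7, 0.1]), ("127.0.0.1", 6448))
```

In practice an OSC library (oscP5 in Processing, python-osc in Python) would handle the packing; the hand-rolled version just shows that the wire format is simple enough for a microcontroller.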
James Zhou is an Interaction Designer with a background in philosophy currently based in Chicago. He is interested in the intersections where design meets impact, tangibility meets code, and human meets AI.
Keyur Jain is an engineer turned interaction designer currently working at Omio in Berlin, where he is helping reimagine travel planning experiences. Across his work, he is interested in the combination of systems and stories as an approach to solving problems and bringing clarity to the relationships between people and technology.
Benedict Hübener is a designer and consultant from Berlin who uses a rapid build-measure-learn approach to discover new business opportunities. He is always interested in finding the best solution for even the strangest problems.