"Who Wants to be a Self-driving Car?" is a data driven trust exercise that uses augmented reality to help people empathize with self-driving vehicle systems. We built an unconventional driving machine that lets people use real-time, three dimensional mapping and object recognition displayed in a virtual reality headset to navigate through space.
Discussions around the future of mobility currently focus, among other things, on the possible large-scale changes posed by self-driving cars. Of particular interest are the technical feasibility and business viability of self-driving car systems, and the questions arising around the many ways our lives will be affected by artificial intelligence. In most cases, these challenges and questions are addressed behind closed doors and in domain-specific contexts, and are often inaccessible to the wider public.
Our goal is to bring people closer to these discussions by experimenting with new, immersive media experiences and interactive prototypes. What happens when people are able to empathize with self-driving cars? What might we discover when people are forced to "see the world" through sensors and make decisions based on data, probabilities, and statistics? And how can we use new technologies like VR to better reach audiences who might not typically engage with these topics?
The project aims to be a prototype for empathy, creating situations in which the driver and nearby bystanders become part of a conversation about responsibility, (dis)trust, fascination, and perhaps, hope for a world that is moving towards autonomous mobility.
Join the discussion by posting to the comments and taking our survey at the bottom of the page!
The moovel lab collaborated with MESO Digital Interiors to prototype this immersive experience. The idea was to make a machine that replaces the human senses with the sensors a self-driving car might use. Our unconventional driving machine is essentially a steel-frame buggy with in-wheel electric motors, complete with hydraulic braking. Drivers lie head-first on the vehicle, a position that enhances the feeling of immersion (and vulnerability) created during the experience. A physical steering wheel controls the turning of the vehicle.
The VR experience is created using data collected by the sensors outfitted on the driving machine. The main view is a presentation of data from a 3D depth camera (a ZED Stereo Camera) that uses stereoscopic imaging to map the landscape in real time. The 3D mapping of the vehicle's vicinity is supplemented with visual object detection, which runs the YOLO library on data from a standard web camera and helps the driver better understand what's around them.
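To give a feel for what the object-detection layer contributes, here is a minimal sketch of the kind of data a YOLO-style detector emits per frame and how an overlay might filter it before showing it to the driver. The `Detection` structure, field names, and the 0.5 confidence threshold are illustrative assumptions, not the project's actual code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str                        # class name reported by the detector, e.g. "person"
    confidence: float                 # detector confidence in [0, 1]
    bbox: Tuple[int, int, int, int]   # x, y, width, height in camera pixels

def overlay_candidates(detections: List[Detection], min_conf: float = 0.5) -> List[Detection]:
    """Keep only detections confident enough to label in the headset view."""
    return [d for d in detections if d.confidence >= min_conf]

# Example frame: two confident detections and one noisy, low-confidence one.
frame = [
    Detection("person", 0.92, (310, 120, 80, 200)),
    Detection("bicycle", 0.71, (150, 200, 120, 90)),
    Detection("dog", 0.18, (40, 300, 60, 40)),
]
print([d.label for d in overlay_candidates(frame)])  # → ['person', 'bicycle']
```

Filtering by confidence keeps the headset view legible: the driver sees a few trustworthy labels rather than every flicker of the detector.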
A video camera mounted on the back of the vehicle allows the driver to see while in reverse. Lastly, a light detection and ranging (LIDAR) sensor (a Slamtec RPLidar) adds an additional layer of distance sensing by sending out pulses of light toward nearby objects and calculating the round-trip return time.
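The round-trip calculation described above reduces to one line of arithmetic: the pulse travels to the object and back at the speed of light, so the distance is half the path covered in the measured time. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_s: float) -> float:
    """Distance to a target from a time-of-flight measurement.

    The pulse covers the sensor-to-object gap twice (out and back),
    so the one-way distance is half the total path length.
    """
    return C * round_trip_s / 2.0

# A return time of ~33.4 nanoseconds corresponds to roughly 5 metres.
print(round(lidar_distance(33.36e-9), 2))  # → 5.0
```

The tiny time scales involved (tens of nanoseconds for obstacles a few metres away) are why ranging hardware needs very fast timing electronics.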
These components are pulled together in the VR headset (an Oculus Rift) using VVVV and the Unity game engine, giving the driver data that they must interpret to navigate the driving machine through space, essentially replacing the control unit of an autonomous vehicle with a person. There are two computers on board the buggy: a PC and an NVIDIA Jetson TX2. The PC takes in the data from the 3D depth camera and the LIDAR, and also receives the detected objects from the Jetson TX2 board running the YOLO object detection software. The detected objects are sent via OSC to VVVV, which composes the visualization described above, all at 90 frames per second.
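OSC (Open Sound Control) is a simple binary format commonly carried over UDP, which makes it a lightweight way to ship detections from the Jetson to the PC running VVVV. The sketch below hand-encodes one OSC message in pure Python to show the wire format; the `/detection` address and the label-plus-confidence payload are assumptions for illustration, not the project's actual message schema.

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode one OSC message: address, type-tag string, then big-endian arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        else:
            tags += "s"
            payload += _osc_string(a)
    return _osc_string(address) + _osc_string(tags) + payload

# One detected object: a label and its confidence, addressed to a
# hypothetical "/detection" endpoint on the VVVV machine. The packet
# would then go out over plain UDP, e.g. with socket.sendto().
packet = osc_message("/detection", "person", 0.92)
```

In practice a library such as python-osc would do this encoding, but the format is small enough that seeing it spelled out clarifies what actually crosses the wire between the two machines.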
"Who wants to be a self-driving car?" is a tool to explore the technology behind self-driving cars from a human perspective. The project is intended to be used as an accessible platform for all people to share their perceptions, feelings, and thoughts on the many ways that sensors, data, computation and mobility are intertwined. The project therefore becomes a medium through which the blackbox of self-driving technologies can be better expressed and read. The anecdotal evidence of people's' interactions with the project vary from critical to hopeful and terrified to empowered; some of these reactions can be see in the project documentation video above.
By "becoming a self-driving car", we hope that people will be able to identify with the strengths and challenges that self-driving technologies will introduce. For most, the project is an entry point into a conversation about self-driving cars and their possible impacts on society (e.g. how we structure our cities, car sharing, goods delivery). For some, however, the project prompts deeper questions the new ways that artificial intelligence, sensors, and mobility are connected. It isn't certain when self-driving cars will hit the road on a large scale, but how self-driving cars will affect society as a whole may depend on how people are able to engage with these future mobility topics. What do you think it's like to be a self-driving car?