Recently, while on my way to the University of Pittsburgh’s campus, I made a quick “Pittsburgh left” – turning left just as the light turned green – in front of an oncoming driverless car.
Instead of jolting forward or honking – as some human drivers would be tempted to do – the car allowed me to go. In this case, the interaction was pleasant. (How polite of the car to let me cut it off!)
But as a sociolinguist who studies human-computer interaction, I started thinking about how self-driving cars will communicate with the human drivers they encounter on the road. Driving can involve a range of social signals and unspoken rules, some of which vary by country – even by region or city. How will driverless cars be able to navigate this complexity? Can they ever be programmed to do so?
We know that driverless cars are equipped with a technology called LIDAR, which creates a 360-degree image of the car’s surroundings. Image sensors interpret signs, lights and lane markings. A separate radar detects objects, while an onboard computer combines all of this information with mapping data to guide the car.
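To make that pipeline concrete, here is a deliberately toy sketch in Python of how readings from those sensors might be fused into a single driving decision. Every class name, threshold and rule below is invented for illustration; a production system relies on far more sophisticated perception and planning.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified sensor records. Real perception
# stacks work with dense point clouds and neural networks, not
# flat summaries like these.

@dataclass
class LidarObject:
    distance_m: float       # range from the 360-degree point cloud
    bearing_deg: float

@dataclass
class RadarObject:
    distance_m: float
    closing_speed_mps: float  # radar measures relative speed directly

@dataclass
class CameraReading:
    traffic_light: str      # e.g. "red", "green", "unknown"
    lane_clear: bool        # lane markings show an open lane ahead

def plan_step(lidar: list[LidarObject],
              radar: list[RadarObject],
              camera: CameraReading,
              speed_limit_mps: float) -> str:
    """Fuse the three sensor feeds plus map data into one decision."""
    # A fast-closing object within 20 m triggers a brake, regardless
    # of what the camera reports: the feeds cross-check each other.
    for obj in radar:
        if obj.distance_m < 20 and obj.closing_speed_mps > 0:
            return "brake"
    if camera.traffic_light == "red" or not camera.lane_clear:
        return "stop"
    # Lidar confirms the path is physically clear before proceeding.
    if all(o.distance_m > 30 for o in lidar):
        return f"proceed at up to {speed_limit_mps} m/s"
    return "slow"

print(plan_step([LidarObject(45.0, 0.0)],
                [RadarObject(60.0, -2.0)],
                CameraReading("green", True),
                speed_limit_mps=11.0))
```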
But any autonomous vehicle will also need to interact with traditional cars and their drivers, as well as with pedestrians and cyclists, and to handle unforeseen events like lane closures, disabled stop lights, emergency vehicles and accidents.
This is where things can get murky.
For example, if you’re driving and pass a speed trap, you might flash your headlights at drivers coming in the other direction to let them know. But flashing headlights can also mean “your high beams are too bright,” “you forgot to put your headlights on” or “go ahead” in situations where it’s unclear who has the right of way. In order to interpret the meaning, a person will consider the context: the time of day, the type of road, the weather. But how would an autonomous vehicle react?
There are other forms of communication that help us navigate, ranging from honks and sirens to hand signals and even bumper stickers.
Humans use all sorts of hand gestures: waving a car ahead, signaling another driver to slow down, even giving the finger when angry. Sounds can communicate love, anger, arrivals, departures, warnings and more. Drivers can express total disapproval with a hard, extended blast of the horn. And emergency sirens tell drivers to make way.
But specific meanings can vary by region or country. For example, a few years ago, Public Radio International ran a story about the language of honking in Cairo, Egypt, which is “spoken” primarily by men. These honks can have complex constructions; for example, four short honks followed by a long one mean “open your eyes,” a warning to someone who is not paying attention.
In Pittsburgh, people tend to honk before going through a short, narrow or curvy tunnel. In Morocco, where I’m originally from, drivers use a sequence of honks when passing: once before passing to secure cooperation, again while passing to signal progress, and once more afterward to say “thank you.” Yet this might be confusing – or even perceived as rude – to drivers in the U.S.
Written communication also plays a role between cars and drivers. For example, signs such as “Baby on Board” or “Students on Board” are supposed to encourage the drivers following these vehicles to be even more careful. Bumper stickers like “Caution: Wide Right Turn” or “This Vehicle Makes Frequent Stops” can be critical to safety.
Vehicles can be taught to “read” road signs, and thus presumably can be taught to recognize common warnings on bumpers.
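As a rough sketch of why that seems plausible, off-the-shelf computer-vision libraries can already pull printed text out of an image and match it against a list of known warnings. Everything here (the file name, the warning list, the preprocessing steps) is invented for illustration:

```python
import cv2
import pytesseract

def read_bumper_text(image_path: str) -> str:
    """Extract printed text from a photo of a vehicle's rear.

    A toy illustration: a real system would first localize the
    sticker and handle glare, angle and motion blur.
    """
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding makes high-contrast printed text easier to OCR.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

KNOWN_WARNINGS = {"WIDE RIGHT TURN", "FREQUENT STOPS", "BABY ON BOARD"}

text = read_bumper_text("bumper.jpg").upper()
if any(w in text for w in KNOWN_WARNINGS):
    print("Warning sticker detected:", text.strip())
```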
Yet navigating construction sites or accident scenes may require following directions from a human in a way that cannot be programmed. This creates a huge opportunity for error. Because hand signals vary widely from region to region (and even person to person), autonomous cars could fail to recognize a signal to go or, more catastrophically, could mistakenly follow a hand gesture into a barrier or another car.
This gives me pause: How much knowledge about our societal and linguistic values is built into these systems? How can driverless cars learn to interpret hand and auditory signals?
Google cars can apparently recognize cyclists’ hand signals, but what if a cyclist doesn’t use the standard signals? Who gets to embed the algorithm in the machine, and how are sociolinguistic values assigned?
In my encounter, the self-driving car was very polite and didn’t honk or otherwise chastise me for my behavior (though the human passenger did communicate his displeasure with a gaze). But had I waved it ahead of me, would it have been able to respond appropriately? A 2015 story in Robotics Trends described how a cyclist and a Google car got stuck in a standoff when the car misread the rider’s signals.
Cities (and countries) possess a variety of sociolinguistic cues. It remains to be seen if the engineers working on driverless cars will be able to program these subtle – but important – differences into these vehicles as more and more appear on the roads.
Abdesalam Soudi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.