Self-driving cars, humans must learn a common language

SAN FRANCISCO — Anthropologist Melissa Cefkin has studied folk dancers in Turkey, salespeople in Spain and Germany, and bus dispatchers in the United States. Her research has focused on how people express themselves through body movements and form their identities through workplace practices.
These days, she’s applying her ethnographic skills to solving a more complex problem: figuring out how self-driving cars can talk to people.
Her team at Nissan Research Center in Sunnyvale observes the ways that humans and cars behave — “what happens on the road is very much like an impromptu choreography,” she said — then experiments with concepts that will allow autonomous cars to communicate their intentions to the people they’ll encounter.
“Cars are profoundly intertwined with our lives,” said Cefkin, principal scientist and design anthropologist at the research center. “The increasingly autonomous future will reconfigure how that will feel. What will it mean for these vehicles to be good citizens in the world? How will they interact with everybody else on the road? That’s a job for social scientists to understand.”
These are not trivial questions. For all the intricate technology required for autonomous cars — the sensors to replicate eyes and ears, the computers and algorithms to serve as the car’s brains, the high-definition 3-D maps to guide them — there’s another factor that computer science alone cannot solve: how these cars will engage with people — passengers, motorists, bicyclists and pedestrians — and vice versa.
“It’s crucial to make self-driving cars accepted in society so people feel they are trustworthy and part of daily life,” said Sameep Tandon, CEO and co-founder of Drive.ai, a Mountain View self-driving car company that is experimenting with rooftop displays to signal a car’s intentions. “Otherwise there’s a risk people will think of this as the robot apocalypse.”
The scores of carmakers, startups and Silicon Valley giants now working on autonomous technology are acutely aware that it won’t matter how exceptional their cars are if people are scared to ride in them, or be near them. For most of us, it requires a leap of faith to accept the idea of a robot-controlled, 2-ton hunk of metal hurtling down the highway at a mile a minute.
“Passenger trust is critical to establish before society will accept self-driving cars,” said Jack Weast, chief systems architect of Intel’s Autonomous Driving Group. “We need to make sure the cars not only work technically, but that they are embraced psychologically. The path to trust is a relationship with the machine.”
Of course, humans have a more than century-old relationship with traditional cars, resulting in ingrained behaviors that will be tough to overcome. Still, people have adapted to talking to their cell phones and relying on GPS for directions. And newer cars have introduced drivers to semiautonomous features such as adaptive cruise control and automatic lane-keeping.
But better vehicle-to-person communication will be crucial, as most experts predict a lengthy period — perhaps decades — of mixed traffic, with robot cars navigating roads alongside human-driven ones. Even once all cars are robots and can communicate with each other, they will need to “talk” to pedestrians and cyclists.
This human-machine interface issue has drawn researchers from a host of social science disciplines, including anthropology, sociology and psychology, as well as roboticists, engineers, programmers and designers. A number of experiments are under way.
A few companies, including Waymo, Uber and nuTonomy, have living laboratories on the road, providing robot-car rides to civilians, observing them and interviewing them about their experiences.
But those are expensive yet limited endeavors (today’s few hundred autonomous vehicles cost upward of $100,000 each), and testing them on the street without a driver is banned nearly everywhere. So researchers are getting creative.
Some rely on technology — immersive driving simulators and robot cars that exist only digitally, as algorithms on a computer, for instance.
Others employ stagecraft techniques, such as a “ghost driver” experiment in which the driver behind the wheel is disguised as a car seat, essentially becoming invisible. That lets researchers gauge the reactions of passersby to what appears to be a driverless car.
“Our techniques are theater-like ways of simulating the future, like live-action, improvisational role-play for science,” said Wendy Ju of Stanford’s Center for Design Research, who spearheaded the ghost driver effort. “There’s a comedy to it, but we are dead serious about collecting real behavioral responses.”
The future of driving may just depend on it.
The simple vocabulary most cars now employ — turn signals, brake lights, hazard lights, horns — may need to be radically expanded once autonomous driving eliminates the human element. That’s true for situations ranging from the seemingly simple, like a pedestrian making eye contact with a driver before crossing in front of a car, to the more complex, like negotiating four-way stops and highway lane changes.
New communication methods could include patterned lights; audible cues (perhaps a polite voice saying “Cross now,” or a musical tone as at some stoplights); rooftop displays showing symbols or words; laser devices to project a message such as a crosswalk on the road ahead to indicate that it’s safe to cross; and cars that wirelessly transmit their intentions to other vehicles.
“There’s so much rich interaction with drivers that we take for granted. It seems like a mundane thing, but it turns out to be a really big deal,” said Karl Iagnemma, CEO and co-founder of nuTonomy. His company has been testing robot taxis in Singapore for almost a year and plans similar tests soon in Boston with ride-hailing service Lyft. “I wouldn’t be surprised if there’s a true reinvention of (autonomous cars’) interface to the outside world.”

A nuTonomy car in Boston, fitted with lasers and scanners that translate the surrounding street into a view for the self-driving car.
At Nissan, Cefkin’s team assesses how people and cars interact today and experiments with ways they might do so in the future.
For instance, they’re testing “intention indicators” — mechanisms for robot cars to signal their next move, something particularly important when cars arrive at a four-way stop at the same time.
One concept is an arc of white LED lights atop a car’s roof that would blink in a moving dash pattern when the car is about to move. Nissan filmed cars arriving simultaneously and then proceeding at four-way stops in the Sunset District. It added the simulated lights in postproduction, then showed the videos to test subjects.
One of them, Mirza Baig, sat in a mock-up of a driver’s seat in Nissan’s lab, gripping a steering wheel as he “drove” while viewing a video of the cars on three 42-inch screens. As his feet hovered over a gaming system’s accelerator and brake pedals and goggles tracked his eye movements, researchers tried to determine whether he could intuitively grasp what the cars were signaling.
Cefkin’s team is trying to keep its technology simple. “No one will want their car to look like a moving Christmas tree,” she said. “So we are actively exploring use of just a single color (of light) or a small variety of shades.”
So far, each company is devising its own approach to the communications question. At some point, the companies will need to get together and establish standard methods. But that could be a long process.
Even something as simple as turn signals took years to become reality. As cars evolved in the last century, hand signals gave way to devices including “trafficators,” semaphore signals similar to those on trains; hand-shaped mechanical arms; and pop-up signs on the rear bumpers. The blinking-light turn signal was patented in the 1920s but didn’t become common until after World War II.
Anca Dragan wants autonomous cars to understand people.
Dragan, an assistant professor of electrical engineering and computer science at the University of California at Berkeley and head of its InterACT Laboratory, which focuses on human-robot interactions, said driverless cars will need to predict what humans on the road will do — and figure out how to behave appropriately around them.
“Cars need to be very expressive and clear about their intentions,” she said. “They need reasoning to understand intentions of other drivers and pedestrians.”
That is easier said than done. As she and a student scribbled formulas calculating potential trajectories of other vehicles, Dragan traced elaborate swirls in the air. “How can I write an equation to describe how people drive?” she said.
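One common way researchers in this field formalize that question — a general modeling idea, not necessarily this lab’s exact formulation — is to assume drivers are approximately rational, choosing actions that roughly maximize some reward:

\[
P(u_H \mid x) \;\propto\; \exp\big(\beta \, R_H(x, u_H)\big)
\]

where \(x\) is the current traffic state, \(u_H\) is a candidate human action, \(R_H\) is the driver’s (unknown) reward function, and \(\beta\) measures how reliably the driver picks good actions. Inverting the model — inferring \(R_H\) from observed driving — gives a car a way to predict what a human will do next.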
Her autonomous vehicle now exists only as a computer model, but it has learned how to determine other drivers’ intentions when attempting to merge into lanes. It simply nudges toward the adjacent lane and detects whether the simulated human driver hits the brakes or the accelerator, fairly similar to how most people change lanes.
“Those reactions tell it about your driving style, so it can anticipate whether it should merge or let you go first,” she said. “We’re excited to see that the car can ‘reason’ properly about people.”
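The nudge-and-observe behavior described above can be sketched as a simple belief update. This is a hypothetical illustration of the idea, not Dragan’s code; the function names and probabilities are invented:

```python
def update_belief(prior_yielding, observed_accel):
    """Bayesian update after the car nudges toward the lane:
    a yielding driver tends to brake (negative acceleration),
    an assertive one tends to speed up (positive acceleration).
    The likelihood numbers below are made up for illustration."""
    p_obs_if_yielding = 0.9 if observed_accel < 0 else 0.1
    p_obs_if_assertive = 0.2 if observed_accel < 0 else 0.8
    numerator = p_obs_if_yielding * prior_yielding
    denominator = numerator + p_obs_if_assertive * (1 - prior_yielding)
    return numerator / denominator

def decide_merge(belief_yielding, threshold=0.7):
    """Merge only once the car is confident the other driver will yield."""
    return "merge" if belief_yielding > threshold else "wait"

belief = 0.5                                          # no information yet
belief = update_belief(belief, observed_accel=-1.2)   # human brakes after our nudge
print(decide_merge(belief))                           # → merge
```

A few observations like this are enough to shift the belief sharply, which is why a brief nudge reveals so much about the other driver’s style.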
Even more strikingly, the simulated car devised its own way to communicate at four-way stops. To indicate to other cars that they should go first, it will inch backward (if there’s no other car behind it).
“It was surprising behavior, but it works” to communicate intent, Dragan said.
Being able to learn and adapt to the constantly changing circumstances of traffic will be crucial for self-driving cars.
Robot cars trained to avoid obstacles at all costs, for instance, are vulnerable to pranksters, who could create instant traffic jams just by stepping in front of the cars. Google found that its self-driving cars were programmed to be so “polite” that they became stranded at four-way stops, yielding to every other car that approached.
“So many complicated things can happen in the real world,” said Drive.ai’s Tandon. “If you program a rule for every single case, you’d have a decision tree so complicated no one could deal with it. Instead we use deep learning to make the process go seamlessly. We want our vehicles to learn from as much data as possible.”
While they can’t make eye contact, some self-driving vehicles already can interpret a limited number of visual cues from humans. Waymo’s self-driving minivans, for example, can read the hand signals of bicyclists on the road, said Waymo engineer James Stout.
But replicating human behavior remains tricky. Waymo deals with it, in part, by using an elaborate computer simulation program that takes situations encountered by its cars and re-creates them in a virtual world. Each simulation replicates a particular moment at a particular location in California, Arizona or Texas, including the behavior of nearby pedestrians and human drivers. Waymo can then run a virtual car through that situation again and again, teaching it how to respond.
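That replay loop can be pictured with a toy, one-dimensional stand-in — purely illustrative, not Waymo’s software: a recorded moment is re-run repeatedly while one policy parameter is adjusted until the virtual car handles it safely.

```python
def run_scenario(brake_distance, pedestrian_at=12.0):
    """Replay one recorded moment: a pedestrian stands 12 units ahead.
    The virtual car starts at 0 moving 1 unit per step, and brakes once
    the pedestrian is within brake_distance. (All numbers are made up.)"""
    pos, speed = 0.0, 1.0
    for _ in range(30):                      # simulate 30 time steps
        if pedestrian_at - pos <= brake_distance:
            speed = max(0.0, speed - 0.5)    # decelerate
        pos += speed
        if pos >= pedestrian_at and speed > 0:
            return False                     # still moving at the pedestrian: fail
    return True                              # stopped short: pass

# Re-run the same scenario again and again, adjusting the policy until it passes.
brake_distance = 0.0
while not run_scenario(brake_distance):
    brake_distance += 0.5
print(brake_distance)                        # → 1.0
```

The real systems tune far richer policies against thousands of recorded scenarios, but the principle is the same: fail in simulation, adjust, and replay until the behavior is safe.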
Nissan researchers also study a wide variety of driving practices. Video shot at a roundabout in São Paulo captures the traffic intricacies of even the most ordinary day. As cars, bikes, scooters and motorcycles navigate the circle, pedestrians, some wheeling baskets of produce, wait to cross the roadway. Delivery people on motor scooters weave in and out of traffic. A car suddenly backs up; a woman wanders in and out of the crosswalk.
“We’ve got to be ready for any situation, and there are good and bad drivers everywhere,” said Stout, lead software engineer for Waymo’s simulation project. “We’re not going to be able to shape the world around us. We have to be able to deal with the world as it is.”
Carmakers are eschewing traditional rules-based robotics in favor of deep learning, which relies on algorithms inspired by the brain’s structure and function.
“Deep learning gives the ability to improve, adapt to different circumstances, use ‘intuition’ to improve abilities,” said Tory Smith, a Drive.ai technical program manager.
Essentially, robot-car makers see teaching the cars to “think” as the best way for them to cohabit with humans.
Though she works in tech at Arizona State University, Sara Bryant is cautious about leaping into innovations. So she was taken aback when she requested an Uber ride to work from her Scottsdale home recently and got a message saying she’d been assigned a self-driving car. She had the option to decline, but her curiosity won out and she agreed to the ride.
Her verdict? “It was enchanting,” she said. “I don’t think we’re ‘Back to the Future’ yet, but it was seamless and smooth; it was never frightening.”
That was largely because Uber’s robot taxis come with two humans, a backup driver and a safety engineer, both well-versed in a spiel about how the cars work. “I would not have gotten in if there had not been attendants,” Bryant said. “To me, it’s a great unknown what will happen once these are on the road without human drivers.”
She also appreciated the touch-pad in the back seat that greeted her with a “Welcome, Sara” message and a “Let’s ride” button she pressed to begin the trip. It then displayed a robot’s-eye view of her route, a colorful 3-D rendering of everything from the lane markers to the bikes, buildings and trees along the way.
The interactive screen even had a selfie option: “I took one and texted it to my family and friends,” she said.
Uber has about 100 autonomous cars available to ride-hailing passengers in the Phoenix area and in Pittsburgh (its autonomous vehicles in San Francisco don’t pick up customers), and says it has given almost 30,000 paid robot rides.
Waymo is also testing autonomous cars in Arizona, by recruiting families to volunteer for rides in its robotic Chrysler Pacifica minivans to all their daily activities: school, work, sporting events, nights out. Intel recently invited a handful of Arizona residents to ride in its autonomous cars and provide feedback, while other companies, including Apple at its Cupertino headquarters and General Motors’ Cruise division in San Francisco, have set up ride services for their own employees.
These early tests seek to understand what makes people feel comfortable in autonomous cars — although the experiments are all skewed by the fact that the cars have drivers ready to take over.
Participants in Intel’s Arizona study were initially anxious and skeptical, mirroring a recent AAA report that three-quarters of Americans said they are afraid to ride in robot cars.
Intel’s Weast said explaining how the technology works — and simply taking a ride — helped change their minds.
“The car would communicate, tell people what it was doing and why,” such as needing to reroute for a detour, he said. “People appreciated that the car told riders what they need to know.” Soon enough, he said, riders got to a point of not needing any more updates, and even wanted to shut off further communication.
Uber’s back-seat iPad similarly is designed to build trust, by showing ride information and a robot’s-eye view of the route.
Intel found that it helped for the car to have anthropomorphic features, such as a voice interface a la Siri or Alexa. In fact, people who had used such devices soon jumped into conversing with their cars, Weast said.
According to a research study of 100 participants using a driving simulator reported in the Journal of Experimental Social Psychology, people are less likely to blame autonomous cars when something goes wrong if the cars have a name, gender and human voice.
In Singapore, some nuTonomy riders took it upon themselves to assign human characteristics to their car. “We found that comparing the ride they experienced to a person helped them become more relaxed and confident in the technology,” Iagnemma said. “One woman compared it to the way her grandmother drove.”
On the edge of the Stanford University campus, a black Nissan Leaf sits parked in a cluttered workshop. The car appears empty — until the driver’s seat moves.
Behind the steering wheel, a graduate student is encased in a suit designed to look like a black fabric car seat. Dark gloves hide his hands, which are held at the bottom of the wheel, out of sight. A black mesh screen masks his eyes.
It’s part of an experiment by Ju, the Stanford researcher, and colleague David Sirkin, who study how humans and machines interact. The ghost driver test examines how people react to a car that truly seems to be driving itself.
While autonomous vehicles have become a daily sight in parts of the Bay Area, they always have a human in the driver’s seat — required by the state. So Ju and Sirkin needed a way to fool pedestrians. As their ghost driver cruises through streets with pedestrians, they watch how people respond and interview them afterward.
“The heavier the pedestrian traffic, the more interesting it is,” Sirkin said, as students began building another suit, pulling supplies from a Joann craft store bag.
Autonomous cars will need to understand how humans act, not just when they are driving other cars, but on foot. Human drivers and pedestrians can signal each other with eye contact and hand or body gestures, but robots may not be able to sense such nuanced activity. Pedestrians, meanwhile, may not be able to predict how an autonomous car will behave at a stop sign or crosswalk — assuming they realize the car is autonomous.
A similar seat-suit experiment in Virginia drew attention in August when NBC reporter Adam Tuss spotted a gray, unmarked van with no one visible in the driver’s seat. After pulling up nearby and seeing the hidden driver’s hands on the wheel, Tuss tried an unorthodox approach to interaction: “Brother, who are you? What are you doing? I’m with the news, dude.”
The driver remained silent. But a month later, Ford fessed up: It had commissioned the Virginia Tech Transportation Institute to run the test. The van had a “light bar” that pulsed when yielding and blinked rapidly when accelerating from a stop. John Shutko, a Ford technical specialist, wrote that the company hoped to develop a common language for self-driving cars.
“As people, we’re constantly giving off expectations based on what we do, how we move,” said Ju, the executive director of interaction design research. “Basically, machines that interact with humans are going to have to learn that game as well.”
And learning just one set of expectations won’t suffice.
People act differently in different parts of the globe. Boston drivers are famously aggressive; Los Angeles drivers, fast and precise; Wisconsin drivers, slow and mostly polite. Tokyo pedestrians, meanwhile, can be counted on to wait for lights before crossing a street, while Romans are more willing to step into traffic.
Ju has talked with autonomous vehicle engineers who are convinced that as long as the cars behave consistently, humans will adapt to them, suggesting that pedestrians and human drivers everywhere will modify their behaviors to accommodate robot cars.
She considers that notion “wildly unrealistic.”
So how have pedestrians responded to her ghost driver car? It depends, Ju says. Stanford students largely ignore the sight of a car with no apparent driver. So do pedestrians in Mexico City. In the Netherlands, people approach the car, intrigued.
“The main thing that happens is surprising,” Ju said. “Most people decide, ‘I’m just going to cross'” in front of the seemingly driverless car. “They don’t spend a lot of time trying to figure out what’s happening.”
Engineering may be able to make cars more human, it seems, but changing people may be a bigger challenge.