When Uber’s self-driving Volvos hit the streets of Pittsburgh later this summer, each vehicle will be crewed by a couple of Uber employees, just in case the car’s robo-driver goes rogue. But pay no attention to the humans in the front seats.
“The goal is to wean us off of having drivers in the car, so we don’t want the public talking to our safety drivers,” said Uber engineering director Raffi Krikorian. To that end, Uber plans to install a tablet in the backseat of each of its autonomous vehicles. The company’s new human-machine interface will reportedly introduce riders to the autonomous driving experience, and explain the technology behind it.
We’ve yet to see that interface for ourselves, but it’s safe to assume the challenges Uber faces in creating it are similar to those faced by Tesla, Mercedes, Ford, and other companies testing the autonomous waters. But one thing Uber won’t have to do is teach its customers to surrender the wheel; as passengers, they’ve done that already. And designers say this single factor could turn Uber’s self-driving fleet into a prolific proving ground for interface design.
Today, Uber finds itself in a position where it makes more sense to develop an interface that behaves like a trusty chauffeur, rather than a co-pilot. It’s a situation that semi-autonomous car manufacturers may not find themselves in for years to come—but Uber has a chance to start experimenting now. “I would have said that would happen over the next couple years,” says Patrick Mankins, a designer at Artefact who specializes in AI-powered systems. “Now, I’m going to say over the next couple months.”
Designing that interface will require a delicate balance. It should present passengers with enough information to put them at ease, but not so much that they feel responsible for the car’s behavior. A town car driver wouldn’t overwhelm you with data and graphics, and Uber’s backseat tablets shouldn’t either.
This is out of step with conventional wisdom surrounding interfaces in semi-autonomous vehicles. For instance: Audi’s autonomous concepts feature a dash-mounted screen that communicates to the driver not only that the car sees the world around it, but also what the car thinks about that world and how it plans to navigate it safely. This information serves to put you, the driver, at ease—but not too at ease. Because this system’s other job is to keep you informed. After all, these systems are only semi-autonomous. There’s no telling when you might have to resume control of the vehicle.
But that system won’t work for Uber. “Passengers going on this ride for the first time are going to wonder, am I expected to monitor each and every move?” says Nandita Mangal, who led the design on Delphi Automotive’s autonomous car concept earlier this year. If you’re Uber, that’s the last thing you want. Not only does this defeat the entire purpose of a personal chauffeur, it’s potentially stress-inducing. A passenger in the rear seat of an autonomous car can’t take control of the wheel, even if she wants to.
Uber’s interface will need to convey a tone of cool, confident decision-making. One hypothetical scenario: Say your car skips your usual exit. A quick, conversational, “we’re taking the next exit to avoid a traffic jam,” Mankins says, is just the right amount of information. His colleague Brad Crane, who worked with Hyundai to develop its near-future vision of autonomous driving, agrees: “It’s chatty enough so that you don’t come to not trust it.”
Make the Theoretical Practical
An interface that strikes this balance successfully becomes free to explore other possibilities, like location-based notifications. “We have to take the initiative to create a design environment where the passenger feels the city is being brought a lot closer to him,” says Mangal. In the semi-autonomous concept Delphi showed at CES earlier this year, for instance, the interface would alert you if you were near a Starbucks and ask if you wanted a latte. That feature has yet to roll out to any semi-autonomous vehicles currently on the market.
From there, things get really wild. “I could imagine the car has five personas,” says usability guru Don Norman (you may know him as the author of The Design of Everyday Things, but he’s also written extensively on the challenges of automated and semi-automated driving). “I can select the calm and unruffled driver leisurely taking his time. Or I could select the scenic-loving driver. Or I can select the Brooklyn taxi cab driver who can get us there as soon as possible.” Each of these behaviors would comply with traffic laws, but the rider could customize the experience.
Or maybe your Uber could just intuit your preferences through data gleaned from your other devices. A car that can analyze your vitals (via, say, your smart watch) could sense that you’re feeling tense and adjust your ride accordingly—slowing down, or taking a quieter route.
In situations like these, the passenger-facing interface wouldn’t be confined to a screen in the backseat. “The car becomes a much larger and riper and richer opportunity space, where you now have an interior that’s limited by some dimensions, but that can be anything,” Mankins says. These back-seat screens will be more than just iPads for the road. They’ll be a new way to interact with a vehicle and with travel—and the whole car interior might change because of it.