Sitting in the passenger seat of Google’s self-driving car is a less bizarre experience than sitting in the driving seat, but it’s still unsettling. In the streets of Mountain View, outside the headquarters of X (once Google X; in the post-Alphabet age it has moved out of mum and dad’s house and dropped the prefix), I got the chance to do just that.
It’s partly unsettling because it’s hard not to feel a flicker of anxiety when you look over and notice that the person driving the car hasn’t got their hands on the wheel, even as you head towards a red light on a corner with a huge truck bearing down on you.
It’s partly because the software that drives the car isn’t exactly ready for production yet, so every now and again something weird happens – a jerky overtake, a slight hesitation to squeeze through into an adjacent lane, or, as happened once, the car declaring for no obvious reason that “a slight hiccup” had occurred and that it was going to pull over.
And it’s partly because the future has come a lot sooner than anyone really thought. Even if Google takes far longer to start selling cars than it thinks it will (and senior figures in X tell me that they’re confident something will hit the market before 2020), this technology is going to hit the real world somewhere soon, and it’s going to change everything.
Uber agrees. The taxi company on Thursday announced the latest phase of its own self-driving tests, putting its prototype cars on the roads of Pittsburgh for real riders to hail for the first time. They aren’t quite self-driving – they still have a human driver for backup – but they’re the next step in the company’s drive to replace its “driver-partners” (Uber is notoriously reluctant to grant Uber drivers full employment rights) with a fully automated fleet.
Until a month ago, though, you could be forgiven for thinking the self-driving revolution had already hit. Tesla Motors, the upstart electric car company headed by the charismatic serial entrepreneur Elon Musk, launched its heavily promoted “autopilot” feature to owners of its Model S cars in October 2015.
The feature was labelled a “public beta”, and users were warned to always keep their hands on the steering wheel; but those messages were counteracted by bluster from Musk, who declared in March that year that “We’re now almost able to travel all the way from San Francisco to Seattle without the driver touching any controls at all”. And, of course, the name Autopilot itself does little to suggest to the average user that the car does not, in fact, drive itself.
Those mixed messages led to tragedy in May, when a Tesla driver, Joshua Brown, died in a crash which happened while Autopilot was in charge of the car. As Tesla put it, the crash happened when “Neither Autopilot nor the driver noticed” a tractor trailer crossing the highway in front of the car; the following day, it emerged that Brown may have been watching a movie as his car drove itself.
The problem of semantics
But the question of whether or not Brown had been paying attention to the road misses the more important point: he didn’t think he needed to. It’s a point Tesla itself tacitly admitted in China this week, when it changed the name of its Autopilot system from a phrase that loosely translates to “self-driving” to one that more closely resembles “driver assist”. “We want to highlight to non-English speaking consumers that Autopilot is a driver-assist function,” a Tesla spokesperson told the Wall Street Journal.
Other car companies have similar technology, but don’t quite sell it in the same way – or with the same bluster. Nissan rolled out its ProPilot technology in Japan this July, for instance, marketing it as “autonomous driving” and “intelligent driving”, while BMW’s Driver Assistance systems in its 4 Series can follow the car in front or warn the driver if they veer out of their lane. The semantics of naming are an important consideration for these companies: does their language encourage drivers to think that their attention no longer needs to be focused on the road ahead?
But in X’s experience, modulating the tone of your advertising just isn’t enough. The very existence of almost-but-not-quite-perfect autonomous driving introduces whole new dangers. Nathaniel Fairfield, a principal engineer with X’s self-driving car team who “drove” me round Mountain View, said that people just don’t pay attention to the road, no matter what you tell them.
“You can tell them it’s a bundle of self-driving assist systems, but when the sucker drives them for the next three hours just dandy, they rely on their short term experience with it, and if it’s been doing well, they’ll just relax.
“You can say whatever you want to say, and people are going to interpret it however they interpret it, and at the end of the day you end up with whatever happens.”
X has had its own experience with that fact. In the early days of its self-driving car experiments, it loaned the modified Lexus SUVs which formed the basis of its first cars to employees, to use on their commutes. Even though they had been told to keep focused on the road, and their hands near the wheel – and even though they were in a car owned by their employer, and knew they were being monitored by some of the most all-pervasive telemetry you can put in a vehicle – they still rapidly ended up goofing off in the cars.
To a certain extent, that too can be approached as a simple technology problem. It’s not hard to imagine a driver assist function paired with simple sensors to ensure that the driver’s attention really is focused on the road, just as cars today emit ear-splitting alerts if you try to drive them without wearing your seatbelt. But that’s an engineering problem that Fairfield and the rest of the X team aren’t interested in tackling.
“You’re defining success as pissing off a customer enough that they have to perpetually [pay attention],” he said. “People don’t want to do that! People have better things to do with their time in cars these days” than sit and watch the road – and the ultimate goal of the self-driving car project is to let people actually do that.
Andrew Chatham, another principal engineer who had acted as Fairfield’s bug tracker during the ride, jumped in: “I don’t think we’d even claim that it’s impossible to solve this problem, but it’s not the problem that we want to be working on.”
Of course, the counterpoint is that it’s still much better to be an irritated driver, being forced to keep your eyes on the road while a driver-assist system ensures that you don’t accidentally rear-end the car in front, than it is to be dead. The technology X has today is capable of feats beyond the wildest dreams of automotive safety technicians even a decade ago: even in my 10-minute jaunt round Mountain View, the car clocked a police cruiser by the lights on its roof, navigated a junction governed only by a stop sign, and carried out a tricky lane-merge in the queue for the lights. Those features could be saving lives today, rather than being held for an indeterminate future.
“That’s entirely true,” said Chatham, “and I don’t think we want to call off anyone from what they’re doing. Our intent is not to slag them [off], but the system we have built is aimed at full autonomy, and it is therefore much more complicated than a lot of these other systems. This is not the engineeringly efficient or cost-effective way to build something that just helps you stay in your lane.”
Fairfield, though, added a note of caution to the idea that such systems are even a desirable stepping stone. “To be clear, there’s a very complicated calculus: what are people willing to buy? How’s that going to work out? How much safety do you get? How much is that true safety, or how much is that just lulling people into a false sense of security?
“Or maybe you’re very clear about it, but how are they going to take that, or internalise it, or interpret it – how are they going to use it? And there’s a degree of uncertainty, and definitely room for people of good principle to have disagreements.”
‘It’s imperative that a human be behind the wheel’
Other disagreements pose more existential questions for the whole project, though. John Simpson, of the US advocacy group Consumer Watchdog, has been one of the loudest voices calling on Alphabet to clarify its policy on self-driving cars as a matter of urgency, and particularly to open up about how its system works – and doesn’t work. When one of its test vehicles swiped a bus in February, for instance, the company declined to release the telemetry from inside the car, even as it was otherwise very open about the circumstances of the accident.
Those questions bear down on Alphabet, but are ultimately a call for canny regulators to work with the company in negotiating rules for the new normal. A wild west where self-driving car companies set the rules of engagement – even in response to successful campaigning for openness – isn’t a desirable state of affairs either for the companies, which prefer to operate in a realm of certainty, or for drivers and passengers, who deserve more in the case of accidents than the obfuscatory statements released by Tesla in the wake of its first fatal crash.
Simpson is also vehemently against the idea of a fully automatic car, taking the exact opposite stance to X. “It’s imperative that a human be behind the wheel capable of taking control when necessary. Self-driving robot cars simply aren’t ready to safely manage too many routine traffic situations without human intervention,” he said. “What the disengagement reports show is that there are many everyday routine traffic situations with which the self-driving robot cars simply can’t cope.” Which is, in a way, obviously true, and why X’s car remains a research project rather than something you can buy today.
The question is how long that will remain true for. “The cars are really, really capable,” says Fairfield, “and the rate at which they’re getting better is actually increasing.”
When will it be good enough that they, at least, are happy with it hitting the streets without a fallback? “Not too long.”