Self-driving cars are already deciding who to kill

Posted: Thursday, December 29, 2016


[Image: a car in the rain. Georgii Shipin / Shutterstock]

Autonomous vehicles are already making profound choices about
whose lives matter, according to experts, so we might want to pay
attention.

“Every time the car makes a complex maneuver, it is implicitly
making trade-offs in terms of risks to different parties,” Iyad
Rahwan, an MIT cognitive scientist, wrote in an email.

The most well-known issues in AV ethics are trolley problems—moral
questions dating back to the era of trolleys that ask whose lives
should be sacrificed in an unavoidable crash. For instance, if a
person falls onto the road in front of a fast-moving AV, and the
car can either swerve into a traffic barrier, potentially killing
the passenger, or go straight, potentially killing the
pedestrian, what should it do?

Rahwan and colleagues have studied what humans consider the moral
action in no-win scenarios (you can judge your own cases at
their crowd-sourced project, Moral Machine).


[Image: a basic trolley problem. What should the self-driving car do? Source: moralmachine.mit.edu]


While human-sacrifice scenarios are only hypothetical for now,
Rahwan and others say they would inevitably come up in a world
full of AVs.

Then there are the ethical questions that come up every day. For
instance, how should AVs behave when passing a biker or
pedestrian?

“When you drive down the street, you’re putting everyone around
you at risk,” Ryan Jenkins, a philosophy professor at Cal Poly,
told us. “[W]hen we’re driving past a bicyclist, when we’re
driving past a jogger, we like to give them an extra bit of space
because we think it’s safer; even if we’re very confident that
we’re not about to crash, we also realize that unexpected things
can happen and cause us to swerve, or the biker might fall off
their bike, or the jogger might slip and fall into the street.”

And there’s no easy answer to these questions.

“To truly guarantee a pedestrian’s safety, an AV would have to
slow to a crawl any time a pedestrian is walking nearby on a
sidewalk, in case the pedestrian decided to throw themselves in
front of the vehicle,” Noah Goodall, a scientist with the
Virginia Transportation Research Council, wrote by email.

Human drivers can answer ethical questions big and small using
intuition, but it’s not that simple for artificial intelligence.
AV programmers must either define explicit rules for each of
these situations or rely on general driving rules and hope things
work out.

“On one hand, the algorithms that control the car may have an
explicit set of rules to make moral tradeoffs,” Rahwan wrote. “On
the other hand, the decision made by a car in the case of
unavoidable harm may emerge from the interaction of various
software components, none of which has explicit programming to
handle moral tradeoffs.”
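To make Rahwan’s distinction concrete, here is a minimal sketch, in Python, of what an explicit moral trade-off rule might look like. Everything in it (the Maneuver class, the risk numbers, and the pedestrian_weight parameter) is an assumption invented for illustration, not code from any actual AV system.

```python
# Minimal, purely illustrative sketch of an explicit moral trade-off rule.
# The Maneuver class, risk numbers, and pedestrian_weight are assumptions
# made for this example; they are not drawn from any real AV software.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_risk: float   # estimated probability of harming passengers (0-1)
    pedestrian_risk: float  # estimated probability of harming pedestrians (0-1)

def choose_maneuver(options: list[Maneuver], pedestrian_weight: float = 1.0) -> Maneuver:
    """Pick the maneuver with the lowest weighted total risk.

    The pedestrian_weight parameter is where the moral trade-off becomes
    explicit: values above 1.0 prioritize pedestrians over passengers.
    """
    return min(options, key=lambda m: m.passenger_risk + pedestrian_weight * m.pedestrian_risk)

options = [
    Maneuver("swerve into barrier", passenger_risk=0.6, pedestrian_risk=0.0),
    Maneuver("brake and stay in lane", passenger_risk=0.1, pedestrian_risk=0.4),
]

print(choose_maneuver(options, pedestrian_weight=1.0).name)  # brake and stay in lane
print(choose_maneuver(options, pedestrian_weight=2.0).name)  # swerve into barrier
```

In the second approach Rahwan describes, no such function exists anywhere in the code; the same kind of choice simply falls out of how separate perception, prediction, and planning components happen to interact.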

Even if programmers choose to keep things vague, a pattern of
behavior will be discernible in some instances or in overall
statistics.

“In the words of Harvey Cox, ‘not to decide is to decide,’” Oren
Etzioni, CEO of the Allen Institute for Artificial Intelligence,
wrote in an email.


[Image: the view from a Tesla in self-driving mode. Tesla already has full self-driving hardware on all its cars. Source: Tesla]

How are AV companies actually handling these ethical issues? In
many cases, they’re trying to dodge the question.

Although trolley problems have attracted a lot of attention, the
AV industry has generally avoided comment or been dismissive.
When a Daimler AG executive appeared to take a side this
fall—reportedly telling Car and Driver that Mercedes-Benz AVs
would protect passengers at all costs—the company
issued a strong denial, saying “it is clear that
neither programmers nor automated systems are entitled to weigh
the value of human lives.” Daimler added that trolley problems
weren’t really an issue, as the company “focuses on completely
avoiding dilemma situations by, for example, implementing a
risk-avoiding operating strategy.”

Ethicists, of course, will point out that some risks aren’t
avoidable—brakes fail, and other drivers, bikers, pedestrians, and
animals take sudden and unpredictable actions—so it’s not
unrealistic to think that cars will have to make hard choices.

As for Daimler’s claim that it values all lives equally, we might
assume that means the company doesn’t have any explicit rules
favoring one group over another. Implicit bias, however, is quite
different: even without explicit rules, a car’s emergent behavior
can still end up favoring some parties over others.

Google, meanwhile, has given more detail than most about how it
handles crash optimization—a project with clear ethical
implications.

Back in 2014, Google X founder Sebastian Thrun said the company’s
cars would choose to hit the smaller of two objects: “If it
happens that there is a situation where the car couldn’t escape,
it would go for the smaller thing.”

A 2014 Google patent involving lateral lane
positioning (which may or may not be in use) followed a
similar logic, describing how an AV might move away from a truck
in one lane and closer to a car in another lane, since it’s safer
to crash into a smaller object.
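As a rough illustration of that logic, a lane-positioning rule might bias the car away from whichever neighboring object is larger. The sketch below is an assumption-laden approximation of the idea described in the patent, not Google’s implementation; the size-based severity proxy and the half-meter maximum offset are invented for the example.

```python
# Illustrative size-based lateral-positioning heuristic, loosely modeled on
# the idea described in the 2014 patent. The severity proxy (object size) and
# the 0.5 m maximum offset are invented for this sketch.

def lateral_offset(left_size_m2: float, right_size_m2: float,
                   max_offset_m: float = 0.5) -> float:
    """Return a lateral shift within the lane, in meters.

    Positive values move the car toward the right, negative toward the left.
    The car shifts away from the larger (assumed more dangerous) object.
    """
    total = left_size_m2 + right_size_m2
    if total == 0:
        return 0.0  # nothing nearby on either side, stay centered
    bias = (left_size_m2 - right_size_m2) / total  # ranges from -1.0 to 1.0
    return bias * max_offset_m

# A truck on the left and a small car on the right: shift right, toward the smaller vehicle.
print(lateral_offset(left_size_m2=8.0, right_size_m2=2.0))  # 0.3
```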

Hitting the smaller object is, of course, an ethical decision:
it’s a choice to protect the passengers by minimizing their crash
damage. It could also be seen, though, as shifting risk onto
pedestrians or passengers of small cars. Indeed, as Patrick Lin,
a philosophy professor at Cal Poly, points out in an email, “the
smaller object could be a baby stroller or a small child.”

In March 2016, Google’s AV leader at the time, Chris Urmson,
described more sophisticated rules to the LA Times: “Our cars are
going to try hardest to avoid hitting unprotected road users:
cyclists and pedestrians. Then after that they’re going to try
hard to avoid moving things.”
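Read literally, that ordering amounts to ranking object classes by how hard the planner tries to avoid them. One way to picture it, purely as an assumption-based sketch (the classes and weights below are invented; the quote does not describe an actual cost function), is a per-class weight in a crash-cost term.

```python
# Sketch of Urmson's stated priority ordering expressed as per-class weights
# in a crash-cost term. The classes and numbers are invented for illustration;
# the quote does not describe an actual cost function.

HARM_WEIGHT = {
    "pedestrian": 100.0,      # unprotected road users are avoided hardest
    "cyclist": 100.0,
    "moving_vehicle": 10.0,   # then other moving things
    "static_object": 1.0,     # hitting something static is the least-bad option
}

def crash_cost(object_class: str, impact_probability: float) -> float:
    """Expected harm contribution of one possible collision along a trajectory."""
    return HARM_WEIGHT[object_class] * impact_probability

# A planner comparing trajectories would prefer the one with the lower total cost,
# e.g. a likely scrape against a barrier over a small chance of hitting a cyclist.
print(crash_cost("static_object", 0.9) < crash_cost("cyclist", 0.2))  # True
```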

Compared with aiming for smaller objects, that approach sounds
utilitarian, going out of the way to protect people who might
suffer most in a crash. Of course, it might also be less popular
with buyers of self-driving cars who want the machine to protect
them at all costs.


[Image: Google’s self-driving cars, now called Waymo, are hitting the market soon. Source: waymo.com]


How should we handle AV ethics? There is at least an emerging
consensus that more discussion is needed.

The National Highway Traffic Safety Administration said in
a September report that “manufacturers and other
entities, working cooperatively with regulators and other
stakeholders (e.g., drivers, passengers, and vulnerable road
users) should address these situations to ensure that such
ethical judgments and decisions are made consciously and
intentionally.”

Consumer Watchdog’s Wayne Simpson, a vocal AV skeptic, agreed
with that much at least. In testimony to the NHTSA, he laid out
the stakes: “The public has a right to know when a robot car is
barreling down the street whether it’s prioritizing the life of
the passenger, the driver, or the pedestrian, and what factors it
takes into consideration. If these questions are not answered in
full light of day … corporations will program these cars to limit
their own liability, not to conform with social mores, ethical
customs, or the rule of law.”

The AV industry also appears to be receptive.

Apple—that noted AV company—responded with its own call for a
“thoughtful exploration” that “draw[s] on inputs from industry
leaders, consumers, federal agencies, and other experts.”

Ford echoed the sentiment, saying it was already “engaged in
collaborative work with several major universities and through
industry partnerships” looking at AV ethics.

At the same time, Ford warned about excessive philosophizing. “We
are … trying to approach this from a disciplined perspective
based on good engineering, rather than getting caught in
unrealistic hypotheticals which really cannot be resolved,” wrote
Wayne Bahr, Global Director of Automotive Safety. “One common
problem in any discussion about ethics of HAVs is that the base
assumptions about what a HAV might be capable of are largely
distorted. For example, any question that poses questions about
the worth of one individual person over another assumes that the
vehicle would be able to distinguish people to that level of
detail.”

Bahr’s comments refer to versions of trolley problems that take
into account factors like age, legal status, and social worth when
choosing whom to kill. Those are, of course, things that AVs won’t
be able to discern any time soon.


[Image: a more elaborate trolley problem. Trolley problems can get … complicated. Source: moralmachine.mit.edu]


In the long run, the most ethical decision may be the one that
gets the most AVs on the road. After all, AVs are already much safer than human drivers, and it’s
been projected that they could eliminate 90% of traffic
fatalities.

Getting to that point, however, will require the creation of good
laws and the avoidance of missteps that might trigger dire
controversies or lawsuits. In other words, it will require
ethics.

As Rahwan and colleagues Azim Shariff and Jean-François Bonnefon
wrote in the New York Times: “The sooner
driverless cars are adopted, the more lives will be saved. But
taking seriously the psychological as well as technological
challenges of autonomous vehicles will be necessary in freeing us
from the tedious, wasteful and dangerous system of driving that
we have put up with for more than a century.”
