In his Nicomachean Ethics, Aristotle observes that “all knowledge and every
pursuit aims at some good,” but then
continues, “What then do we mean by the good?” That, in essence, encapsulates
the ethical dilemma. We all agree that we should be good and just, but it’s
much harder to decide what that entails.
Since Aristotle’s time, the
questions he raised have been continually discussed and debated. From the works
of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail
parties and late-night dorm room bull sessions, the issues are endlessly mulled
over and argued about, without ever reaching a satisfying conclusion.
Today, as we enter a
“cognitive era” of thinking machines, the problem of what should guide our
actions is gaining newfound importance. If we find it so difficult to
articulate the principles by which a person should act justly and wisely, then how
are we to encode them within the artificial intelligences we are creating? It
is a question we will need to answer soon.
Designing a Learning Environment
Every parent worries about
what influences their children are exposed to. What TV shows are they watching?
What video games are they playing? Are they hanging out with the wrong crowd at
school? We try not to overly shelter our kids because we want them to learn
about the world, but we don’t want to expose them to too much before they have
the maturity to process it.
In artificial intelligence, this set of influences is called a “machine
learning corpus.” For example, if you want to teach an algorithm to recognize
cats, you expose it to thousands of pictures of cats and things that are not
cats. Eventually, it figures out how to tell the difference between, say, a cat
and a dog. Much as with human beings, it is through learning from these
experiences that algorithms become useful.
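To make the idea concrete, here is a minimal sketch of how a labeled corpus drives learning. It uses scikit-learn with synthetic stand-in features rather than real photographs, so every number and name in it is illustrative rather than a description of any production vision system.

```python
# A minimal sketch of learning from a corpus, assuming each image has already
# been reduced to a numeric feature vector. The features here are synthetic
# stand-ins; a real system would learn from pixels with a deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cat_features = rng.normal(loc=1.0, size=(500, 64))     # stand-in for "cat" examples
other_features = rng.normal(loc=-1.0, size=(500, 64))  # stand-in for "not cat" examples
X = np.vstack([cat_features, other_features])
y = np.array([1] * 500 + [0] * 500)                    # 1 = cat, 0 = not cat

# The model has no notion of "cat" beyond the examples it is shown:
# whatever biases are in the corpus end up in the classifier.
model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```

Swap in a different corpus and the same code learns something different, which is exactly why the choice of training material matters so much.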
However, the process can go horribly
awry, as in the case of Microsoft’s Tay, a
Twitter bot that the company unleashed on the microblogging platform. In under
a day, Tay went from being friendly and casual (“Humans are super cool”) to
downright scary (“Hitler was right and I hate Jews”). It was profoundly
disturbing.
Francesca Rossi, an AI
researcher at IBM, points out that we often encode principles regarding
influences into societal norms, such as what age a child needs to be to watch
an R-rated movie or whether they should learn evolution in school. “We need to
decide to what extent the legal principles that we use to regulate humans can
be used for machines,” she told me.
However, in some cases algorithms can alert us to bias in our society that we might not have been aware of,
such as when we Google “grandma” and
see only white faces. “There is a great potential for machines to alert us to
bias,” Rossi notes. “We need to not only train our algorithms but also be open
to the possibility that they can teach us about ourselves.”
Unraveling Ethical Dilemmas
One thought experiment that has puzzled
ethicists for decades is the trolley problem. Imagine you see a trolley barreling down the tracks
and it’s about to run over five people. The only way to save them is to pull a
lever to switch the trolley to a different set of tracks, but if you do that,
one person standing on the other tracks will be killed. What should you
do?
Ethical systems based on moral
principles, such as Kant’s Categorical Imperative (act only according to that maxim whereby you
can, at the same time, will that it should become a universal law) or Asimov’s first law (a robot may not injure a human being or,
through inaction, allow a human being to come to harm), are thoroughly unhelpful
here.
An alternative would be to adopt
the utilitarian principle and simply do whatever results in the most good or the
least harm. Then it would be clear that you should kill the one person to save
the five. However, the idea of killing somebody intentionally is troublesome,
to say the least. While we do apply the principle in some limited cases, such
as a Secret Service officer’s duty to protect the president,
those are rare exceptions.
The rise of artificial intelligence is forcing us to take abstract
ethical dilemmas much more seriously because we need to code in moral
principles concretely. Should a self-driving car risk killing its passenger to
save a pedestrian? To what extent should a drone take into account the risk of
collateral damage when killing a terrorist? Should robots make life-or-death
decisions about humans at all? We will have to make concrete decisions about
what we will leave up to humans and what we will encode into software.
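To see what “coding in moral principles concretely” could mean in practice, here is a hypothetical sketch contrasting a strict rule with a utilitarian weighting. The scenario, the harm numbers, and the function names are all invented for illustration; no real autonomous system works this simply.

```python
# A hypothetical contrast between a strict rule and a utilitarian weighting.
# The outcomes, harm estimates, and rule are invented for the sake of the sketch.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_harm: float         # e.g., expected number of serious injuries
    violates_strict_rule: bool   # e.g., "never actively endanger the passenger"

def choose(outcomes, mode="strict"):
    if mode == "strict":
        # Rule-based flavor: discard anything that breaks the hard rule,
        # then minimize harm among what is left.
        allowed = [o for o in outcomes if not o.violates_strict_rule] or outcomes
        return min(allowed, key=lambda o: o.expected_harm)
    # Utilitarian flavor: minimize expected harm, full stop.
    return min(outcomes, key=lambda o: o.expected_harm)

options = [
    Outcome("swerve, risking the passenger", expected_harm=0.3, violates_strict_rule=True),
    Outcome("brake hard, risking the pedestrian", expected_harm=0.7, violates_strict_rule=False),
]
print(choose(options, mode="strict").description)       # brake hard, risking the pedestrian
print(choose(options, mode="utilitarian").description)  # swerve, risking the passenger
```

Either branch is a concrete design decision: the strict version encodes an Asimov-style rule, the utilitarian version encodes a harm calculus, and someone has to choose which one ships.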
These are tough questions, but IBM’s
Rossi points out that machines may be able to help us with them. Aristotle’s
teachings, often referred to as virtue ethics, emphasize that we need to learn the meaning of
ethical virtues, such as wisdom, justice, and prudence. So it is possible that
a powerful machine learning system could provide us with new insights.
Cultural Norms vs. Moral Values
Another issue we will have to
contend with is deciding not only what ethical principles to encode in
artificial intelligences but also how they are coded. As noted above, for
the most part, “Thou shalt not kill” is a strict principle. But in a few
rare cases, such as for the Secret Service or a soldier, it’s more like a
preference that is greatly affected by context.
There is often much confusion about what is truly a
moral principle and what is merely a cultural norm. In many cases, as with
LGBT rights, societal judgments with respect to morality change over time. In
others, such as teaching creationism in schools or allowing the sale of
alcohol, we find it reasonable to let different communities make their own
choices.
What makes one thing a moral value and
another a cultural norm? Well, that’s a tough question for even the most-lauded
human ethicists, but we will need to code those decisions into our algorithms.
In some cases, there will be strict principles; in others, merely preferences
based on context. For some tasks, algorithms will need to be coded differently
according to what jurisdiction they operate in.
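As a hypothetical illustration of that last point, an algorithm might consult a jurisdiction-keyed policy table that marks some actions as strictly forbidden and others as context-weighted preferences. The region names, actions, and weights below are invented.

```python
# A hypothetical sketch of jurisdiction-dependent coding: the same algorithm
# consults a policy table keyed by where it operates. Regions, actions, and
# weights are invented for illustration only.
from typing import Optional

POLICY = {
    "region_A": {"forbidden": {"sell_alcohol"}, "preference_penalty": {}},
    "region_B": {"forbidden": set(), "preference_penalty": {"sell_alcohol": 0.5}},
}

def score_action(action: str, base_utility: float, region: str) -> Optional[float]:
    """Return None if the action is strictly forbidden, else a context-adjusted score."""
    policy = POLICY[region]
    if action in policy["forbidden"]:
        return None                               # strict principle: never do it
    penalty = policy["preference_penalty"].get(action, 0.0)
    return base_utility - penalty                 # mere preference: weighed against context

print(score_action("sell_alcohol", base_utility=1.0, region="region_A"))  # None
print(score_action("sell_alcohol", base_utility=1.0, region="region_B"))  # 0.5
```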
The issue becomes especially thorny when algorithms have to make
decisions according to conflicting professional norms, such as in medical
care. How much should cost be taken into account in making medical
decisions? Should insurance companies have a say in how the algorithms are
coded?
This is not, of course,
a completely new problem. For example, firms operating in the U.S. need to
abide by GAAP accounting standards, which rely on strict rules, while those
operating in Europe follow IFRS accounting standards, which are driven by broad
principles. We will likely end up with a similar situation with regard to many
ethical principles in artificial intelligences.
Setting a Higher Standard
Most AI experts I’ve spoken to think
that we will need to set higher moral standards for artificial intelligences
than we do for humans. We do not, as a matter of course, expect people to
supply a list of influences and an accounting of their logic for every
decision they make, unless something goes horribly wrong. But we will require
such transparency from machines.
“With another human, we often assume that they have similar common-sense
reasoning capabilities and ethical standards. That’s not true of machines, so
we need to hold them to a higher standard,” Rossi says. Josh Sutton, global
head of data and artificial intelligence at Publicis.Sapient, agrees and argues that both the logical trail and
the learning corpus that lead to machine decisions need to be made available
for examination.
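One hypothetical way to provide that kind of trail, sketched below, is to log every decision together with its inputs, the model and corpus versions behind it, and a stated rationale. All of the field names and the example decision are invented for illustration.

```python
# A hypothetical sketch of an auditable decision trail: each decision is logged
# with its inputs, the model/corpus versions behind it, and the stated reason,
# so an auditor can later reconstruct why it was made.
import json
import time

def log_decision(decision: str, inputs: dict, model_version: str,
                 corpus_id: str, rationale: str, logfile: str = "decisions.log") -> None:
    record = {
        "timestamp": time.time(),
        "decision": decision,
        "inputs": inputs,               # what the system saw
        "model_version": model_version, # which model made the call
        "corpus_id": corpus_id,         # which learning corpus trained that model
        "rationale": rationale,         # human-readable reason for the choice
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: an entirely made-up screening decision, for illustration only.
log_decision("deny", {"income": 42000, "debt_ratio": 0.61},
             model_version="v1.3", corpus_id="corpus-2016-09",
             rationale="debt ratio above threshold 0.5")
```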
However, Sutton notes that we might also
opt for less transparency in some situations. For example, we may feel more
comfortable with algorithms that make use of our behavioral and geolocation
data but don’t let humans access that data. Humans, after all, can always be
tempted. Machines are better at following strict parameters.
Clearly, these issues need further
thought and discussion. Major industry players, such as Google, IBM, Amazon,
and Facebook, recently set up a
partnership to
create an open platform between leading AI companies and stakeholders in
academia, government, and industry to advance understanding and promote best
practices. Yet that is merely a starting point.
Given how pervasive artificial intelligence
is set to become in the near future, the responsibility rests with society as a
whole. Put simply, we need to take the standards by which artificial
intelligences will operate just as seriously as those that govern how our
political systems operate and how our children are educated.
It is a responsibility that we cannot
shirk.