Artificial Intelligence: Good and Evil All at Once, Just Like Its Creators

By Philip Segal on March 16, 2019

Have you ever noticed that artificial intelligence always seems much more frightening when people write about what it will become, but more like imperfect, bumbling software when they write about it in the present tense?

You get one of each in this morning’s Wall Street Journal. The paper paints a horrific picture of what the ruthless secret police of the world’s dictatorships will be able to do with AI in “The Autocrat’s New Tool Kit,” including facial recognition to track behavior more efficiently and to target specific groups with propaganda.


But then see the Journal’s coverage of how social-media companies have struggled to block violent content about this week’s terrorist attack on two mosques in New Zealand. With all of their computing power and some of the world’s smartest programmers and mathematicians, Facebook and YouTube allowed the killings to be streamed live on the internet. It took an old-fashioned phone call from the New Zealand police to tell them to take the live evildoing down. Just as the New York Times or CNBC would never put such a thing on their websites, neither should Facebook or YouTube.

Wouldn’t you think that technology that could precisely target where to send the most effective propaganda could distinguish between an extremely violent film and extremely violent reality? I would. After all, it’s like nothing for these sites to have indexed fingerprints of all the movie clips already uploaded onto their systems. If facial recognition works on a billion Chinese people, why not on the thousands of known film actors floating up there on the YouTube cloud? If a video isn’t a film you already know and there are lots of gunshots, it should be flagged for review.
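To make that rule concrete, here is a minimal sketch in Python of the flag-for-review logic described above. The fingerprint index and the gunshot detector are hypothetical stand-ins for whatever matching and classification these platforms actually run; nothing here is a real YouTube or Facebook API.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Exact-hash stand-in for a perceptual video fingerprint."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_flag_for_review(video_bytes: bytes,
                           known_film_index: set,
                           detect_gunshots) -> bool:
    """Flag unknown footage that appears to contain gunfire.

    known_film_index: fingerprints of already-indexed movie clips (hypothetical).
    detect_gunshots: hypothetical classifier returning True if gunfire is heard.
    """
    if fingerprint(video_bytes) in known_film_index:
        return False  # a film we already know: no human review needed
    return detect_gunshots(video_bytes)  # unknown video with gunshots -> review
```

A real system would use perceptual hashing rather than an exact byte hash, so that re-encoded or cropped copies still match, but the decision rule is the same: known film, let it through; unknown footage with gunfire, send it to a person.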

Why is this so hard? For one thing, the computing power the companies need doesn’t exist yet. “The sheer volume of material posted by [YouTube’s] billions of users, along with the difficulty in evaluating which videos cross the line, has created a minefield for the companies,” the Journal said.

It’s that minefield that disturbs me. A minefield dotted with difficulties about whether to show mass murder in real time? What would be the harm of having a person look at any video that features a mass killing before it’s cleared to air? If the computers can’t figure out what to do with such material, let a person look at it.

What is so frightening about AI is not the computing power or the uses the world can find for it, but the abdication of self-control and ethical judgment by the people using it.

I want the police in my country to have guns and to use them on criminals who are about to kill innocent people. I don’t want police states shooting peaceful demonstrators. I’m happy to have police in the U.S. use facial recognition if it will help stop a person from blowing up the stadium where the Super Bowl is being held. But I would not want cameras at every intersection automatically tracking my every movement.

Guns are neither smart nor stupid. They are artificial power that increases the harm an unarmed person could otherwise inflict. Guns are essential in maintaining freedom but can suppress freedom too.

Same for AI. There are lots of wonderful applications for it. Every bit of software in use today was called AI before it came into everyday use; once it did, we just called it software.

What sets good AI apart from bad is the way people use it. Streaming on YouTube can be a wonderful thing. But just as we need political accountability to make sure the guns our armies and police have aren’t abused, we need the people at YouTube to control their technology in a responsible way.

The AI at Facebook and YouTube isn’t dumb. Dumb are the people who trusted too readily that the tool could decide for itself what the right call would be when the horrors from Christchurch began to be uploaded.
