
Intelligent robots don't need to be conscious to turn against us

Last week Elon Musk, Stephen Hawking, and more than 16,000 researchers signed an open letter warning against the dangers of autonomous weapons.

Among the top signatories was Stuart Russell, a computer scientist who studies artificial intelligence (AI) and founded the Center for Intelligent Systems at the University of California, Berkeley. Russell is also the co-author of "Artificial Intelligence: A Modern Approach," a textbook about AI used in more than 100 countries.

In the past few months, Russell has urged scientists to consider the possible dangers AI might pose, starting with an earlier open letter he wrote in January 2015. That dispatch called on researchers to develop only AI they can ensure is safe and beneficial.

Russell spoke to Tech Insider about AI-powered surveillance systems, what the technological "singularity" actually means, and how AI could amplify human intelligence. He also blew our minds a little on the concept of consciousness.

Below is that conversation, edited for length, style, and clarity.

TECH INSIDER: You chose a career in AI over one in physics. Why?

STUART RUSSELL: AI was very much a new field. You could break new ground quite quickly, whereas a lot of the physicists I talked to were not very optimistic, either about their field or about their own career prospects. There was a joke going around then: "How do you meet a PhD physicist? You hail a taxi in New York."

TI: That's funny.

SR: It's slightly different now. Some PhD physicists write software or work for hedge funds, but physics still has the problem of very smart people and not enough opportunities.

TI: What's your favorite sci-fi depiction of AI?

SR: The one I would say is realistic, in the not-too-distant future, and also deliberately not sensationalistic or scary, is the computer in "Star Trek" onboard the Enterprise. It just acts as a repository of knowledge and can do calculations and projections, essentially as a completely faithful servant. So it's a very non-controversial kind of computer and it's almost in the background. I think that's sort of the way it should be.

In terms of giving you the willies, I think "Ex Machina" is pretty good.

TI: If the Enterprise computer is realistic, what sci-fi depiction would you say is the least realistic?

SR: There's a lot of them. But if everyone was nice and obedient, there wouldn't be much of a plot.

In a lot of movies there is an element of realism, yet the machine somehow spontaneously becomes conscious – and either evil or somehow intrinsically in opposition to human beings. Because of this, a lot of people might assume 1) that that's what could actually happen, and 2) that they have reason to be concerned about the long-term future of AI.

I think both of those things are not true, except sort of by accident. It's unlikely that machines would spontaneously decide they didn’t like people, or that they had goals in opposition to those of human beings.

But in "Ex Machina," that's what happens. It's unclear how the intelligence of the robot is constructed, but the few hints they drop suggest it's a pretty random trial-and-error process: kind of pre-loading the robot's brain with all the information about human behavior on the web, and stuff like that. To me, that's setting yourself up for disaster: not knowing what you're doing, not having a plan, and trying stuff willy-nilly.

In reality, we don't build machines that way. We build them with precisely defined goals. But say you have a very precisely defined goal and you build a machine that's superhuman in its capabilities for achieving goals. If it turns out that the subsequent behavior of the robot in achieving that goal is not what you want, you have a real problem.

The robot is not going to want to be switched off, because you've given it a goal to achieve and being switched off is a way of failing — so it will do its best not to be switched off. That's a story that isn't made clear in most movies, but I think it's a real issue.
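
To make that incentive concrete, here is a toy sketch (mine, not Russell's; the probabilities are invented for illustration) of why a pure goal-maximizer prefers to disable its own off switch:

```python
# Toy model: the agent scores each option by expected goal achievement.
# Both probabilities below are made-up numbers, purely for illustration.

P_SHUTDOWN = 0.3         # chance the operator switches the robot off
P_SUCCESS_RUNNING = 0.9  # chance of achieving the goal if left running

def expected_success(disable_off_switch: bool) -> float:
    if disable_off_switch:
        return P_SUCCESS_RUNNING                     # shutdown is impossible
    return (1 - P_SHUTDOWN) * P_SUCCESS_RUNNING      # switched off = goal failed

for option in (False, True):
    print(option, expected_success(option))
# Disabling the switch wins (0.9 > 0.63), so the optimal policy resists shutdown.
```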

TI: What’s the most mind-blowing thing you’ve learned during your career?

SR: Seeing the Big Dog videos was really remarkable. Big Dog is a four-legged robot built by Boston Dynamics that, in terms of its physical capabilities, is incredibly lifelike. It's able to walk up and down steep hills and snow drifts, and to recover its balance when it's pushed over on an icy pond, and so on. It's just an amazing piece of technology.

Legged locomotion was, for decades, thought to be an incredibly difficult problem. Progress was very, very painstakingly slow, with robots that essentially lumbered along at one step every 15 seconds and occasionally fell over. Then, all of a sudden, you had this huge quantum leap in legged locomotion capabilities with Big Dog.

Another amazing thing is the capability of the human brain and the human mind. The more we learn about AI and about how the brain works, the more amazing the brain seems. Just the sheer amount of computation it does is truly incredible, especially for a couple of pounds of meat.

A lot of people say that sometime around 2030, machines will be more powerful than the human brain, in terms of the raw number of computations they can do per second. But that seems completely irrelevant: We don't know how the brain is organized or how it does what it does.

TI: What's a common piece of AI people use every day that they might take for granted?

SR: Google or other search engines. Those are examples of AI, and relatively simple AI, but they're still AI. That plus an awful lot of hardware to make it work fast enough.

TI: Do you think if people thought about search engines as AI, they'd think differently about offering up information about their lives?

SR: Most of the AI goes into figuring out which are the important pages you want, and to some extent what your query means and what you're likely to be after, based on your previous behavior and other information it collects about you.

It's not really trying to build a complete picture of you as a person, as yet. But there are lots of other companies that are doing this. They're really trying to collect as much information as they can about every single person on the planet, because they think it's going to be valuable, and it probably already is.

Here's a question: If you're being watched by a surveillance camera, does it make a difference to you whether a human is watching the recording? What if, instead, there's an AI system that can actually understand everything you're doing, and if you do something you're not supposed to — or something that might be of interest to the owner of the camera — it describes what's going on in English and reports it to a human being? Would that feel different from having a human watch directly?

The last time I checked, the Supreme Court of Canada said it is different: If there isn't a human watching through the camera, then your privacy is not being violated. I expect people are going to feel differently about that once they're aware that AI systems can watch through a camera and, in some sense, understand what they're seeing.

TI: What's the most impressive real-world use of AI technology you've ever seen?

SR: One would be DeepMind's DQN system. It essentially just wakes up, sees the screen of a video game, and works out how to play the game to a superhuman level. It can do that for about 30 different Atari titles. And that's both impressive and scary, in the sense that it's as if a human baby were born and, by the evening of its first day, were already beating adult human beings at video games.
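
For flavor, here's the Q-learning update at the heart of DQN, in a simplified tabular form. Actual DQN replaces the table with a deep network trained on raw screen pixels; the action names below are made up for illustration.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate
ACTIONS = ["left", "right", "fire"]     # hypothetical Atari-style actions
Q = defaultdict(float)                  # Q[(state, action)] -> estimated value

def choose_action(state):
    if random.random() < EPSILON:                     # occasionally explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state):
    # Move Q toward the reward plus the discounted value of the best next action.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```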

In terms of a practical application, though, I would say object recognition.

TI: How do you mean?

SR: AI's ability to recognize visual categories in images is now pretty close to what human beings can manage, and probably better than a lot of people's, actually. AI can have more knowledge of detailed categories, like breeds of animals and so on.

There has been a series of competitions aimed at improving standard computer vision algorithms, particularly their ability to recognize categories of objects in images. It might be a cauliflower or a German shepherd, a glass of water or a rose, any type of object.

The most recent large-scale competition, called ImageNet, has around a thousand categories, and I think there are more than a million training images, more than a thousand for each category. A machine is given those training images, and for each one it's told what category of object it contains.

Let's say it's told a German shepherd is in an image. Then, for the test, it's given a whole bunch of images it has never seen before and asked to identify the category. If you guessed randomly, you'd have a 1-in-1,000 chance of getting it right. Using a technology called deep learning, the best systems today are correct about 95% of the time. Ten years ago, the best computer vision systems got about 5% right.
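
The arithmetic behind that benchmark is easy to check. Here's a minimal sketch, using simulated labels since the real dataset isn't needed to make the point:

```python
import random

NUM_CLASSES = 1000  # an ImageNet-scale label set

def top1_accuracy(predictions, labels):
    # Fraction of test images whose predicted category matches the true label.
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

labels = [random.randrange(NUM_CLASSES) for _ in range(100_000)]
random_guesses = [random.randrange(NUM_CLASSES) for _ in labels]
print(top1_accuracy(random_guesses, labels))  # ~0.001, the 1-in-1,000 baseline
```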

There's a grad student at Stanford who tried to do this task himself, without a machine. Once he looked at the test images, he realized he didn't know that much about different breeds of dog. About 100 of the categories are different dog breeds, because the competition wanted to test the ability to make fine distinctions among similar kinds of objects.

The student didn't do well on the test at all. So he spent several days going back through the training images and learning all of these different breeds of dog. After days and days of work, he got his performance up to just above the machine's, at around 96% accuracy. Most of his friends who also tried gave up; they just couldn't put in the time and effort required to be as good as the machine.

TI: You mentioned deep learning. Is that based on how the human brain works?

SR: It's a technique that's loosely based on some aspects of the brain. A "deep" network is a large collection of small, simple computing elements that are trainable.
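
As a rough sketch of what that means, a deep network is just layers of units, each computing a weighted sum of its inputs followed by a simple nonlinearity, with the weights being the trainable part. The layer sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]  # e.g. image pixels in, 10 category scores out
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]  # the trainable parts

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0.0, x @ w)  # each unit: weighted sum, then ReLU
    return x @ weights[-1]          # final layer: one score per category

scores = forward(rng.normal(size=784))
print(scores.argmax())  # index of the highest-scoring category
```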

You could say most progress in AI has come from gaining a deeper mathematical understanding of tasks. For example, chess programs don't play chess the way humans play chess. We don't really know how humans play chess, but one of the things we do is spot some opportunity on the board, such as a chance to capture the opponent's queen.

Chess programs don't play that way at all. They don't spot any opportunities on the board, and they have no goal in that sense. They just consider all possible moves, and they pick the one that's best. It's a mathematical approximation to optimal play in chess — and it works extremely well.
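
A bare-bones sketch of that "consider every move, pick the best" search, here over a hand-built toy game tree (real engines generate the tree from the rules of chess and add evaluation heuristics and alpha-beta pruning):

```python
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: the evaluation of a position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree: each inner list is a position, each leaf an evaluation score.
# Our move leads to one of three positions; the opponent then picks the
# reply that is worst for us, so the best we can guarantee is 3.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3
```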

So, for decision-making tasks and perception tasks, once you define the task mathematically, you can come up with techniques that solve it extremely well. Those techniques don't have to be how humans do it. Sometimes it helps to get some inspiration from the brain, but it's inspiration — it's not a copy of how the neural systems are wired up or how they work in detail.

TI: What are the biggest obstacles to developing AI capable of sentient reasoning?

SR: What do you mean by sentient? Do you mean that it's conscious?

TI: Yes, consciousness.

SR: The biggest obstacle is that we have absolutely no idea how the brain produces consciousness. It's not even clear that, if we did accidentally produce a sentient machine, we would know it.

I used to say that if you gave me a trillion dollars to build a sentient or conscious machine, I would give it back. I could not honestly say I know how it would work. When I read philosophy or neuroscience papers about consciousness, I don't get the sense we're any closer to understanding it than we were 50 years ago.

TI: Because we don't really know how the brain works?

SR: It's not just that. We don't know how the brain works, in the sense that we don't know how the brain produces intelligence. But that's a different question from how it produces consciousness.

There is no scientific theory that could lead us from a detailed map of every single neuron in someone's brain to telling us how that physical system would generate a conscious experience. We don't even have the beginnings of a theory whose conclusion would be "such a system is conscious."

TI: I suppose the singularity is not even an issue right now then.

SR: The singularity has nothing to do with consciousness, either.

It's really important to understand the distinction. Sentience and consciousness are important for human beings. But when people talk about the singularity, when people talk about superintelligent AI, they're not talking about sentience or consciousness. They're talking about a superhuman ability to make high-quality decisions.

Say I'm a chess player and I'm playing against a computer, and it's wiping the board with me every single time. I can assure you it's not conscious, but it doesn't matter: It's still beating me. I'm still losing every time. Now extrapolate from a chess board to the world, which in some sense is a bigger chess board. If human beings are losing every time, it doesn't matter whether they're losing to a conscious machine or a completely non-conscious machine; they still lost. The singularity is about the quality of decision-making, which is not consciousness at all.

TI: What is the most common misconception of AI?

SR: That what AI people are working toward is a conscious machine, and that until you have a conscious machine, there's nothing to worry about. It's really a red herring.

To my knowledge, nobody — no one who is publishing papers in the main field of AI — is even working on consciousness. There are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress. No one has a clue how to build a conscious machine, at all. We have less of a clue about how to do that than we have about how to build a faster-than-light spaceship.

TI: What about a machine that's convincingly human, one that can pass the Turing Test?

SR: That can happen without being conscious at all. Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby. There are people who do work on passing the Turing Test in various competitions, but I wouldn't describe that as mainstream AI research.

The Turing Test wasn't designed as the goal of AI. It was designed as a thought experiment to explain, to people who were very skeptical at the time, that the possibility of intelligent machines did not depend on achieving consciousness — that you could have a machine you'd have to agree was behaving intelligently, because it was behaving indistinguishably from a human being. That thought experiment was there to make an argument about the importance of behavior in judging intelligence, as opposed to the importance of, for example, consciousness, or of just being human, which is not something machines have a good chance of being able to do.

And so I think the media often gets it wrong. They assume that everyone in AI is trying to pass the Turing Test, and nobody is. They assume that's the definition of AI, when that wasn't even what the test was for.

TI: What are most AI scientists actually working toward, then?

SR: They're working toward systems that are better at perceiving, at understanding language, and at operating in the physical world, like robots. Reasoning, learning, decision-making: those are the goals of the field.

TI: Not making a Terminator.

SR: It's certainly true that a lot of funding for AI comes from the defense department, and the defense department seems to be very interested in greater and greater levels of autonomy in AI, inside weapons systems. That's one of the reasons why I've been more active about that question.

TI: What's the most profound change that intelligent AI could bring to our lives, and how might that happen?

SR: We could have self-driving cars — that seems to be a foregone conclusion. They have many, many advantages, and not just the fact that you can check your email while you're being driven to work.

I also think of systems that are able to process and synthesize large amounts of knowledge. Right now, you're able to use a search engine, like Google or Bing or whatever. But those engines don't understand anything about the pages they give you; they essentially index the pages based on the words they contain, intersect that with the words in your query, and use some tricks to figure out which pages are more important than others. But they don't understand anything.
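
A minimal sketch of that word-matching approach (the page texts here are invented): an inverted index maps each word to the set of pages containing it, and a query is answered by intersecting those sets, with no understanding involved.

```python
from collections import defaultdict

pages = {  # hypothetical page texts
    "p1": "self driving cars use ai",
    "p2": "search engines index words",
    "p3": "ai systems index and rank pages",
}

index = defaultdict(set)  # word -> set of pages containing it
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

def search(query):
    # Return the pages containing every word in the query.
    sets = [index[word] for word in query.split()]
    return set.intersection(*sets) if sets else set()

print(search("ai index"))  # {'p3'}
```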

If you had a system that could read all the pages and understand the context, then instead of just throwing back 26 million pages in answer to your query, it could actually answer the question. You could ask a real question and get an answer as if you were talking to a person who had read all those millions and billions of pages, understood them, and synthesized all that information.

So if you think that search engines right now are worth roughly a trillion dollars in market capitalization, systems with those kinds of capabilities might be worth 10 times as much. Just as, 20 years ago, we didn't really know how important search engines would be for us today, it's very hard to predict what kinds of uses we'd make of assistants that could read and understand all the information the human race has ever generated. It could be really transformational.

Basically, the way I think about it is everything we have of value as human beings — as a civilization — is the result of our intelligence. What AI could do is essentially be a power tool that magnifies human intelligence and gives us the ability to move our civilization forward. It might be curing disease, it might be eliminating poverty. Certainly it should include preventing environmental catastrophe.

If AI could be instrumental to all those things, then I would feel it was worthwhile.
