
Here's What 'Terminator' Gets Wrong About AI


In a lot of science fiction, artificial intelligence systems become truly intelligent — as well as extremely dangerous — once they achieve self-awareness.

Take the "Terminator" series. Before becoming self-aware, Skynet is a powerful tool the US military uses to coordinate national defense; after becoming self-aware, Skynet decides, for some reason, to coordinate the destruction of the human species instead.

But how important is self-awareness, really, in creating an artificial mind on par with ours? According to quantum computing pioneer and Oxford physicist David Deutsch, not very.

In an excellent article in Aeon, Deutsch explores why artificial general intelligence (AGI) must be possible, but hasn't yet been achieved. He calls it AGI to emphasize that he's talking about a mind like ours, one that can think and feel and reason about anything, as opposed to a complex computer program that's very good at one or a few human-like tasks.

Simply put, his argument for why AGI is possible is this: since our brains are made of matter, it must be possible, at least in principle, to recreate the functionality of our brains using another type of matter. (Deutsch provided a rigorous proof of this idea, known as "the universality of computation," in the 1980s.)

As for Skynet, Deutsch writes:

Remember the significance attributed to Skynet’s becoming ‘self-aware’? That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

In other words, the issue is not self-awareness — it's awareness, period. We could build a machine that is "self-aware" in this technical sense, and it wouldn't possess any more human-level intelligence than a computer programmed to play the piano. Viewed this way, self-awareness is just another narrow, arbitrary skill — not the Holy Grail it's made out to be in a lot of science fiction.
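To make that concrete, here's a minimal Python sketch of "self-awareness" in the behavioural sense Deutsch means: a program that can inspect and report true facts about itself. Everything in it (the SelfReporter class and its method names) is my own illustration, not code from Deutsch's article; it's meant only to show how trivial this narrow skill is.

    import os
    import sys

    # A toy program that is "self-aware" in the narrow behavioural sense:
    # it can inspect and report facts about its own state. This is an
    # illustrative sketch, not anything from Deutsch's article.

    class SelfReporter:
        """Answers questions about its own state -- a trivial, narrow skill."""

        def describe_self(self) -> str:
            facts = {
                "my class": type(self).__name__,
                "my location in memory": hex(id(self)),
                "my public methods": [m for m in dir(self) if not m.startswith("_")],
                "process running me": os.getpid(),
                "python version running me": sys.version.split()[0],
            }
            return "\n".join(f"{k}: {v}" for k, v in facts.items())

    if __name__ == "__main__":
        # The program uses introspection to infer facts about itself, much as
        # the mirror test asks an animal to infer facts about itself from a
        # reflection. Nothing here approaches general intelligence.
        print(SelfReporter().describe_self())

Running it prints a handful of true statements the program has inferred about itself: self-awareness in the behavioural sense, and, as Deutsch says, a fairly useless ability as well as a trivial one.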

[Image: HAL 9000]

As Deutsch puts it:

AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves.

So why does this matter? Isn't this just another harmless sci-fi trope? Not exactly.

If we really want to create artificial intelligence, we have to understand what it is we're trying to create. Deutsch persuasively argues that as long as we fixate on self-awareness, we won't focus on understanding how our brains actually work.

What matters, Deutsch argues, is "the ability to create new explanations," to generate theories about the world and all its particulars. By contrast, the idea that self-awareness — let alone real intelligence — will spontaneously emerge from a sufficiently complex computer network is not just science fiction. It's pure fantasy.

READ MORE: Here's What Needs To Happen For AI To Become A Reality

SEE ALSO: Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence
