From: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Why It’s So Hard - Nothing will make you appreciate human intelligence like
learning about how unbelievably challenging it is to try to create a computer
as smart as we are. Building skyscrapers, putting humans in space, figuring out
the details of how the Big Bang went down—all far easier than understanding our
own brain or how to make something as cool as it. As of now, the human brain is
the most complex object in the known universe.
What’s interesting is
that the hard parts of trying to
build an AGI (a computer as smart as humans in general,
not just at one narrow specialty) are
not intuitively what you’d think they are. Build a computer that can
multiply two ten-digit numbers in a split second—incredibly easy. Build one
that can look at a dog and answer whether it’s a dog or a cat—spectacularly
difficult. Make AI that can beat any human in chess? Done. Make one that can
read a paragraph from a six-year-old’s picture book and not just recognize the
words but understand the meaning of
them? Google is currently spending billions of dollars trying to do it. Hard things—like
calculus, financial market strategy, and language translation—are
mind-numbingly easy for a computer, while easy things—like vision, motion,
movement, and perception—are insanely hard for it. Or, as computer scientist
Donald Knuth puts it, “AI has by now succeeded in doing essentially everything
that requires ‘thinking’ but has failed to do most of what people and animals
do ‘without thinking.’”
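Just to make that asymmetry concrete, here is the "hard" half of the comparison in a few lines of Python (the "easy" half, telling a dog from a cat, has no comparably short program):

```python
import time

# Two arbitrary ten-digit numbers: the kind of "hard" problem
# a computer doesn't even notice.
a = 9_382_017_465
b = 1_740_295_836

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

print(f"{a} x {b} = {product}")
print(f"computed in {elapsed * 1e6:.2f} microseconds")
```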
What you quickly
realize when you think about this is that those things that seem easy to us are
actually unbelievably complicated, and they only seem easy because those skills
have been optimized in us (and most animals) by hundreds of millions of years of
animal evolution. When you reach your hand up toward an object, the muscles,
tendons, and bones in your shoulder, elbow, and wrist instantly perform a long
series of physics operations, in conjunction with your eyes, to allow you to
move your hand in a straight line through three dimensions. It seems effortless
to you because you have perfected software in your brain for doing it. Same
idea goes for why it’s not that malware is dumb for not being able to figure
out the slanty word recognition test when you sign up for a new account on a
site—it’s that your brain is super impressive for being able to.
On the other hand,
multiplying big numbers or playing chess are new activities for biological
creatures and we haven’t had any time to evolve a proficiency at them, so a
computer doesn’t need to work too hard to beat us. Think about it—which would
you rather do, build a program that could multiply big numbers or one that
could understand the essence of a B well enough that you could show it a B
in any one of thousands of unpredictable fonts or handwriting and it could
instantly know it was a B? So how do we
get there?
First Key to Creating AGI: Increasing
Computational Power -
One thing that
definitely needs to happen for AGI to be a possibility is an increase in the
power of computer hardware. If an AI system is going to be as intelligent as
the brain, it’ll need to equal the brain’s raw computing capacity. One way to express
this capacity is in the total calculations per second (cps) the brain could
manage, and you could come to this number by figuring out the maximum cps of
each structure in the brain and then adding them all together.
Ray Kurzweil came up
with a shortcut by taking someone’s professional estimate for the cps of one
structure and that structure’s weight compared to that of the whole brain and
then multiplying proportionally to get an estimate for the total. Sounds a
little iffy, but he did this a bunch of times with various professional estimates
of different regions, and the total always arrived in the same ballpark—around
10^16, or 10 quadrillion cps.
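A back-of-the-envelope version of that shortcut, with placeholder inputs (the region estimate and mass fraction below are illustrative, not the figures Kurzweil actually used):

```python
# Kurzweil-style extrapolation: scale one region's estimated cps up to the
# whole brain by that region's share of total brain mass.
# The inputs below are illustrative placeholders, not Kurzweil's actual numbers.

region_cps = 1.0e15          # hypothetical expert estimate for one brain structure
region_mass_fraction = 0.10  # that structure's assumed share of total brain mass

whole_brain_cps = region_cps / region_mass_fraction
print(f"Estimated whole-brain capacity: {whole_brain_cps:.0e} cps")
# -> 1e+16 cps, i.e. roughly 10 quadrillion calculations per second
```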
Currently, the world’s
fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at
about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square
meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially
applicable to wide usage, or even most commercial or industrial usage yet.
Kurzweil suggests that
we think about the state of computers by looking at how many cps you can buy
for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll
mean AGI could become a very real part of life.
Moore’s Law is a
historically reliable rule that the world’s maximum computing power doubles
approximately every two years, meaning computer hardware advancement, like
general human advancement through history, grows exponentially. Looking at how
this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10
trillion cps/$1,000, right on pace with Kurzweil’s predicted trajectory.
So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
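A quick sketch of that extrapolation, assuming the 1,000x-per-decade pattern just described simply continues for one more decade:

```python
# Extrapolate the pattern above: cps per $1,000 has gained roughly 1,000x
# per decade (a trillionth of a brain in 1985, a thousandth in 2015).
HUMAN_BRAIN_CPS = 1e16        # ~10 quadrillion cps (Kurzweil's estimate)

cps_per_1000_dollars = {
    1985: HUMAN_BRAIN_CPS * 1e-12,   # a trillionth of human level
    1995: HUMAN_BRAIN_CPS * 1e-9,    # a billionth
    2005: HUMAN_BRAIN_CPS * 1e-6,    # a millionth
    2015: HUMAN_BRAIN_CPS * 1e-3,    # a thousandth
}

projected_2025 = cps_per_1000_dollars[2015] * 1000   # one more decade of the trend
print(f"Projected 2025: {projected_2025:.0e} cps per $1,000")
print("Human-level?", projected_2025 >= HUMAN_BRAIN_CPS)   # -> True
```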
So on the hardware side, the raw
power needed for AGI is technically available now, in China, and we’ll be ready
for affordable, widespread AGI-caliber hardware within 10 years. But raw
computational power alone doesn’t make a computer generally intelligent—the
next question is, how do we bring human-level intelligence to all that power?
Second Key to Creating AGI: Making it Smart - This is the icky part. The
truth is, no one really knows how to make it smart—we’re still debating how to
make a computer human-level intelligent and capable of knowing what a dog and a
weird-written B and a mediocre movie is. But there are a bunch of far-fetched
strategies out there and at some point, one of them will work. Here are the
three most common strategies I came across:
1)
Plagiarize the brain. This is like scientists toiling over how
that kid who sits next to them in class is so smart and keeps doing so well on
the tests, and even though they keep studying diligently, they can’t do nearly
as well as that kid, and then they finally decide “k fuck it I’m just gonna
copy that kid’s answers.” It makes sense—we’re stumped trying to build a
super-complex computer, and there happens to be a perfect prototype for one in
each of our heads.
The science world is
working hard on reverse engineering the brain to figure out how evolution made
such a rad thing—optimistic
estimates say we can
do this by 2030. Once we do that, we’ll know all the secrets of how the
brain runs so powerfully and efficiently and we can draw inspiration from it
and steal its innovations. One example of computer architecture that mimics the
brain is the artificial neural network. It starts out as a network of
transistor “neurons,” connected to each other with inputs and outputs, and it
knows nothing—like an infant brain. The way it “learns” is it tries to do a
task, say handwriting recognition, and at first, its neural firings and
subsequent guesses at deciphering each letter will be completely random.
But when it’s told it got
something right, the transistor connections in the firing pathways that
happened to create that answer are strengthened; when it’s told it was wrong,
those pathways’ connections are weakened. After a lot of this trial and
feedback, the network has, by itself, formed smart neural pathways and the
machine has become optimized for the task. The brain learns a bit like this but
in a more sophisticated way, and as we continue to study the brain, we’re
discovering ingenious new ways to take advantage of neural circuitry.
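Here's a toy version of that strengthen-or-weaken loop: a single artificial "neuron" learning to recognize one crude 3x3 letter. It's a bare-bones, perceptron-style sketch of the idea, nothing like a real handwriting recognizer.

```python
import random

# A single artificial neuron learns to spot a crude 3x3 "H" among random
# noise grids. Connections (weights) that help produce a right answer get
# strengthened; ones that contribute to a wrong answer get weakened.

TARGET = [1, 0, 1,
          1, 1, 1,
          1, 0, 1]                               # a crude 3x3 letter "H"
noise = [[random.randint(0, 1) for _ in range(9)] for _ in range(20)]
examples = [(TARGET, 1)] + [(grid, 0) for grid in noise if grid != TARGET]

weights = [0.0] * 9
bias = 0.0
learning_rate = 0.1                              # how strongly feedback adjusts connections

for _ in range(200):                             # lots of trial and feedback
    for pixels, label in examples:
        activation = sum(w * x for w, x in zip(weights, pixels)) + bias
        guess = 1 if activation > 0 else 0
        error = label - guess                    # 0 if right, +/-1 if wrong
        weights = [w + learning_rate * error * x for w, x in zip(weights, pixels)]
        bias += learning_rate * error

final = sum(w * x for w, x in zip(weights, TARGET)) + bias
print("Recognizes the 'H' grid:", final > 0)
```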
More extreme plagiarism involves a strategy called “whole
brain emulation,” where the goal is to slice a real brain into thin layers,
scan each one, use software to assemble an accurate reconstructed 3-D model,
and then implement the model on a powerful computer. We’d then have a computer
officially capable of everything the brain is capable of—it would just need to
learn and gather information. If engineers get really good, they’d be able to emulate a real
brain with such exact accuracy that the brain’s full personality and memory
would be intact once the brain architecture has been uploaded to a computer. If
the brain belonged to Jim right before he passed away, the computer would now
wake up as Jim (?),
which would be a robust human-level AGI, and we could now work on turning Jim
into an unimaginably smart ASI, which he’d probably be really excited about.
How far are we from achieving whole brain emulation? Well, so far, we’ve only just
recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons.
The human brain contains 100 billion. If that makes it seem like a hopeless
project, remember the power of exponential progress—now that we’ve conquered
the tiny worm brain, an ant might happen before too long, followed by a mouse,
and suddenly this will seem much more plausible.
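For a feel of what "implementing the model on a computer" might mean at a microscopic scale, here's a deliberately tiny sketch: three made-up neurons and three made-up synapses stepped forward in time. A real emulation (even of the 302-neuron flatworm) would use measured wiring and far more realistic neuron models.

```python
# Step a made-up three-neuron "connectome" forward in time.
# The wiring and weights here are invented purely for illustration.
connectome = {                       # synapse: (from, to) -> strength
    ("sensor", "inter"):  1.0,
    ("inter",  "motor"):  0.8,
    ("motor",  "inter"): -0.5,       # inhibitory feedback
}
neurons = {"sensor": 0.0, "inter": 0.0, "motor": 0.0}
threshold = 0.5
stimulus = [1.0, 1.0, 0.0, 0.0, 1.0]             # external input to the sensor neuron

for t, s in enumerate(stimulus):
    neurons["sensor"] = s
    fired = {name for name, level in neurons.items() if level > threshold}
    nxt = {name: 0.0 for name in neurons}
    for (src, dst), weight in connectome.items():
        if src in fired:
            nxt[dst] += weight                   # propagate activity along synapses
    neurons = nxt
    print(f"t={t}  firing: {sorted(fired)}")
```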
2)
Try to make evolution do what it did before but for us this time. So if we decide the
smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.
Here’s something we know. Building a computer as powerful
as the brain is possible—our
own brain’s evolution is proof. And if
the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can
emulate a brain, that might be like trying to build an airplane by copying a
bird’s wing-flapping motions—often, machines are best designed using a fresh,
machine-oriented approach, not by mimicking biology exactly.
So how can we simulate evolution to build an AGI? The
method, called “genetic algorithms,”
would work something like this: there would be a performance-and-evaluation
process that would happen again and again (the same way biological creatures
“perform” by living life and are “evaluated” by whether they manage to
reproduce or not). A group of computers would try to do tasks, and the most
successful ones would be bred with
each other by having half of each of their programming merged together into a
new computer. The less successful ones would be eliminated. Over many, many
iterations, this natural selection process would produce better and better
computers. The challenge would be creating an automated evaluation and breeding
cycle so this evolution process could run on its own. The downside of copying evolution is that
evolution likes to take a billion years to do things and we want to do this in
a few decades.
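A minimal sketch of that cycle, assuming a stand-in task (evolving a string toward a target phrase) and made-up parameters; real genetic algorithms breed programs or parameters rather than letters, but the perform, evaluate, breed, eliminate loop is the same.

```python
import random

# Toy genetic algorithm: evaluate a population, keep the better half,
# breed survivors by splicing half of each parent together (plus a
# random mutation), eliminate the rest, and repeat.

TARGET = "build an agi"                      # stand-in for "doing well at some task"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def breed(mom, dad):
    cut = len(TARGET) // 2
    child = list(mom[:cut] + dad[cut:])      # half of each parent's "code"
    if random.random() < 0.8:                # random single-character mutation
        child[random.randrange(len(child))] = random.choice(ALPHABET)
    return "".join(child)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for generation in range(300):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:50]              # the less successful are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(50)]
    population = survivors + children

print(f"generation {generation}: best candidate = '{population[0]}'")
```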
But we have a lot of
advantages over evolution. First, evolution has no foresight and works
randomly—it produces more unhelpful mutations than helpful ones, but we would
control the process so it would only be driven by beneficial glitches and
targeted tweaks. Second, evolution doesn’t aim for
anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot
of energy). We, on the other hand, could specifically direct this evolutionary
process toward increasing intelligence. Third, to select for intelligence,
evolution has to innovate in a bunch of other ways to facilitate
intelligence—like revamping the ways cells produce energy—when we can remove
those extra burdens and use things like electricity. There’s no doubt we’d be
much, much faster than evolution—but it’s still not clear whether we’ll be able
to improve upon evolution enough to make this a viable strategy.
3) Make this whole thing the computer’s problem, not ours. This is when scientists get desperate and
try to program the test to take itself. But it might be the most promising
method we have. The idea is that we’d
build a computer whose two major skills would be doing research on AI and
coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be
computer scientists so they could bootstrap their own development. And that
would be their main job—figuring out how to make themselves smarter. More on this later.
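It's hard to sketch that honestly in a few lines, but here's a loose toy analogue of the feedback loop, assuming the "research" is just numerical search: a system that improves its answer and occasionally tweaks its own search machinery, keeping whichever version of itself finds improvements faster. This is an illustration of the loop, not a claim about how real self-improving AI would work.

```python
import random

# Toy self-improvement loop: the "system" is a simple optimizer, and besides
# improving its solution it occasionally modifies its own search parameter
# (the mutation size), keeping the version of itself that works better.

def score(solution):
    return -sum(x * x for x in solution)          # 0 is the best possible score

def improves(step, solution):
    """How often a given step size produces a better candidate (out of 20 tries)."""
    return sum(
        score([x + random.gauss(0, step) for x in solution]) > score(solution)
        for _ in range(20)
    )

solution = [random.uniform(-10, 10) for _ in range(5)]
step_size = 1.0                                    # the system's own tunable "machinery"

for iteration in range(1000):
    candidate = [x + random.gauss(0, step_size) for x in solution]
    if score(candidate) > score(solution):         # ordinary learning: keep better answers
        solution = candidate
    if iteration % 50 == 0:                        # self-modification: retune the tuner
        new_step = step_size * random.choice([0.5, 2.0])
        if improves(new_step, solution) > improves(step_size, solution):
            step_size = new_step

print(f"final score: {score(solution):.4f} (0 is perfect), step size: {step_size:.3f}")
```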
All
of This Could Happen Soon - Rapid advancements in hardware and
innovative experimentation with software are happening simultaneously, and AGI
could creep up on us quickly and unexpectedly for two main reasons:
1) Exponential growth is intense and
what seems like a snail’s pace of advancement can quickly race upwards
2) When it comes to software, progress can seem
slow, but then one epiphany can instantly change the rate of advancement
(kind of like the way science, during the time humans thought the universe was
geocentric, was having difficulty calculating how the universe worked, but then
the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something
like a computer that improves itself, we might seem far away but actually be
just one tweak of the system away from having it become 1,000 times more
effective and zooming upward to human-level intelligence.