From: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
At some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality. Oh actually, not at all. The thing is, an AGI with a level of intelligence and computational capacity identical to a human’s would still have significant advantages over humans. Like:
Hardware:
§ Speed. The brain’s neurons max out at around 200 Hz, while today’s
microprocessors (which are much slower than they will be when we reach AGI) run
at 2 GHz, or 10 million times faster than our neurons. And the brain’s internal
communications, which move at about 120 m/s, are horribly outmatched by a
computer’s ability to communicate optically at the speed of light (see the quick
arithmetic after this list).
§ Size and storage. The brain is locked into its size by the shape
of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal
communications would take too long to get from one brain structure to another.
Computers can expand to any physical size, allowing far more hardware to be put
to work, a much larger working memory (RAM), and a long-term memory (hard drive
storage) that has both far greater capacity and precision than our own.
§ Reliability and durability. It’s not only the memories of a computer
that would be more precise. Computer transistors are more accurate than
biological neurons, and they’re less likely to deteriorate (and can be repaired
or replaced if they do). Human brains also get fatigued easily, while computers
can run nonstop, at peak performance, 24/7.
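To make the Speed bullet concrete, here is a quick back-of-the-envelope check of the ratios implied by the figures above (a Python sketch of my own; the numbers are the article’s, the code and variable names are not):

```python
# Rough comparison of brain vs. computer signaling, using the figures
# quoted in the Speed bullet above. Illustrative only.

neuron_rate_hz = 200       # approximate max firing rate of a neuron
cpu_clock_hz = 2e9         # a 2 GHz processor
brain_signal_mps = 120     # axon conduction speed, ~120 m/s
light_mps = 3e8            # optical links move signals at ~light speed

print(f"clock ratio:  {cpu_clock_hz / neuron_rate_hz:,.0f}x")    # 10,000,000x
print(f"signal ratio: {light_mps / brain_signal_mps:,.0f}x")     # 2,500,000x
```

The 10-million-times figure in the text is exactly the first ratio; the second shows internal communication speed is outmatched by a factor of about 2.5 million.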
Software:
§ Editability, upgradability, and a wider
breadth of possibility. Unlike the human brain, computer software can receive updates
and fixes and can be easily experimented on. The upgrades could also extend to
areas where human brains are weak: the brain’s vision software is superbly advanced,
while its complex-engineering software is pretty low-grade. Computers could
match humans on vision software but could also become
equally optimized in engineering and any other area.
§ Collective capability. Humans crush all other species at building a
vast collective intelligence. Beginning with the development of language and
the forming of large, dense communities, advancing through the inventions of
writing and printing, and now intensified through tools like the internet,
humanity’s collective intelligence is one of the major reasons we’ve been able
to get so far ahead of all other species. And computers will be way better at
it than we are. A worldwide network of AI running a particular program could
regularly sync with itself so that anything any one computer learned would be
instantly uploaded to all other computers (a toy sketch of that sync follows
this list). The group could also take on one goal as a unit, because there
wouldn’t necessarily be dissenting opinions and motivations and self-interest,
like we have within the human population.[10]
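As a way to picture the “regularly sync with itself” idea, here is a minimal sketch, assuming (my assumption, not the article’s) that what gets shared is a simple pool of learned facts; real systems would more plausibly merge model weights, and every name below is hypothetical:

```python
# Toy model of a network of AI nodes that pool everything each one learns.
# Purely illustrative; not any particular system's sync protocol.

class Node:
    def __init__(self, name):
        self.name = name
        self.knowledge = set()      # what this node has learned so far

    def learn(self, fact):
        self.knowledge.add(fact)

def sync(nodes):
    """Give every node the union of what all nodes know."""
    shared = set().union(*(n.knowledge for n in nodes))
    for n in nodes:
        n.knowledge = set(shared)

nodes = [Node(f"node-{i}") for i in range(3)]
nodes[0].learn("protein folding trick")
nodes[2].learn("better chip layout")
sync(nodes)
print(len(nodes[1].knowledge))      # 2 -- node-1 now knows both facts
```

The point of the sketch is the asymmetry with humans: after sync(), no node knows less than the whole group does.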
AI, which will likely
get to AGI by being programmed to self-improve, wouldn’t see “human-level
intelligence” as some important milestone—it’s only a relevant marker from our point
of view—and wouldn’t have any reason to “stop” at our level. And given the
advantages over us that even human intelligence-equivalent AGI would have, it’s
pretty obvious that it would only hit human intelligence for a brief instant
before racing onwards to the realm of superior-to-human intelligence.
This may shock the
shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of
different kinds of animals varies, the main characteristic we’re aware of about
any animal’s intelligence is that it’s far lower than ours, and B) we view the
smartest humans as WAY smarter than the dumbest humans.
So as AI zooms upward
in intelligence toward us, we’ll see it as simply becoming smarter, for an animal. Then, when it hits the lowest
capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be
like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand
spectrum of intelligence, all humans, from the
village idiot to Einstein, are within a very small range—so just after hitting village idiot-level and being
declared an AGI, it’ll suddenly be smarter than Einstein and we won’t know what
hit us.
And what happens…after
that?
An Intelligence Explosion
I hope you enjoyed
normal time, because this is when this topic gets unnormal and scary, and it’s
gonna stay that way from here forward. I want to pause here to remind you that
every single thing I’m going to say is real—real science and real forecasts of
the future from a large array of the most respected thinkers and scientists.
Just keep remembering that.
Anyway, as I said
above, most of our current models for getting to AGI involve the AI getting
there by self-improvement. And once it gets to AGI, even systems that formed
and grew through methods that didn’t involve
self-improvement would now be smart enough to begin self-improving if they
wanted to.[3]
And here’s where we
get to an intense concept: recursive self-improvement. It
works like this—An AI system at a certain level—let’s say human village
idiot—is programmed with the goal of improving its own intelligence. Once it
does, it’s smarter—maybe at this point it’s at
Einstein’s level—so now when it works to improve its intelligence, with an
Einstein-level intellect, it has an easier time and it can make bigger leaps.
These leaps make it much smarter
than any human, allowing it to make even bigger leaps.
As the leaps grow larger and happen more rapidly, the AGI soars upwards in
intelligence and soon reaches the superintelligent level of an ASI system. This
is called an Intelligence Explosion,[11] and
it’s the ultimate example of The Law of Accelerating Returns.
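The feedback loop that paragraph describes can be caricatured in a few lines of Python. Everything in this sketch is an invented assumption (the units, the growth rule, the thresholds); it exists only to show why improvement that scales with current intelligence runs away rather than plateauing:

```python
# Caricature of recursive self-improvement: the smarter the system,
# the bigger its next jump. The growth rule is invented, not a forecast.

intelligence = 1.0        # 1.0 = "village idiot" baseline (arbitrary units)
HUMAN_GENIUS = 2.0        # pretend Einstein sits at 2x that baseline

step = 0
while intelligence < 1000 * HUMAN_GENIUS:
    # Each improvement is proportional to current intelligence,
    # so the leaps grow larger as the system gets smarter.
    intelligence *= 1 + 0.1 * intelligence
    step += 1

print(f"passed 1000x genius level after {step} steps")
```

Under a linear or even plain exponential rule the climb would be long and steady; because the multiplier itself grows here, the last few steps cover more ground than all the earlier ones combined, which is the “explosion” in the name.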
There is some debate
about how soon AI will reach human-level general intelligence—the median year
on a survey of hundreds of scientists about when they believed we’d be more
likely than not to have reached AGI was 2040[12]—that’s
only 25 years from now, which doesn’t sound that huge until you consider that
many of the thinkers in this field think it’s likely that the progression from
AGI to ASI happens very quickly.
Like—this could happen:
It
takes decades for the first AI system to reach low-level general intelligence,
but it finally happens. A computer is able to understand the world around it as
well as a human four-year-old. Suddenly, within an hour of hitting that
milestone, the system pumps out the grand theory of physics that unifies
general relativity and quantum mechanics, something no human has been able to
definitively do. Ninety minutes after that, the AI has become an ASI, 170,000 times
more intelligent than a human.
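One way to make that 170,000-times-in-90-minutes figure concrete (the doubling framing below is mine, not the article’s):

```python
import math

# 170,000x growth in 90 minutes corresponds to capability doubling
# roughly every five minutes. The 170,000 figure is the article's;
# reading it as a chain of doublings is just one way to grasp the pace.

factor = 170_000
minutes = 90
doublings = math.log2(factor)    # ~17.4 doublings
print(f"{doublings:.1f} doublings, one every {minutes / doublings:.1f} minutes")
```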
Superintelligence of
that magnitude is not something we can remotely grasp, any more than a
bumblebee can wrap its head around Keynesian Economics. In our world, smart
means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of
12,952.
What we do know is
that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI,
when we create it, will be the most powerful being in the history of life on
Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades. If
our meager brains were able to invent wifi, then something 100 or 1,000 or 1
billion times smarter than we are should have no problem controlling the
positioning of each and every atom in the world in any way it likes, at any
time—everything we consider magic, every power we imagine a supreme God to have
will be as mundane an activity for the ASI as flipping on a light switch is for
us. Creating the technology to reverse human aging, curing disease and hunger
and even mortality, reprogramming the weather to protect the future of life on
Earth—all suddenly possible. Also possible is the immediate end of all life on
Earth. As far as we’re concerned, if an ASI comes into being, there is now an
omnipotent God on Earth—and the all-important question for us is:
Will it be a nice God?