As I learned about the world of AI, I found a
surprisingly large number of people standing here:
The people on Confident Corner are buzzing with
excitement. They have their sights set on the fun side of the balance beam and
they’re convinced that’s where all of us are headed. For them, the future is
everything they ever could have hoped for, just in time. The thing that separates these people from
the other thinkers we’ll discuss later isn’t their lust for the happy side of
the beam—it’s their confidence that that’s the side we’re going to land on. Where this confidence comes from is up for
debate. Critics believe it comes from an excitement so blinding that they
simply ignore or deny potential negative outcomes. But the believers say it’s
naive to conjure up doomsday scenarios when, on balance, technology has helped
us and will likely continue to help us a lot more than it hurts us.
We’ll cover both sides, and you can form your own
opinion about this as you read, but for this section, put your skepticism away
and let’s take a good hard look at what’s over there on the fun side of the
balance beam—and try to absorb the fact that the things you’re reading might
really happen. If you had shown a hunter-gatherer our world of indoor comfort,
technology, and endless abundance, it would have seemed like fictional magic to
him—we have to be humble enough to acknowledge that it’s possible that an
equally inconceivable transformation could be in our future.
Nick Bostrom describes three ways a superintelligent
AI system could function:
As an oracle, which answers nearly any question posed
to it with accuracy, including complex questions that humans cannot easily
answer—e.g. How can I manufacture a more efficient car engine? Google is a
primitive type of oracle.
As a genie, which executes any high-level command it’s
given—Use a molecular assembler to build a new and more efficient kind of car
engine—and then awaits its next command.
As a sovereign, which is assigned a broad and
open-ended pursuit and allowed to operate in the world freely, making its own
decisions about how best to proceed—Invent a faster, cheaper, and safer way
than cars for humans to privately transport themselves.
These questions and tasks, which seem complicated to
us, would sound to a superintelligent system like someone asking you to improve
upon the “My pencil fell off the table” situation, which you’d do by picking it
up and putting it back on the table.
There are a lot of eager scientists, inventors, and
entrepreneurs in Confident Corner—but for a tour of the brightest side of the AI
horizon, there’s only one person we want as our tour guide. Ray Kurzweil is polarizing. In my reading, I
heard everything from godlike worship of him and his ideas to eye-rolling
contempt for them. Others were somewhere in the middle—author Douglas
Hofstadter, in discussing the ideas in Kurzweil’s books, eloquently put forth
that “it is as if you took a lot of very good food and some dog excrement and
blended it all up so that you can’t possibly figure out what’s good or bad.”
Whether you like his ideas or not, everyone agrees
that Kurzweil is impressive. He began inventing things as a teenager and in the
following decades, he came up with several breakthrough inventions, including
the first flatbed scanner, the first scanner that converted text to speech
(allowing the blind to read standard texts), the well-known Kurzweil music
synthesizer (the first true electric piano), and the first commercially
marketed large-vocabulary speech recognition. He’s the author of five national
bestselling books. He’s well-known for his bold predictions and has a pretty
good record of having them come true—including his prediction in the late ’80s,
a time when the internet was an obscure thing, that by the early 2000s, it
would become a global phenomenon. Kurzweil has been called a “restless genius”
by The Wall Street Journal, “the ultimate thinking machine” by Forbes,
“Edison’s rightful heir” by Inc. Magazine, and “the best person I know at
predicting the future of artificial intelligence” by Bill Gates. In 2012,
Google co-founder Larry Page approached Kurzweil and asked him to be Google’s
Director of Engineering. In 2011, he co-founded Singularity University, which
is hosted by NASA and sponsored partially by Google. Not bad for one life.
This biography is important. When Kurzweil articulates
his vision of the future, he sounds fully like a crackpot, and the crazy thing
is that he’s not—he’s an extremely smart, knowledgeable, relevant man in the
world. You may think he’s wrong about the future, but he’s not a fool. Knowing
he’s such a legit dude makes me happy, because as I’ve learned about his
predictions for the future, I badly want him to be right. And you do too. As
you hear Kurzweil’s predictions, many shared by other Confident Corner thinkers
like Peter Diamandis and Ben Goertzel, it’s not hard to see why he has such a
large, passionate following—known as the singularitarians. Here’s what he
thinks is going to happen:
Kurzweil believes computers will reach AGI by 2029 and
that by 2045, we’ll have not only ASI, but a full-blown new world—a time he
calls the singularity. His AI-related timeline used to be seen as outrageously
overzealous, and it still is by many, but in the last 15 years, the rapid
advances of ANI systems have brought the larger world of AI experts much closer
to Kurzweil’s timeline. His predictions are still a bit more ambitious than the
median respondent on Bostrom’s survey (AGI by 2040, ASI by 2060), but not by
that much.
Kurzweil’s depiction of the 2045 singularity is
brought about by three simultaneous revolutions in biotechnology,
nanotechnology, and, most powerfully, AI.
Armed with superintelligence and all the technology
superintelligence would know how to create, ASI would likely be able to solve
every problem humanity faces. Global warming? ASI could first halt CO2 emissions
by coming up with much better ways to generate energy that had nothing to do
with fossil fuels. Then it could create some innovative way to begin to remove
excess CO2 from the atmosphere. Cancer and other diseases? No problem for
ASI—health and medicine would be revolutionized beyond imagination. World
hunger? ASI could use things like nanotech to build meat from scratch that
would be molecularly identical to real meat—in other words, it would be real
meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or
other food (which wouldn’t have to have its normal shape—picture a giant cube
of apple)—and distribute all this food around the world using ultra-advanced
transportation. Of course, this would also be great for animals, who wouldn’t
have to get killed by humans much anymore, and ASI could do lots of other
things to save endangered species or even bring back extinct species through
work with preserved DNA. ASI could even solve our most complex macro issues: our
debates over how economies should be run and how world trade is best
facilitated, even our haziest grapplings in philosophy or ethics, would all be
painfully obvious to ASI.
But there’s one thing ASI could do for us that is so
tantalizing, reading about it has altered everything I thought I knew about
everything: ASI could allow us to conquer our mortality.
A few months ago, I mentioned my envy of more advanced
potential civilizations who had conquered their own mortality, never
considering that I might later write a post that genuinely made me believe that
this is something humans could do within my lifetime. But reading about AI will
make you reconsider everything you thought you were sure about—including your
notion of death.
Evolution had no good reason to extend our lifespans
any longer than they are now. If we live long enough to reproduce and raise our
children to an age when they can fend for themselves, that’s enough for
evolution—from an evolutionary point of view, the species can thrive with a 30+
year lifespan, so there’s no reason mutations toward unusually long life would
have been favored in the natural selection process. As a result, we’re what
W.B. Yeats describes as “a soul fastened to a dying animal.” Not that fun.
And because everyone has always died, we live under
the “death and taxes” assumption that death is inevitable. We think of aging
like time—both keep moving and there’s nothing you can do to stop them. But
that assumption is wrong.
The fact is, aging isn’t stuck to time. Time will
continue moving, but aging doesn’t have to. If you think about it, it makes
sense. All aging is is the physical materials of the body wearing down. A car
wears down over time too—but is its aging inevitable? If you perfectly repaired
or replaced a car’s parts whenever one of them began to wear down, the car
would run forever. The human body isn’t any different—just far more complex.
Kurzweil talks about intelligent wifi-connected
nanobots in the bloodstream who could perform countless tasks for human health,
including routinely repairing or replacing worn down cells in any part of the
body. If perfected, this process (or a far smarter one ASI would come up with)
wouldn’t just keep the body healthy, it could reverse aging. The difference
between a 60-year-old’s body and a 30-year-old’s body is just a bunch of
physical things that could be altered if we had the technology. ASI could build
an “age refresher” that a 60-year-old could walk into, and they’d walk out with
the body and skin of a 30-year-old. Even
the ever-befuddling brain could be refreshed by something as smart as ASI,
which would figure out how to do so without affecting the brain’s data
(personality, memories, etc.). A 90-year-old suffering from dementia could head
into the age refresher and come out sharp as a tack and ready to start a whole
new career. This seems absurd—but the body is just a bunch of atoms and ASI
would presumably be able to easily manipulate all kinds of atomic structures—so
it’s not absurd.
Kurzweil then takes things a huge leap further. He
believes that artificial materials will be integrated into the body more and
more as time goes on. First, organs could be replaced by super-advanced machine
versions that would run forever and never fail. Then he believes we could begin
to redesign the body—things like replacing red blood cells with perfected red
blood cell nanobots who could power their own movement, eliminating the need
for a heart at all. He even gets to the brain and believes we’ll enhance our
brain activities to the point where humans will be able to think billions of
times faster than they do now and access outside information because the
artificial additions to the brain will be able to communicate with all the info
in the cloud.
The possibilities for new human experience would be endless.
Humans have separated sex from its purpose, allowing people to have sex for
fun, not just for reproduction. Kurzweil believes we’ll be able to do the same
with food. Nanobots will be in charge of delivering perfect nutrition to the
cells of the body, intelligently directing anything unhealthy to pass through
the body without affecting anything. An eating condom. Nanotech theorist Robert
A. Freitas has already designed blood cell replacements that, if one day
implemented in the body, would allow a human to sprint for 15 minutes without
taking a breath—so you can only imagine what ASI could do for our physical
capabilities. Virtual reality would take on a new meaning—nanobots in the body
could suppress the inputs coming from our senses and replace them with new
signals that would put us entirely in a new environment, one that we’d see,
hear, feel, and smell.
Eventually, Kurzweil believes humans will reach a
point when they’re entirely artificial; a time when we’ll look at biological
material and think how unbelievably primitive it was that humans were ever made
of that; a time when we’ll read about early stages of human history, when
microbes or accidents or diseases or wear and tear could just kill humans
against their own will; a time the AI Revolution could bring to an end with the
merging of humans and AI. This is how Kurzweil believes humans will
ultimately conquer our biology and become indestructible and eternal—this is
his vision for the other side of the balance beam. And he’s convinced we’re
gonna get there. Soon.
You will not be surprised to learn that Kurzweil’s
ideas have attracted significant criticism. His prediction of 2045 for the
singularity and the subsequent eternal life possibilities for humans has been
mocked as “the rapture of the nerds,” or “intelligent design for 140 IQ
people.” Others have questioned his optimistic timeline, or his level of
understanding of the brain and body, or his application of the patterns of
Moore’s law, which are normally applied to advances in hardware, to a broad
range of things, including software. For every expert who fervently believes
Kurzweil is right on, there are probably three who think he’s way off.
But what surprised me is that most of the experts who
disagree with him don’t really disagree that everything he’s saying is
possible. Reading such an outlandish vision for the future, I expected his
critics to be saying, “Obviously that stuff can’t happen,” but instead they
were saying things like, “Yes, all of that can happen if we safely transition
to ASI, but that’s the hard part.” Bostrom, one of the most prominent voices
warning us about the dangers of AI, still acknowledges:
It is hard to think of any problem that a
superintelligence could not either solve or at least help us solve. Disease,
poverty, environmental destruction, unnecessary suffering of all kinds: these
are things that a superintelligence equipped with advanced nanotechnology would
be capable of eliminating. Additionally, a superintelligence could give us
indefinite lifespan, either by stopping and reversing the aging process through
the use of nanomedicine, or by offering us the option to upload ourselves. A
superintelligence could also create opportunities for us to vastly increase our
own intellectual and emotional capabilities, and it could assist us in creating
a highly appealing experiential world in which we could live lives devoted to
joyful game-playing, relating to each other, experiencing, personal growth, and
to living closer to our ideals.
This is a quote from someone very much not on
Confident Corner, but that’s what I kept coming across—experts who scoff at
Kurzweil for a bunch of reasons but who don’t think what he’s saying is
impossible if we can make it safely to ASI. That’s why I found Kurzweil’s ideas
so infectious—because they articulate the bright side of this story and because
they’re actually possible. If, that is, it turns out to be a good god.
The most prominent criticism I heard of the thinkers
on Confident Corner is that they may be dangerously wrong in their assessment
of the downside when it comes to ASI. Kurzweil’s famous book The Singularity is
Near is over 700 pages long and he dedicates around 20 of those pages to
potential dangers. I suggested earlier that our fate when this colossal new
power is born rides on who will control that power and what their motivation
will be. Kurzweil neatly answers both parts of this question with the sentence,
“[ASI] is emerging from many diverse efforts and will be deeply integrated into
our civilization’s infrastructure. Indeed, it will be intimately embedded in
our bodies and brains. As such, it will reflect our values because it will be
us.”
But if that’s the answer, why are so many of the
world’s smartest people so worried right now? Why does Stephen Hawking say the
development of ASI “could spell the end of the human race” and Bill Gates say
he doesn’t “understand why some people are not concerned” and Elon Musk fear
that we’re “summoning the demon”? And why do so many experts on the topic call
ASI the biggest threat to humanity? These people, and the other thinkers on
Anxious Avenue, don’t buy Kurzweil’s brush-off of the dangers of AI. They’re
very, very worried about the AI Revolution, and they’re not focusing on the fun
side of the balance beam. They’re too busy staring at the other side, where
they see a terrifying future, one they’re not sure we’ll be able to escape.
Next post: Why the Future Might Be Our Worst Nightmare