The Information EVOLUTION

I remember several decades ago learning that we were at the beginning of an information revolution. The idea, as I understood it, was that many things are moving towards a digital economy; even wars will become information-based.

The information revolution takes over where the industrial revolution left off.

I am seeing an even bigger picture emerging – it is consistent with the evolution of the universe and Earth’s biosphere.


At the moment, I can hear a bird of prey (I think it’s a falcon) that comes around this neighborhood every year about this time and makes its call from the tree tops. When I think about the amount of effort that birds make to produce mating calls, and other kinds of communication, I am reminded of how big a role information plays in the biological world. The variety and vigor of birdsong is amazing. From an evolutionary point of view, one has to assume that there is great selective pressure behind investing so much energy in organized sound.

This is just a speck of dust in comparison to the evolution of communication in our own species, for whom information is a major driver of activity. Our faces have evolved to give and receive a very high bandwidth of information between one another (compare the faces of primates to those of less complex animals and notice the degree to which the face is optimized for giving and receiving information).

Our brains have grown to massive proportions (relatively-speaking) to account for the role that information plays in the way our species survives on the planet.

Now: onto the future of information…

Beaming New Parts to the Space Station


Guess which is more expensive:

  1. Sending a rocket to the space station with a new part to repair an old one.
  2. Beaming up the instructions to build the part on an on-board 3D printer.

You guessed it.

And this is where some people see society going in general. 3D printing will revolutionize society in a big way. Fewer moving atoms, more moving bits.

To what degree will the manipulation of bits become more important than the manipulation of atoms?
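If you want to see why, here is a tiny back-of-envelope sketch in Python. Every number in it is an assumption made up for illustration – launch and uplink prices vary wildly – so the point is only the shape of the gap between moving atoms and moving bits.

    # Back-of-envelope comparison: launching a spare part vs. uplinking
    # a design file for an on-board 3D printer. All figures are assumed,
    # illustrative round numbers -- not actual NASA or launch-provider prices.

    LAUNCH_COST_PER_KG_USD = 20_000   # assumed cost to orbit, per kilogram
    UPLINK_COST_PER_MB_USD = 1.00     # assumed cost to transmit one megabyte

    part_mass_kg = 0.5                # a small replacement wrench or bracket
    design_file_mb = 2.0              # a hypothetical STL/G-code file for the part

    cost_moving_atoms = part_mass_kg * LAUNCH_COST_PER_KG_USD
    cost_moving_bits = design_file_mb * UPLINK_COST_PER_MB_USD

    print(f"Launching the part:      ${cost_moving_atoms:,.2f}")
    print(f"Beaming the design file: ${cost_moving_bits:,.2f}")
    print(f"Atoms-to-bits ratio:     {cost_moving_atoms / cost_moving_bits:,.0f}x")

Even if every one of those assumed numbers is off by an order of magnitude, the bits still win.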

Not Just a Revolution: Evolution

My sense is that the information revolution is not merely one in a series of human eras: it is the overall trend of life on Earth. We humans are the agents of the latest push in this overall trend.

Some futurists predict that nanotechnology will make it possible to infuse information processing into materials, giving rise to programmable matter. Ray Kurzweil predicts that the deep nano-mingling of matter and information will be the basis for a super-intelligence that can spread throughout the universe.

Okay, whatever.

For now, let’s ride this information wave and try to use the weightlessness of bits to make life better for all people (and all life-forms) on Earth – not just a powerful few.

The Singularity is Just One in a Series

I’m reading Kurzweil’s The Singularity is Near.

It occurs to me that the transition that the human race is about to experience is similar to other major transitions that are often described as epochs – paradigm shifts – in which a new structure emerges over a previous structure. There are six key epochs that Kurzweil describes. (The first four are not unlike epochal stages described by Terrence Deacon and others.)

  1. Physics and Chemistry
  2. Biology and DNA
  3. Brains
  4. Technology
  5. Human Intelligence Merges with Human Technology
  6. Cosmic Intelligence

When a new epoch comes into being, the agents of that new epoch don’t necessarily eradicate, overcome, usurp, reduce, or impede the agents of the previous epoch. Every epoch stands on the shoulders of the last epoch. This is one reason not to fear the Singularity…as if it is going to destroy us or render us un-human. In fact, epoch number 5 may allow us to become more human (a characterization that we could only truly make after the fact – not from our current vantage point).

I like to think of “human” as a verb – a shift from animal to post-human – because it captures our nature of always striving for something more.


There are debates raging on whether the Singularity is good or bad for humanity. One way to avoid endless debate is to do the existential act: to make an attempt at determining the fate of humanity, rather than sit passively and make predictions.  As Alan Kay famously said, “the best way to predict the future is to invent it”. We should try to guide the direction of the next epoch as much as we can while we are still the ones in charge.

In a previous article criticizing some predictions by Nick Bostrom, I compared our upcoming epochal shift to one that happened in the past, when multi-cellular organisms evolved. Consider:

Maybe Our AI Will Evolve to Protect Us And the Planet

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

I am not a full-fledged Singularitarian. I prefer to stay agnostic as long as I can. It’s not just a human story. Our Singularity is just the one that is happening to us at the moment.

Similarly, the emergence of previous epochs may have been experienced as Singularities by those who came before.

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher known for his work on the dangers of future AI. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe that many careless assumptions are being made about what AI actually is, and what it will become.

Yeah, yeah – there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “as soon as it works, no one calls it AI anymore.” Given how poorly defined AI is – how easily its definition seems to morph – it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible” … “Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines”, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “we were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.

“Want?”

Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late. He dismisses this as anthropomorphizing AI.


And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI refer to amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about “how” this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of one’s head and placed on a table.


It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent” you would be making a mistake.
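The arithmetic behind that kind of prediction is easy to sketch – and easy to over-interpret. The figures below are assumptions (estimates of the brain’s “operations per second” vary by orders of magnitude, and laptop performance depends on the hardware); the sketch only shows that the comparison is about raw throughput and nothing else.

    # A sketch of the raw-throughput comparison behind such predictions.
    # Both figures are assumptions: brain estimates vary by orders of
    # magnitude, and laptop performance depends heavily on the hardware.

    BRAIN_OPS_PER_SEC = 1e16    # one commonly cited (and contested) estimate
    LAPTOP_OPS_PER_SEC = 1e12   # roughly 1 TFLOP/s for a well-equipped laptop

    shortfall = BRAIN_OPS_PER_SEC / LAPTOP_OPS_PER_SEC
    print(f"Raw compute shortfall: ~{shortfall:,.0f}x")

    # Even when the shortfall reaches 1x, the comparison covers only
    # operations per second -- not electrochemistry, embodiment, or
    # what the computation is actually for.

Closing that gap tells you something about hardware trends; it tells you nothing about intelligence.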

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped to popularize a trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And…surprise! Your body is soaking in these same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s a lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.

I wonder how inclusive that list of jobs really is.


Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.


Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

A quote often attributed to Charles Darwin says: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

Is a bat smarter than a mouse? A bat’s eyesight is famously weak (dumber?), but its echolocation is miraculous (smarter?).


Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single line is not just an oversimplification: by placing AI on the same axis as human intelligence, Bostrom is committing the very anthropomorphism he warns against.
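If you want a toy version of the point, treat each kind of mind as a vector of capabilities instead of a point on a line. The dimensions and scores below are completely made up for illustration; the result is that most pairs simply have no “greater than” relation at all.

    # Toy sketch: intelligence as a vector of capabilities, not a scalar.
    # Dimensions and scores are invented purely for illustration.
    from itertools import combinations

    profiles = {
        "mouse":    {"spatial memory": 6, "echolocation": 1, "social inference": 2, "chess": 0},
        "bat":      {"spatial memory": 5, "echolocation": 9, "social inference": 2, "chess": 0},
        "chimp":    {"spatial memory": 7, "echolocation": 1, "social inference": 7, "chess": 0},
        "chess AI": {"spatial memory": 0, "echolocation": 0, "social inference": 0, "chess": 10},
    }

    def dominates(a, b):
        """True only if a scores at least as high as b on every dimension."""
        return all(profiles[a][d] >= profiles[b][d] for d in profiles[a])

    for x, y in combinations(profiles, 2):
        if not dominates(x, y) and not dominates(y, x):
            print(f"{x} vs {y}: incomparable -- neither is simply 'smarter'")

Collapsing those vectors into one number forces a ranking that the underlying capabilities do not support.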

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.

WE HAVE ONLY OURSELVES TO FEAR BECAUSE WE ARE INSEPARABLE FROM OUR AI

We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smartphone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-fetched, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded-in to our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly distributed and highly integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe that the future of AI will not be characterized by a frightening army of robots with agendas, we could face a real threat: if military robots that are ordered to kill and destroy – and use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, things could get ugly.

But with the exception of weapon-based AI housed in autonomous mobile robots, the future of AI will be mostly custodial, highly distributed, and integrated with our own lives: our clothes, houses, cars, and communications. We will be increasingly unable to separate it from ourselves. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like – just as the body of an animal does not come to look like the cells it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.

Immortality: Can Humans Attain Eternal Youth And NOT Destroy the Planet?

Why are humans so obsessed with immortality?

Can humans ultimately achieve immortality?

And can immortality be achieved while still maintaining the health of the planet? In other words: can we avoid the scale of human population growth that would turn us into a seething viral infection on Earth’s skin that completely destroys it? (or, more likely, destroys us?)

The key to answering these questions is to redefine “immortality”.

Let’s start with neoteny – a biological phenomenon in which traits of youthfulness in a species are extended into adulthood. Humans have pronounced neoteny, and there are several theories as to why, including claims that a longer period of learning, play, exploration, and creativity (features of youth) gave Homo sapiens an evolutionary advantage. The fact that humans continue to actively learn throughout life can be seen as a kind of psychological neoteny.

Given this biological essence, this genetic fact of our existence, one might conclude that we – as a species – should have a fondness for youth.

Indeed, we are obsessed with it.

Our Eternal Offspring

Now consider that humanity has created a huge brainchild which consists of culture, art, writing, technology, and the internet. This offspring of humanity extends beyond our physical selves and into our collective future. Culture is self-perpetuating, after all. The more we engage in the meta-world that we have created, the more we notice that it lives on while our physical selves grow old and die. And I believe that we are not having the same experience as our primitive ancestors, who preserved oral traditions as told by their ancestors, and built stone edifices to last thousands of years. I’m talking about something different: computers and software extend the human mind in a new way.

Extensions of Our Personal Selves

Software technology and the internet are forming external versions of our selves – beyond mere portraits or memoirs. Our individuality and personas are being increasingly articulated outside of ourselves. It is quite possible that within a few decades it will be commonplace for Facebook pages, and other records, expressions, and creations of our lives, to be archived for posterity. Personal digital preservation is a fact of our times, and it is becoming more sophisticated and thorough.

I used to think a lot about avatars in virtual worlds. That fad has started to pass. And yet, some avatar-like versions of ourselves – which may include software agents based on our personal styles of thinking (and of course shopping) – could become a core feature of future Facebook-like services. Our identities are gradually being fragmented and distributed into many digital forms – often in ways that we are not aware of. This can have subtle effects on consciousness – even diluting its locus – although it is too soon to feel those effects, or to articulate them. This idea is explored in Brian Rotman’s Becoming Beside Ourselves.

Total Brain Upload?


I have just as much criticism of Ray Kurzweil’s vision of the future as I have admiration for his hard work at trying to build a better world. One of his more intriguing explorations is how to upload an entire brain into digital form so that one can live forever.

Uh, “Live” forever?

The very claim that this could be achieved is fraught with logical problems, even contradictions. In the absence of a body, how can a brain actually experience reality? (much less, a digital copy of a brain). One answer is that our bodies, and thus our senses, will be “simulated” using virtual reality. Okay, I can see how this might be achieved. But, having spent almost a decade developing virtual world software in startup companies, and having done academic research on the subject, I would say that this will be even harder than digitally uploading a brain. The unfathomable task of creating a convincing simulacrum of reality (forever) makes the task of a digital brain upload seem like a walk in the park. The Matrix will be fiction for a long time before it becomes fact – if ever.

The Aging Mind

Whether or not you believe in the possibility of a digital brain upload, you are more likely to agree with the following:

As we age, the stories of our lives, our memories, and other intangible aspects of existence attain more importance. Aging people tend to have an increased spiritual sense. The physical world – with all its entropy (and aches and pains) gives way to something like a virtual world: the realm of the mind. (Unless you become senile, in which case, you’re screwed).

Does an 80-year-old man have much reason to mentally explore techniques of snowboarding? Or new ways to have sex on a trampoline? My guess is…not. But I may be wrong. My 80-year-old neighbor might think about trampoline sex all day, every day, and never tell me (I prefer not to know anyway).

But I digress. It is only the year 2013 after all. A hundred years from now, medical science will have made huge strides. In the future, sex at age 80 might actually be as good as sex at 30. And sex at 150 might actually be as good as sex at 100 (if you can imagine that. I choose not to dwell on the details).

What’s the Purpose of Medical Technology?

Is medicine all about prolonging life, or is it all about making us healthier? Many of my friends believe medicine should be about making us healthier. What good is living longer if you’re miserable? Actually, being healthier has a side-effect of longevity – so perhaps these two things go hand-in-hand.

…which finally leads me to the core problem: More people…living to the age of 150…means more mouths to feed. Our planet can only feed so many mouths.
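A crude way to see the problem: at demographic equilibrium, population is roughly annual births times life expectancy. The numbers below are rounded assumptions, not projections; the sketch only shows how extending lifespan alone shifts the equilibrium.

    # Crude steady-state sketch: equilibrium population ~= annual births x life expectancy.
    # The birth figure is an assumed round number held constant, not a projection.

    annual_births = 130e6   # assumed births per year

    for life_expectancy_years in (75, 150):
        equilibrium_pop = annual_births * life_expectancy_years
        print(f"Life expectancy {life_expectancy_years:>3} yrs -> "
              f"~{equilibrium_pop / 1e9:.1f} billion people at steady state")

Hold births constant and double the lifespan, and the steady-state population doubles.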

So: if humans insist on extending youth (and by implication: age (and by implication: population size (assuming we still insist on making babies))), how do we keep ourselves from turning into Earth’s pesky skin infection?

Answer: immortality will have to be reserved for the mind; not for the body. And the reason is this:

EARTH WILL NOT SUSTAIN INFINITE HUMAN PHYSICAL GROWTH.


Evolving Our Digital Selves

Here’s my wacky, quirky idea:

“Death” as we know it will gradually shed its absolute status. It will start on a path similar to other age-old concepts like “intelligence”, “God”, and “consciousness” – all of which are being chipped away and blurred, as science, technology, religion, and the human experiment advance.

Personal digital preservation seems to me an inevitable evolution. Data storage is becoming cheaper all the time, and the desire to retrieve memories, archives, knowledge, and data will continue. My grandmother’s death came as a shock to my emotional life. In the future, there may be a lot more left of one’s grandma after she dies. Just watching a video of her recounting a story is enough to bring back a little piece of her – at least for a moment. Consider that digital preservation might soon include artificial intelligence algorithms, and one might begin to imagine asking your virtual grandma to tell that story about dad’s first car.

In the future, we won’t “believe” that we are actually talking to grandma, any more than we “believe” that our computers are thinking. Eventually, it might not really matter.

We invented virtual reality as a plaything. As long as we insist on living forever, virtual reality will become a necessity. It is what we will use to achieve the experience of immortality…rather than outright physical immortality.

Otherwise, Earth will decide she’s had enough of us, and shrug us off – like the pesky little germs that we are.

______________________________________