How much negentropy is Earth capable of?

Negentropy is the opposite of entropy. It refers to an increase in order, complexity, and usefulness, while entropy refers to the decay of order or the tendency for a system to become random and useless.

The universe as a whole tends toward total entropy, or heat death. This does not mean that ALL parts of the universe are becoming less ordered. There can be local pockets of the universe that actually increase in order – becoming more organized and workable – by exporting their entropy to their surroundings. The best example of this is our home: planet Earth.

A miracle of 7,000,000,000,000,000,000,000,000,000 atoms

I was walking from my bedroom to my bathroom this morning, pondering the miracle of my body purposefully moving itself from one place in the universe to another. Consider the atoms that make up my body; they are assembled in just the right way to construct a human capable of locomotion. It is a miracle. Of course, the atoms themselves are not the driving force of this capability. The driving force is a collaboration of emergent systems (molecules, tissues, electrochemical activity, signals between organs, and of course, a brain – which evolved in the context of a complex planet, with other brains in societies, and with an ever-complexifying backdrop of shared information).

It’s a curious thing: planet Earth – with its vast oceans, atmosphere, ecosystems and organisms – is determined to go against the overall tendency in the universe to decay towards the inevitable doom of heat death.

While walking the seven billion billion billion atoms of my body to the bathroom, I considered how far the negentropic urge of our planet could possibly push itself, in a universe that generally tries to ruin the party – a universe that will win in the end. The seven billion billion billion atoms currently in my body will eventually be strewn throughout a dead universe. At that point there will be nothing that can re-assemble them into anything useful.

How not to ruin a party

The party is not over; there is ample reason to believe that Earth is not done yet. Earth generated a biosphere – the only spherical ecosystem we know of – which produced animals and humans, and most recently, post-biological systems (technology and AI). I would not entirely dismiss the notion that Earth really wants us to invent AI, and to allow it to take over – because our AI could ultimately help Earth stay healthy and continue its negentropic party. We humans (in our old, biological manifestation) are not capable of taking care of our own planet. Left to our own primitive survival devices, we are only capable of exploiting its resources. It is only through our post-human systems that we will be able to give Earth the leverage it needs to continue its negentropic quest.

This is another way of saying that the solutions to climate change and mass extinction will require massive social movements, corporate and governmental leadership, global-scale technologies, and other trans-human-scale systems that far exceed the mental capacities of a single human brain. It is possible that the ultimate victory of AI will be to save us from an angry Mother on the verge of committing infanticide.

In the meanwhile, Earth may decide that it needs to get rid of the majority of the human population; just another reason to reconsider the urge to make babies.

But just how far can Earth’s negentropic party extend? As Earth’s primary agents of negentropy, we humans are preparing to tap the moon and other planets for resources. Will we eventually be able to develop energy shields to deflect renegade asteroids? Will our robots continue to colonize the solar system? How far will Earth’s panspermia extend?

There are plenty of science fiction stories that offer exciting and illuminating possible answers to these questions; I will not attempt to venture beyond my level of knowledge in this area. All I will say is…I think there are two possible futures for us humans:

(1) Earth will decide it has had enough of climate change, and smack us down with rising oceans and chaotic storms, causing disease, mass migrations, and war, resulting in our ultimate demise (Earth will be fine after a brief recovery period).

(2) We will evolve a new layer of the biosphere – built of technology and AI – and this will regulate our destructive instincts, thus allowing Earth to stay healthy and to keep complexifying. It will allow Earth to reconsider what it currently sees as a cancer on its skin – and to see us as agents of health.

In the case of future (2), we may lose some of our autonomy – but it just might be a comfortable existence in the long run – because Earth will be better off – and it will want to keep us around. Eventually, the panspermic negentropic party will not be our own – we will be just one layer of emergence emanating from the planet. We will become mere organs of an extended Earth ecosystem of ecosystems that continues to defy the general entropy of the universe…at least for a few billion more years.


Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
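
To make that loop concrete, here is a rough sketch in Python of what I have in mind. It is only a sketch: the genome encoding is arbitrary, and render_simulation() and collect_human_ratings() are hypothetical placeholders for however the AI would expose its "simulations" and gather feedback from its human observers.

```python
import random

# A sketch of an evolutionary loop in which human ratings, rather than a
# hand-designed fitness function, act as the selection pressure.

GENOME_LENGTH = 64
POPULATION_SIZE = 20

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def mutate(genome, rate=0.05):
    return [g + random.gauss(0.0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LENGTH)
    return a[:cut] + b[cut:]

def render_simulation(genome):
    # Placeholder: in a real system this would turn a genome into something
    # a person can look at or listen to (an image, a sound, a behavior).
    return genome

def collect_human_ratings(simulations):
    # Placeholder: stands in for crowds of human judges. Random numbers here,
    # just so the sketch runs end to end.
    return [random.random() + 0.01 for _ in simulations]

def next_generation(population, ratings):
    # Higher-rated genomes are more likely to be chosen as parents.
    parents = random.choices(population, weights=ratings, k=POPULATION_SIZE)
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POPULATION_SIZE)]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(100):
    simulations = [render_simulation(g) for g in population]
    ratings = collect_human_ratings(simulations)
    population = next_generation(population, ratings)
```

The point of the sketch is simply that the selection criterion lives outside the algorithm, in the judgements of the people watching.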


Since there is so much discussion (and confusion) about AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.


The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every good idea, insight, and judgement just by digesting the constant data spew from humanity. But I doubt it. Without some form of selection – feedback mechanisms such as the back-propagation used to train today's AI, or the human judgements described above – the AI cannot truly learn. Learning requires an ecosystem of continual feedback.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that trade detail for overall impressions. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to a mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.

As the continual stream of raw data accumulates, the AI will start to build impressions and common themes – much as Deep Dream finds recurring themes in the images it has digested and starts riffing on them.
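
For the curious, here is a rough sketch of the activation-maximization trick behind Deep Dream, assuming PyTorch and a reasonably recent torchvision are available; the layer index, step size, and iteration count are arbitrary choices of mine, not anything taken from Deep Dream itself. Starting from noise (a formless blob), the image is nudged so that it excites a chosen layer of a pretrained network more and more strongly, which is how the network's learned themes get amplified.

```python
import torch
from torchvision import models

# Deep-Dream-style sketch: amplify whatever patterns a pretrained network
# already "sees" in an image, by gradient ascent on the image itself.
net = models.vgg16(weights="IMAGENET1K_V1").features.eval()  # older torchvision: pretrained=True
LAYER = 20   # arbitrary layer to riff on
STEP = 0.05

def dream_step(img):
    img = img.clone().detach().requires_grad_(True)
    x = img
    for i, layer in enumerate(net):
        x = layer(x)
        if i == LAYER:
            break
    x.norm().backward()          # how strongly the chosen layer responds
    with torch.no_grad():
        grad = img.grad
        img = img + STEP * grad / (grad.abs().mean() + 1e-8)
    return img.detach()

img = torch.rand(1, 3, 224, 224)  # start from noise: a formless blob
for _ in range(50):
    img = dream_step(img)
```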

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans and eventually on its own. It also maintains a memory of bad decisions, and failed experiments – which are all a part of growing up.


If this idea is interesting to you, just Google “evolving AI” and you will find many many links on the subject.

As far as my modest proposal goes, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and plays parts of those simulations constantly, as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start out pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of most complex mammals naturally expect this. Our AI brain children are naturally derived from a mammalian brain. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

Having sex with robots to save the planet

Long long ago, there was an accident in a warm puddle. A particular molecule – through some chance interaction with the soup of surrounding molecules – ended up with a copy of itself. Since the copy sat in the same kind of soup as the original, it too was likely to replicate. And so it did. The rest is history. We call it evolution.

It is possible that similar accidents happened elsewhere around the same time – not just in one single puddle. One could also say that variations of this accident are still happening – only now at a massive scale.

Every act of every living thing can be seen as an elaboration of this original act. Self-replication is the original impetus of all life. We share a common ancestor with amoebas – who replicate asexually. The invention of sexual reproduction boosted genetic creativity. More recently in the scope of Earth’s history, creativity escaped the confines of genetics. We humans are the primary hosts of this creative engine.

Human beings have elevated all of the resulting acts of survival to an art form. This includes not just the act of sex, but also the act of preparing food (cuisine), the act of making sounds and speaking (music and singing), and the act of altering the environment to create new structure (visual art). The abstractions and representations of the world that the brain generates via the body are derivations of – and deviations from – the original acts of survival. It’s a form of self-replication.

The emergence of abstractions, mental models, and representations is increasing in complexity. This is an inevitable one-way blossoming accelerated by the emergence of the animal brain. The human experience is conflicted; we are oriented toward achieving escape velocity from Original Nature, but we also long for Original Nature. How can we resolve this conflict?

The original act of self-replication has powerful repercussions – billions of years after the original accident – it has taken on many forms. It is the reason we humans have strange phenomena like orgasm. And selfies.


We are at a crossroads in the history of life on Earth. The current era of global warming is almost certainly the result of the overpopulation and hyperactivity of humans, who have released – and continue to release – too much carbon into the atmosphere. One effective solution to global warming would be to reduce the primary agents of the fever…to reduce human population.

And so, converting that original act of replication into works of art is not just creative and exciting: it may be necessary. Humans must transcend the Earthly act of self-replication in order to preserve the health of the planet.

The future of sex will be…let’s just say…interesting. Every cell in our body contains the blueprint of a desire to replicate. Nature and society are structured around the elaborate machinery that has emerged to ensure self-replication – of human bodies and culture. This desire has made its mark on every aspect of society – even if we don’t recognize it as such. We cannot escape it. And so we need to virtualize it, because self-replication of human beings (physically) has become a threat to the planet that sustains us. It’s our duty to Mother Earth.

I am a living organism and so I have to contend with this crazy desire to replicate. Note: I am childless. I have never replicated my genes and have no intention to do so at this stage in my life. But I am passionate about replicating ideas, art, words, and software.

Now, what about the title of this blog post? Will people eventually start having sex with robots? It will certainly be more subtle than that. In fact, it has been said that by the time we get to that point, WE will be the robots.

Is this the kind of future I want? Strangely, yes. Because I will have long returned to the Earth – my molecules will have been handed down through generations of living things. I will be a part of Earth’s physiology. My tribe will be bigger than humanity.

One of my molecules may even end up in a warm puddle somewhere.

Science writers who say machines have feelings…lack intelligence.

I saw an article by Peter Dockrill with the headline, “Artificial intelligence should be protected by human rights, says Oxford mathematician”.

The subtitle is: “Machines Have Feelings Too”.

Regarding the potential dangers of robots and computers, Peter asks: “But do robots need protection from us too?” Peter is apparently a “science and humor writer”. I think he should stick with just one genre.

Just more click-bait.

There are too many articles on the internet with headlines like this. They are usually covered with obnoxious, eye-jabbing ads, flitting in front of my face like giant colorful moths. It’s a carnival – through and through.

I could easily include any number of articles about the “terrifying” future of AI, “emotional machines”, “robot ethics”, and other cartoon-like dilutions of otherwise thoughtful well-crafted science fiction.

Good science fiction is better than bad science journalism.


Here’s Ben Goldacre:

[Screenshot: a quote from Ben Goldacre]

Now, back to this silly subject of machines having feelings:

Some of my previous articles express my thoughts on the future of AI, such as:

No Rafi. The Brain is not a Computer

The Singularity is Just One in a Series

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Intelligence is NOT One-Dimensional

I think we should be working to fix our own emotional mess, instead of trying to make vague, naive predictions about machines having feelings. Machines will – eventually – have something analogous to animal motivation and human states of mind, but by then the human world will look so different that the current conversation will be laughable.

Right now, I am in favor of keeping the “feelings” on the human side of the equation.

We’re still too emotionally messed up to be worrying about how to tend to our machines’ feelings. Let’s fix our own feelings first before giving them to our machines. We still have that choice.

And now, more stupidity from Meghan Neal:

“Computers are already faster than us, more efficient, and can do our jobs better.”

Wow Meghan, you sure do like computers, don’t you?

I personally have more hope, respect, and optimism for our species.

In this article, Meghan makes sweeping statements about machines with feelings, including how “feeling” computers are being used to improve education.

The “feeling” robots she is referring to are machines with a gimmick – they are brain-dead automatons with faces attached to them. Many savvy futurists suggest that true AI will not result from humans trying to make machines act like humans.  That’s anthropomorphism. Programming pre-defined body language in an unthinking robot makes for interesting and insightful experimentation in human-machine interaction. But please! Don’t tell me that these machines have “feelings”.

This article says: “When Nao is sad, he hunches his shoulders forward and looks down. When he’s happy, he raises his arms, angling for a hug. When frightened, Nao cowers, and he stays like that until he is soothed with some gentle strokes on his head.”


Pardon me while I projectile vomit.

Any time you are trying to compare human intelligence with computers, consider what Marvin once said:

[Screenshot: a quote from Marvin]

No Rafi. The brain is not a computer.

Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.


The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”

Who the fuck said computers can simulate the entire universe?

That is a huge assumption. It’s also wrong.

We need to always look closely at the assumptions that people use to build theories. If it can be proven that computers can simulate the entire universe, then this theory will be slightly easier to swallow.

By the way, a computer cannot simulate the entire universe because it would have to simulate itself simulating itself simulating itself.

The human brain is capable of computation, and that’s why humans are able to invent computers.

The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.

The Singularity is Just One in a Series

I’m reading Kurzweil’s The Singularity is Near.

It occurs to me that the transition that the human race is about to experience is similar to other major transitions that are often described as epochs – paradigm shifts – in which a new structure emerges over a previous structure. There are six key epochs that Kurzweil describes. (The first four are not unlike epochal stages described by Terrence Deacon and others.)

  1. Physics and Chemistry
  2. Biology and DNA
  3. Brains
  4. Technology
  5. Human Intelligence Merges with Human Technology
  6. Cosmic Intelligence

When a new epoch comes into being, the agents of that new epoch don’t necessarily eradicate, overcome, usurp, reduce, or impede the agents of the previous epoch. Every epoch stands on the shoulders of the last epoch. This is one reason not to fear the Singularity…as if it is going to destroy us or render us un-human. In fact, epoch number 5 may allow us to become more human (a characterization that we could only truly make after the fact – not from our current vantage point).

I like to think of “human” as a verb: as a shift from animal to post-human, because it characterizes our nature of always striving for something more.

[Image: animal to posthuman]

There are debates raging on whether the Singularity is good or bad for humanity. One way to avoid endless debate is to do the existential act: to make an attempt at determining the fate of humanity, rather than sit passively and make predictions.  As Alan Kay famously said, “the best way to predict the future is to invent it”. We should try to guide the direction of the next epoch as much as we can while we are still the ones in charge.

In a previous article criticizing some predictions by Nick Bostrom, I compare our upcoming epochal shift to a shift that happened in the past, when multi-cellular beings evolved. Consider:

Maybe Our AI Will Evolve to Protect Us And the Planet

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But, these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

I am not a full-fledged Singularitarian. I prefer to stay agnostic as long as I can. It’s not just a human story. Our Singularity is just the one that is happening to us at the moment.

Similarly, the emergence of previous epochs may have been experienced as Singularities by those that came before.

Thoughts on the Evolution of Evolvability


It is early February. The other day, I observed some fresh buds on a tree. When I lived back east, I remember seeing buds on bare trees in the snowy dead of winter. I used to wonder if those trees were “preparing” for the first days of spring by starting the growth of their buds. Trees, like most plants, can adapt to variations in weather. All organisms, in fact, exhibit behaviors that appear resourceful, reactive, adaptive, even “intelligent”.

We sometimes talk about animals and plants in terms of their goals and intentions. We even use intentional language in relation to computers or mechanical machines. Even though we know a machine isn’t alive, we use this kind of language as a form of shorthand.

But there may be something more than just verbal shorthand going on here.

The Intentional Stance

Daniel Dennett proposed the concept of the Intentional Stance. When I first learned about this idea, I felt a new sense of how our own human intelligence is just a special case of the adaptive and goal-directed nature of all life on the planet.

When I saw those buds on the tree the other day, I realized that there is so much goal-directed behavior happening all over the place – in plants, animals, and even in ecological systems. Are humans any more adaptive or “intentional” than any other organism?

The Evolution of Self and the Intentional Stance

Could it be that our human brains have simply…

…wrapped a fully-evolved self around our intentions?

…that we are really no more goal-directed or intentional than any other organism…except that we reflect on it with a higher level of consciousness, and apply a fully-formed language to that intentionality?

The Evolution of Evolvability

I first learned of the evolution of evolvability from a paper by Richard Dawkins. It’s a powerful idea, and it helps to make evolution seem less magical and perhaps easier to imagine. Not only have organisms continued to evolve, but their ability to evolve has improved. An example is the evolution of sexual reproduction, which created a huge advantage in a species’ ability to exploit genetic variation over evolutionary time.
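
As a toy illustration of that mechanism (not a proof of the advantage), here is a small Python sketch that evolves bit-strings toward an arbitrary target, once with clonal copying plus mutation and once with crossover added. Every parameter is made up; the only point is to show how recombination lets a population mix and match variation that mutation alone produces slowly.

```python
import random

TARGET = [random.randint(0, 1) for _ in range(60)]   # an arbitrary "environment"
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.01

def fitness(genome):
    # How many positions match the target environment.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def next_generation(population, sexual):
    # Keep the fitter half as parents.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    children = []
    while len(children) < POP_SIZE:
        a, b = random.choice(parents), random.choice(parents)
        if sexual:
            cut = random.randrange(len(TARGET))
            child = a[:cut] + b[cut:]   # recombination mixes two lineages
        else:
            child = list(a)             # clonal copy of a single parent
        children.append(mutate(child))
    return children

for sexual in (False, True):
    population = [[random.randint(0, 1) for _ in range(len(TARGET))]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = next_generation(population, sexual)
    best = max(fitness(g) for g in population)
    print("sexual" if sexual else "asexual", "best fitness:", best, "/", len(TARGET))
```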

A recent article titled “Intelligent design without a creator? Why evolution may be smarter than we thought” makes reference to the Evolution of Evolvability. It helps to cast the notion of intelligence and learning as prolific and pervasive in the natural world.

It would appear that the ability to evolve better ways to evolve predates humans. (It might even predate biology).

Of course we humans have found even better ways to evolve – including ways that overtake or sidestep our own human biology. This constitutes a new era in the evolution of life on earth – an era in which technology, culture, and ideas (memes) become the primary evolving agents of our species, and possibly of the whole planet – assuming we humans make the planet so sick that we have to fabricate artificial immune systems in order to keep the planet (and thus ourselves) healthy.

While many people will cast this Singularity-like idea in a negative light, I see it as a new protective organ that is forming around our planet. Biology is not going away. It is just one regime in a progression of many emergent regimes. Biology has given birth to the next regime (via Dennett’s crane), which then reaches down to regulate, modulate, and protect the regime which created it.

Evolvability is the higher-level emergent system over evolution. It is a higher-order derivative. When seen in this way, biology comes out looking like just one step in a long process.

(Thanks to Stephen Brown for editorial assistance)