Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
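That pairing can be sketched in a few lines. The following is a minimal, illustrative sketch of human-in-the-loop evolution – every name in it is my own, and `human_preference` is a stand-in for the aggregated judgements of millions of human observers, which obviously cannot run inside a snippet:

```python
import random

# A minimal sketch of human-guided ("interactive") evolution.
# `human_preference` is a placeholder for crowd feedback; here we
# pretend observers prefer genomes whose values sit near 0.5.

def human_preference(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=20, genome_len=8, generations=50, seed=42):
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by (simulated) human preference -- the Darwinian fitness force.
        population.sort(key=human_preference, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [[g + rng.gauss(0, 0.05) for g in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=human_preference)

best = evolve()
```

In a real system the scoring step would be replaced by logged human votes; the loop itself – rank, select, mutate, repeat – stays the same.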

Clarifications

Since there is so much discussion (and confusion) about AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every possible good idea, insight, and judgement just by digesting the constant data spew from humanity. But without a selection process (analogous to back-propagation and the other feedback mechanisms used in training AI), it cannot truly learn in an ecosystem of continual feedback.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that substitute overall impressions for detail. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to a mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.

As the continual stream of raw data accumulates, the AI will start to build impressions and recurring themes – much as Deep Dream finds common themes in the images it collects and starts riffing on them.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans and eventually on its own. It also maintains a memory of bad decisions and failed experiments – which are all part of growing up.

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many, many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and plays parts of those simulations constantly as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start pretty dumb, but with more sensory exposure, and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of most complex mammals naturally expect this. Our AI brain children are naturally derived from a mammalian brain. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

Cute Yet Creepy. Animal Yet Human.

I have been thinking about the uncanny valley for decades. Here are some things I’ve written on the subject:

The Uncanny Valley of Expression

Uncanny Charlie

How Does Artificial Life Avoid the Uncanny Valley?

Augmenting the Uncanny Valley

Over time, animated filmmakers have become more savvy about the uncanny problem, and they are generally getting better at avoiding the creeps. According to this article, Disney learned its lesson the hard way…

“And that’s why realism-fetishizing technology like motion capture is much more susceptible to creeping us out than more “primitive” or stylized animation: it’s only when you’re purporting to offer that level of detail in the first place that you can totally, utterly screw it up.”

Despite the fact that animators are more savvy about the Valley, I still can’t help but notice a nagging, low-grade fever of optical realism that has crept into the lineup of popular animated characters (even as the accidental monsters get shuffled off to quarantine). Consumers of animated films may be unaware of it…because it has become normalized. The realism has increased, bit by bit, so that now we have quivering hair follicles, sparkling teeth, and eyeballs reflecting the light of the environment.

Imagine if our favorite classic characters were rendered like this.

But the discomfort we call the uncanny valley doesn’t only occur when the thin veneer of visual realism unexpectedly reveals a mindless robot where “nobody is home”. The phenomenon could be seen in a larger context: it is caused by the clash of any two aspects of an artificial character that operate at incompatible levels of realism. For instance…

Can Animals Become Too Human?

I recently saw Zootopia. I really enjoyed it. Great film. But I must say, I did catch a glimpse of the Valley. There’s no denying it.

I also recently saw Guardians of the Galaxy, with Rocket Raccoon, who exhibits two very different kinds of realism: (1) Raccoon! (2) A tough guy with attitude – and a very human intelligence.

Can contradictory behavioral realism create a different sort of valley? Technology for character animation has enabled a much higher level of expressivity than has ever been possible, with fine detail in subtle eye and mouth movements. One might conclude that since behavioral realism has caught up with visual realism, the uncanny valley should now be a thing of the past. But then again, that depends on whether the behavior and the visuals apply to the same species!

Nothing abnormal about a cartoony raccoon throwin’ shapes and talkin’ tough. But when this animal is rendered in a hyper-realistic manner, AND evoking high-res human expression, things start to feel odd.

Pandas, ants, lobsters, bison, eels … in order for all of these various animals to assume the range of human emotion needed to deliver a clever line, they have to be equipped with a face with all the expected degrees of freedom. The result is what I call “rubber mask syndrome”.

One example is the characters in Antz, whose faces stretch in very un-ant-like ways in order to express very human-like things. More and more animals (unlikely animals even) are being added to the cast of movie stars. They are snarky, sly, witty, sexy, clever…and oh so human. It has all gotten a little weird if you ask me.

Thoughts on the Evolution of Evolvability

It is early February. The other day, I observed some fresh buds on a tree. When I lived back east, I remember seeing buds on bare trees in the snowy dead of winter. I used to wonder if these trees are “preparing” for the first days of spring by starting the growth of their buds. Trees, like most plants, can adapt to variations of weather. All organisms, in fact, exhibit behaviors that appear resourceful, reactive, adaptive, even “intelligent”.

We sometimes talk about animals and plants in terms of their goals and intentions. We even use intentional language in relation to computers or mechanical machines. Even though we know a machine isn’t alive, we use this kind of language as a form of shorthand.

But there may be something more than just verbal shorthand going on here.

The Intentional Stance

Daniel Dennett proposed the concept of the Intentional Stance. When I first learned about this idea, I felt a new sense of how our own human intelligence is just a special case of the adaptive and goal-directed nature of all life on the planet.

When I saw those buds on the tree the other day, I realized that there is so much goal-directed behavior happening all over the place – in plants, animals, and even in ecological systems. Are humans any more adaptive or “intentional” than any other organism?

The Evolution of Self and the Intentional Stance

Could it be that our human brains have simply…

…wrapped a fully-evolved self around our intentions?

…that we are really no more goal-directed or intentional than any other organism…except that we reflect on it with a higher level of consciousness, and apply a fully-formed language to that intentionality?

The Evolution of Evolvability

I first learned of the evolution of evolvability from a paper by Richard Dawkins. It’s a powerful idea, and it helps to make evolution seem less magical and perhaps easier to imagine. Not only have organisms continued to evolve, but their ability to evolve has improved. An example is the evolution of sexual reproduction, which created a huge advantage in a species’ ability to exploit genetic variation over evolutionary time.
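A toy model (my own construction, not from Dawkins’ paper) makes the idea concrete: give each individual a solution gene and a mutation-rate gene, and let the rate itself be inherited and mutated. Lineages that stumble on better ways to evolve then out-compete the rest:

```python
import random

# Toy model of the evolution of evolvability: each individual is a
# (value, mutation_rate) pair, and the mutation rate is itself heritable.

def fitness(value):
    return -abs(value - 10.0)   # closer to the target value 10.0 is better

def run(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    population = [(rng.uniform(-5, 5), rng.uniform(0.01, 1.0))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: fitness(ind[0]), reverse=True)
        survivors = population[:pop_size // 2]
        children = []
        for value, rate in survivors:
            # The rate mutates multiplicatively, so lineages carrying
            # better rates out-evolve the others over time.
            new_rate = max(0.001, rate * rng.lognormvariate(0, 0.2))
            children.append((value + rng.gauss(0, new_rate), new_rate))
        population = survivors + children
    return max(population, key=lambda ind: fitness(ind[0]))

best_value, best_rate = run()
```

Selection never acts on the rate directly – only on the values it produces – yet useful rates ride along with successful lineages, which is the essence of evolvability evolving.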

A recent article titled “Intelligent design without a creator? Why evolution may be smarter than we thought” makes reference to the Evolution of Evolvability. It helps to cast the notion of intelligence and learning as prolific and pervasive in the natural world.

It would appear that the ability to evolve better ways to evolve predates humans. (It might even predate biology).

Of course we humans have found even better ways to evolve – including ways that overtake or sidestep our own human biology. This constitutes a new era in the evolution of life on earth – an era in which technology, culture, and ideas (memes) become the primary evolving agents of our species, and possibly the whole planet – assuming we humans make the planet so sick that we have to fabricate artificial immune systems in order to keep the planet (and thus ourselves) healthy.

While many people will cast this Singularity-like idea in a negative light, I see it as a new protective organ that is forming around our planet. Biology is not going away. It is just one regime in a progression of many emergent regimes. Biology has given birth to the next regime (via Dennett’s crane), which then reaches down to regulate, modulate, and protect the regime which created it.

Evolvability is the higher-level emergent system over evolution. It is a higher-order derivative. When seen in this way, biology comes out looking like just one step in a long process.

(Thanks to Stephen Brown for editorial assistance)

IS “ARTIFICIAL LIFE GAME” AN OXYMORON?

(This is a re-posting from Self Animated Systems)

Artificial Life (Alife) began with a colorful collection of biologists, robot engineers, computer scientists, artists, and philosophers. It is a cross-disciplinary field, although many believe that biologists have gotten the upper hand in setting Alife’s agenda. This highly-nuanced debate is alluded to in this article.

Games

What better way to get a feel for the magical phenomenon of life than through simulation games! (You might argue that spending time in nature is the best way to get a feel for life; I would suggest that a combination of time with nature and time with well-crafted simulations is a great way to get deep intuition. And I would also recommend reading great books like The Ancestor’s Tale :)

Simulation games can help build intuition on subjects like adaptation, evolution, symbiosis, inheritance, swarming behavior, food chains… the list goes on.

On the more abstract end of the spectrum are simulation-like interactive experiences involving semi-autonomous visuals (or sounds) that generate novelty. Kinetic art that you can touch and influence – and in which you can witness lifelike dynamics – can be more than just aesthetically and intellectually stimulating.

These interactive experiences can also build intuition and insight about the underlying forces of nature that come together to oppose the direction of entropy (that ever-present tendency for things in the universe to decay).

Screen Shot 2014-10-17 at 7.58.33 PM

On the less-abstract end of the spectrum, we have virtual pets and avatars (a subject I discussed in a keynote at VISIGRAPP).

“Hierarchy Hinders” – Lesson from Spore

Will Wright, the celebrated simulation-style game designer behind Spore, introduced many Alife concepts in the “Sim” series of games. Many of us worried that his epic Spore would encounter some challenges, considering that Maxis had been acquired by Electronic Arts. The Sims was quite successful, but Spore fell short of expectations. Turns out there is a huge difference between building a digital dollhouse game and building a game about evolving lifeforms.

Also, mega-game corporations have their share of social hierarchy, with well-paid executives at the top and sweat shop animators and code monkeys at the bottom. Hierarchy (of any kind) is generally not friendly to artificial life.

For blockbuster games, there are expectations of reliable, somewhat repeatable behavior, highly-crafted game levels, player challenges, scoring, etc. Managing expectations for artificial life-based games is problematic. It’s also hard to market a game which is essentially a bunch of game-mechanics rolled into one. Each sub-game features a different “level of emergence” (see the graph below for reference). Spore presents several slices of emergent reality, with significant gaps in-between. Spore may have also suffered partly due to overhyped marketing.

Artificial Life is naturally and inherently unpredictable. It is close cousins with chaos theory, fractals, emergence, and uh…life itself.

Emergence

At the right is a graph I drew which shows how an Alife simulation (or any emergent system) creates novelty, creativity, adaptation, and emergent behavior. This emergence grows out of the base-level inputs into the system. At the bottom are atoms, molecules, and bio-chemistry. Simulated protein-folding for discovering new drugs might be an example of a simulation that explores the space of possibilities and essentially pushes up to a higher level (protein-folding creates the 3-dimensional structure that makes complex life possible).

The middle level might represent some evolutionary simulation whereby new populations emerge that find a novel way to survive within a fitness landscape. On the higher level, we might place artificial intelligence, where basic rules of language, logic, perception, and internal modeling of the world might produce intelligent behavior.

In all cases, there is some level of emergence that takes the simulation to a higher level. The more emergence, the more the simulation is able to exhibit behaviors on the higher level. What is the best level of reality to create an artificial life game? And how much emergence is needed for it to be truly considered “artificial life”?
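Langton’s ant is a classic miniature of this kind of emergence: two trivial local rules at the lowest level, yet after roughly 10,000 steps the ant spontaneously builds a repeating “highway” – higher-level order that no rule mentions. A minimal sketch:

```python
# Langton's ant: turn right on a white cell, left on a black cell,
# flip the cell's color, step forward. That's the entire rule set.

def langtons_ant(steps):
    black = set()          # cells currently flipped to black
    x, y = 0, 0
    dx, dy = 0, 1          # start facing "up"
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))
            dx, dy = -dy, dx    # black cell: turn left
        else:
            black.add((x, y))
            dx, dy = dy, -dx    # white cell: turn right
        x, y = x + dx, y + dy
    return black

# After ~10,000 steps of apparent chaos the ant settles into its
# famous periodic "highway" -- emergent order no rule mentions.
cells = langtons_ant(11000)
```

Plotting the black cells after a run like this reveals the diagonal highway growing out of the initial chaotic blob – emergence bought for the price of two rules.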

Out Of Control

Can a mega-corporation like Electronic Arts give birth to a truly open-ended artificial life game? Alife is all about emergence. An Alife engineer or artist expects the unexpected. Surprise equals success. And the more unexpected, the better. Surprise, emergent novelty, and the unexpected – these are not easy things to manage…or to build a brand around – at least not in the traditional way.

Maybe the best way to make an artificial life game is to spread the primordial soup out into the world, and allow “crowdsourced evolution” of emergent lifeforms. OpenWorm comes to mind as a creative use of crowdsourcing.

What if we replaced traditional marketing with something that grows organically within the culture of users? What if, in addition to planting the seeds of evolvable creatures, we also planted the seeds of an emergent culture of users? This is not an unfamiliar kind of problem to many internet startups.

Are you a fan of artificial life-based games? God games? Simulations for emergence? What is your opinion of Spore, and the Sims games that preceded it?

This is a subject that I have personally been interested in for my entire career. I think there are still unanswered questions. And I also think that there is a new genre of artificial life game that is just waiting to be invented…

…or evolved in the wild.

Onward and Upward.

-Jeffrey