Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
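To make the pairing concrete, here is a minimal sketch of such a loop, with aggregated human ratings standing in as the fitness function. Everything here is hypothetical – the post doesn't specify a genome encoding or a rating mechanism – and the `human_fitness` stub returns a random score where real crowd ratings would go:

```python
import random

POP_SIZE = 20       # number of candidate "brains" alive at once (hypothetical)
GENOME_LEN = 64     # each genome is a flat list of neural-net weights

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1, scale=0.1):
    # Small Gaussian tweaks to a fraction of the weights.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def human_fitness(genome):
    # Stand-in for the crowd: in the proposal, millions of observers
    # would rate the "simulation" this genome generates. Random here.
    return random.random()

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    ranked = sorted(population, key=human_fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]   # human judgment is the selective force
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring
```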

Clarifications

Since there is so much discussion (and confusion) on AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every possible good idea, insight, and judgement just by digesting the constant data spew from humanity. But without some selective force (analogous to back-propagation and the other feedback mechanisms used in training AI), the AI cannot truly evolve; it needs an ecosystem of continual feedback.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that trade detail for overall impressions. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to a mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.
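One way to turn those pairwise choices into a usable selection signal is an Elo-style rating update, sketched below. This is purely illustrative – the post doesn't commit to any particular rating scheme, and all the names are hypothetical:

```python
def elo_update(rating_a, rating_b, a_preferred, k=32):
    # Standard Elo expectation: probability that A "wins" the comparison.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_preferred else 0.0
    return (rating_a + k * (score_a - expected_a),
            rating_b + k * ((1 - score_a) - (1 - expected_a)))

# Every new simulation starts at the same rating; thousands of casual
# "this blob is more interesting than that blob" judgments gradually
# sort the population, giving evolution a gradient to climb.
ratings = {"blob_a": 1000.0, "blob_b": 1000.0}
ratings["blob_a"], ratings["blob_b"] = elo_update(
    ratings["blob_a"], ratings["blob_b"], a_preferred=True)
```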

As the continual stream of raw data accumulates, the young AI will start to build impressions and common themes – much as Deep Dream does when it collects images, finds recurring themes, and starts riffing on them.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans, and eventually on its own. It also maintains a memory of bad decisions and failed experiments – which are all part of growing up.
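That memory could be as simple as an archive of rejected candidates that the search steers away from – a tabu-list idea, sketched here under the same hypothetical genome encoding as above (the post doesn't specify a mechanism):

```python
failed_archive = []   # genomes the human judges consistently rejected

def distance(a, b):
    # Euclidean distance between two weight vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def too_close_to_failure(genome, archive, threshold=0.5):
    # Skip candidates that resemble past failures instead of re-testing them.
    return any(distance(genome, bad) < threshold for bad in archive)

def remember_failure(genome):
    failed_archive.append(genome)
```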

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and plays parts of those simulations constantly as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start out pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering; the evolved brains of the most complex mammals naturally expect it. Our AI brain children are, after all, derived from mammalian brains. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.


3 thoughts on “Here’s one way to evolve an artificial intelligence”

  1. Hey Jeffrey, I guess it depends a lot on how we define AI.

    A full-scale experiment along these lines might well have profound results *if* there’s something profound about the data; if not, it’s already part of our daily lives. If we get people to edit language translations, we get something akin to Google Translate, for instance. Your idea seems to focus on images, and probably isn’t a million miles away from Google Images, where humans guide machine learning algorithms to tag the content of pictures. The result of methods like these is artificial intelligence, but in the phrase “coming alive for the first time” and the mother-child metaphor you hint at more, consciousness perhaps.

    A potentially more profound result might emerge if more senses and abilities were included in the model, for instance the audible output, language skills, visual acuity and motor skills of a robot (or even a virtual robot), and some of that is being studied already, at least with expert, low-volume feedback if not crowd-sourcing (and what a seductive project that would be for the average Jane/Joe!).

    I think it’s important in considering AI to remember the distinction between intelligence and consciousness. Consciousness seems likely to be at least partly, and perhaps inherently, an embodied function. Our modeling of self comes from interoceptive data, internal sense data from muscles and other internal organs (and, although we normally think of touch as an external perception, this is a constant stream of information about our body), and may also be closely tied to billions of years of survival imperative. We might never know exactly what consciousness entails, however clever we make machines, since we can’t even know whether another species is conscious (or, if we’re strict enough philosophically, another human being) except by inference.

    We will almost certainly build more and more sophisticated avatars, exhibiting more and more nuanced, apparently-conscious behaviour. Robots will tell us they’re conscious and be convincing, but we’ll know that this response can be traced back through meaningless piles of filtering events by humans, and therefore we could conclude that the robot’s “opinion” that it’s conscious is on a par with Google’s identification of a cat, or the face of a specific individual.

    Anil Seth expresses some of this well here https://www.youtube.com/watch?v=lyu7v7nWzfo

    Thanks for your very inspiring blog!

    • Thanks for the interesting feedback, lettersquash!

      I considered your comments and went back and added some clarifications, and also tweaked a few things while I was there. As you can see, I really enjoy thinking and writing about this topic. My background in art and genetic algorithm-based animation has certainly influenced my outlook. I appreciate the opinions and clarifications of others who enjoy this subject as well.

  2. There’s a good book on consciousness by Michael S. A. Graziano, “Consciousness and the Social Brain”. He suggests that awareness is a mental process that tracks a model of attention. In the same way that we model our sensory experiences in our heads, we also have a model of self. In evolution there would be a survival value to knowing what other animals are aware of, so that we can know whether the predator (or the prey) has noticed us, and what it plans to do next. And awareness of what other humans are aware of, and how they feel about it, enables us to transfer that same model to ourselves. Since Graziano describes awareness in terms of rich data being processed and associated with other data, it would seem possible to program awareness of self in a computerized brain.
