Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
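For the programmers out there, here is a toy sketch (in Python) of what that human-guided selection loop might look like. Everything in it is hypothetical: the genomes, the mutation scheme, and especially collect_human_votes(), which stands in for ratings gathered from millions of observers.

```python
import random

POPULATION_SIZE = 20
GENOME_LENGTH = 8

def random_genome():
    # A genome is just a list of numbers that parameterize a "simulation".
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def mutate(genome, rate=0.1):
    # Copy a genome with small random variations.
    return [gene + random.gauss(0.0, rate) for gene in genome]

def collect_human_votes(population):
    # Hypothetical stand-in: in the proposed system, human observers
    # would rate the "simulations" generated from each genome.
    return [random.random() for _ in population]

population = [random_genome() for _ in range(POPULATION_SIZE)]

for generation in range(100):
    votes = collect_human_votes(population)  # the Darwinian fitness force
    ranked = sorted(zip(votes, population), key=lambda pair: pair[0], reverse=True)
    survivors = [genome for _, genome in ranked[:POPULATION_SIZE // 2]]
    # The favorites survive; mutated copies fill out the next generation.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION_SIZE - len(survivors))]
```

Notice what is missing: there is no explicit objective function. The only fitness signal is human preference.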

Clarifications

Since there is so much discussion (and confusion) about AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every possible good idea, insight, and judgement just by digesting the constant data spew from humanity. But without some learning process (such as back-propagation or the other feedback mechanisms used in training AI), and without an ecosystem of continual feedback, the AI cannot truly learn.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that trade detail for overall impression. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to its mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.

As the continual stream of raw data accumulates, the AI will start to build impressions and common themes – much as Deep Dream does as it collects images, finds common themes, and starts riffing on those themes.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans and eventually on its own. It also maintains a memory of bad decisions and failed experiments – which are all a part of growing up.

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many, many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and plays parts of those simulations constantly as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of complex mammals naturally expect it, and our AI brain children are, after all, derived from mammalian brains. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

No Rafi. The brain is not a computer.

Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.


The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”

Who the fuck said computers can simulate the entire universe?

That is a huge assumption. It’s also wrong.

We need to always look closely at the assumptions that people use to build theories. If it could be proven that computers can simulate the entire universe, then this theory would be slightly easier to swallow.

By the way, a computer cannot simulate the entire universe because it would have to simulate itself simulating itself simulating itself.
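To make the regress concrete, here is a toy illustration (Python, purely rhetorical): a complete simulation of a universe that contains its own simulator must also simulate the simulator, which must simulate the simulator, and so on.

```python
def simulate_universe(depth=0):
    # ...simulate stars, planets, oceans, brains...
    # But the universe also contains the machine running this very
    # function, so a *complete* simulation must include the simulation:
    return simulate_universe(depth + 1)

try:
    simulate_universe()
except RecursionError:
    print("the simulator ran out of universe")
```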

The human brain is capable of computation, and that’s why humans are able to invent computers.

The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.

The Miracle of My Hippocampus – and other Situated Mental Organs

I’m not very good at organizing.

The pile of papers, files, receipts, and other stuff and shit accumulating on my desk at home has grown to huge proportions. So today I decided to put it all into several boxes and bring it to the co-working space – where I could spend the afternoon going through it and pulling the items apart. I’m in the middle of doing that now. Here’s a picture of my progress. I’m feeling fairly productive, actually.

Some items go into the trash bin; some go to recycling; most of them get separated into piles where they will be stashed away into a file cabinet after I get home. At the moment, I have a substantial number of mini-piles. These accumulate as I sift through the boxes and decide where to put the items.

Here’s the amazing thing: when I pull an item out of the box, say, a bill from Verizon, I am supposed to put that bill onto the Verizon pile, along with the other Verizon bills that I have pulled out. When this happens, my eye and mind automatically gravitate towards the area on the table where I have been putting the Verizon bills. I’m not entirely conscious of this gravitation to that area.

Gravity Fields in my Brain

What causes this gravitation? What is happening in my brain that causes me to look over to that area of the table? It seems that my brain is building a spatial map of categories for the various things I’m pulling out of the box. I am not aware of it, and this is amazing to me – I just instinctively look over to the area on the table with the pile of Verizon bills, and…et voilà – there it is.

Other things happen too. As this map takes shape in my mind (and on the table), priorities line up in my subconscious. New connections get made and old ones get revived. Rummaging through this box has a therapeutic effect.

The fact that my eye and mind know where to look on the table is really not such a miracle, actually. It’s just my brain doing its job. The brain has many maps – spatial, temporal, etc. – that help connect and organize domains of information. One part of the brain – the hippocampus – is associated with spatial memory.


User Interface Design, The Brain, Space, and Time

I could easily collect numerous examples of software user interfaces that do a poor job of tapping the innate power of our spatial brains. These problematic user interfaces invoke the classic bouts of confusion, frustration, undiscoverability, and steep learning curves that we bitch about when comparing software interfaces.

This is why I am a strong proponent of Body Language (see my article about body language in web site design) as a paradigm for user interaction design. Similar to the body language that we produce naturally when we are communicating face-to-face, user interfaces should be designed with the understanding that information is communicated in space and in time (situated in the world). There is great benefit for designers to have some understanding of this aspect of natural language.

Okay, back to my pile of papers: I am fascinated with my unconscious ability to locate these piles as I sift through my stuff. It reminds me of why I like to use the fingers of my hand to “store” a handful of information pieces. I can recall these items later once they have been stored in my fingers (the thumb is usually saved for the most important item).

Body Maps, Brain, and Memory


Last night I was walking with my friend Eddie (a fellow graduate of the MIT Media Lab, where the late Marvin Minsky taught). Eddie told me that he once heard Marvin telling people how he liked to remember the topics of an upcoming lecture: he would place the various topics onto his body parts.

…similar to the way the ancient Greeks learned to remember stuff.

During the lecture, Marvin would shift his focus to his left shoulder, his hand, his right index finger, etc., in order to recall various topics or concepts. Marvin was tapping the innate spatial organs in his brain to remember the key topics in his lecture.

My Extended BodyMap

My body. My home town. My bed. My shoes. My wife. My community. The piles in my home office. These things in my life all occupy a place in the world. And these places are mapped in my brain to events that have happened in the past – or that happen on a regular basis. My brain is the product of countless generations of Darwinian iteration over billions of years.

All of this happened in space and time – in ecologies, animal communities, among collaborative workspaces.

Even the things that have no implicit place and time (such as the many virtualized aspects of our lives on the internet)… even these things occupy a place and time in my mind.

Intelligence has a body. Information is situated.

Hail to Thee Oh Hippocampus. And all the venerated bodymaps. For you keep our flitting minds tethered to the world.

You offer guidance to bewildered designers – who seek the way – the way that has been forged over billions of years of intertwingled DNA formation…resulting in our spatially and temporally-situated brains.


We must not let the no-place, no-time, any-place, any-time quality of the internet deplete us of our natural spacetime mapping abilities. In the future, this might be seen as one of the greatest challenges of our current digital age.


Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher who is known for his work on the dangers of AI in the future. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe that there are many careless assumptions being made as far as what AI actually is, and what it will become.

Yea yea, there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “as soon as it works, no one calls it AI anymore.” Given how poorly-defined AI is – how its definition seems to morph so easily – it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible”….“Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines“, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “we were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.

“Want?”

Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late – and discounts it as anthropomorphizing AI.


And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI refer to many amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about “how” this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of someone’s head and placed on a table.


It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent” you would be making a mistake.

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped to popularize a trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And… surprise! Your body is soaking in these same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.

I wonder what jobs are on his list.


Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.


Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

Charles Darwin is often quoted as saying: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.” (He probably never actually said it, but the point stands.)

Is a bat smarter than a mouse? Bats have notoriously poor eyesight (dumber?), but their sense of echolocation is miraculous (smarter?).


Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single one-dimensional line is not just an oversimplification: by putting AI on the same line as human intelligence, Bostrom is committing anthropomorphism.

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.

WE HAVE ONLY OURSELVES TO FEAR BECAUSE WE ARE INSEPARABLE FROM OUR AI

We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smart phone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-fetched, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded-in to our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly-distributed, and highly-integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe that the future of AI will not be characterized by a frightening army of robots with agendas, we could potentially face a real threat: if military robots that are ordered to kill and destroy – and that use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, things could get ugly.

But with the exception of weapon-based AI that is housed in autonomous mobile robots, the future of AI will be mostly custodial, highly distributed, and integrated with our own lives; our clothes, houses, cars, and communications. We will not be able to separate it from ourselves – increasingly over time. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like – which is analogous to how the body of an animal does not look like the cells that it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But, these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.

Why Having a Tiny Brain Can Make You a Good Programmer

This post is not just for software developers. It is intended for a wider readership; we all encounter complexity in life, and we all strive to achieve goals, to grow, to become more resilient, and to be more efficient in our work.


Here’s what I’m finding: when I am dealing with a complicated software problem – a problem with a lot of moving parts and many dimensions – I can easily get overwhelmed by the complexity. Most programmers have this experience when a wicked problem arises, or when a nasty bug is found that requires delving into unknown territories or parts of the code that you’d rather just forget about.

Dealing with complexity is a fact of life in general. What’s a good rule of thumb?

Externalize

We can only hold so many variables in our minds at once. I have heard figures like “about 7”. But of course, this raises the question of what a “thing” is. Let’s just say that there are only so many threads of a conversation, only so many computer variables, only so many aspects of a system that can be held in the mind at once. It’s like juggling.

Most of us are not circus clowns.

Externalizing is a way of taking parts of a problem that you are working on and manifesting them in some physical place outside of your mind. This exercise can free the mind to explore a few variables at a time…and not drop all the balls.

Dude, Your Brain is Too Big

I have met several programmers in my career who have an uncanny ability to hold many variables in their heads at once. These guys are amazing. And they deserve all the respect that is often given to them. But here’s the problem:

People who can hold many things in their minds at once can write code, think about code, and refactor code in a way that most of us mortals could never do. While these people should be admired, they should not set the standard for how programming should be done. Their uncanny genius does not equate with good engineering.

This subject is touched upon in an article by Levi Notik, who says:

“It’s not hard to see why the popular perception of a programmer is one of some freak genius, sitting by a computer, frantically typing while keeping a million things in their head and making magic happen”

A common narrative is that these freak geniuses are “ideal for the job of programming”. In some cases, this may be true. But software has a tendency to become complex in the same way that rain water has a tendency to flow into rivers and eventually into the ocean. People with a high tolerance for complexity or a savant-like ability to hold many things in their minds are not (I contend) the agents of good software design.

I propose that people who cannot hold more than a few variables in their minds at once have something very valuable to contribute to the profession. We (and I’m talking about those of us with normal brains…but who are very resourceful) have built a lifetime’s worth of tools (mental, procedural, and physical) that allow us to build complexity – without limit – that has lasting value, and which other professionals can use. It’s about building robust tools that can outlive our brains – which gradually replace memory with wisdom.

My Fun Fun Fun Job Interview

I remember being in a job interview many years ago. The guy interviewing me was a young cocky brogrammer who was determined to show me how amazingly clever and cocky he could be, and to determine how amazingly clever and cocky I was willing to be. He asked me how I would write a certain algorithm (doesn’t matter what it was – your typical low-level routine).

Well, I was stumped. I had nothing. I knew I had written one of these algorithms before but I couldn’t for the life of me remember how I did it.

Why could I not remember how I had written the algorithm?

Because I did such a good job at writing it, testing it, and optimizing it, that I was able to wrap it up in a bow, tuck it away in a toolbox, and use it for eternity – and NEVER THINK ABOUT HOW IT WORKED ANY MORE.

Hello.

“Forgetting” is not only a trick we use to un-clutter our lives – it actually allows us to build complex, useful things.

Memory is way too precious to be cluttered with nuts and bolts.
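Here’s a trivial example (Python, hypothetical) of the kind of thing I mean: a routine you write once, test hard, wrap up in a bow, and never think about again. All that callers ever need is the name and the contract.

```python
def clamp(value, lowest, highest):
    """Return value limited to the range [lowest, highest]."""
    return max(lowest, min(value, highest))

# Years later, you reach into the toolbox without re-deriving a thing:
assert clamp(15, 0, 10) == 10
assert clamp(-3, 0, 10) == 0
assert clamp(7, 0, 10) == 7
```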

Consider a 25-year-old brogrammer who relies on his quick wit and multitaskery. He will not be the same brogrammer 25 years later. His nimble facility for details will gradually give way to wisdom – or at least one would hope.

I personally think it is tragic that programming is a profession dominated by young men with athletic synapses. (At least that is the case here in the San Francisco Bay area). The brains of these guys do not represent the brains of most of the people who use software.

Over time, the tools of software development will – out of necessity – rely less and less on athletic synapses and clever juggling, and more on plain good design.


IS “ARTIFICIAL LIFE GAME” AN OXYMORON?

(This is a re-posting from Self Animated Systems)


Artificial Life (Alife) began with a colorful collection of biologists, robot engineers, computer scientists, artists, and philosophers. It is a cross-disciplinary field, although many believe that biologists have gotten the upper-hand on the agendas of Alife. This highly-nuanced debate is alluded to in this article.

Games

What better way to get a feel for the magical phenomenon of life than through simulation games! (You might argue that spending time in nature is the best way to get a feel for life; I would suggest that a combination of time with nature and time with well-crafted simulations is a great way to get deep intuition. And I would also recommend reading great books like The Ancestor’s Tale :)

Simulation games can help build intuition on subjects like adaptation, evolution, symbiosis, inheritance, swarming behavior, food chains….the list goes on.

On the more abstract end of the spectrum are simulation-like interactive experiences involving semi-autonomous visual stuff (or sound) that generates novelty. Kinetic art that you can touch and influence – and whose lifelike dynamics you can witness – can be more than just aesthetically and intellectually stimulating.

These interactive experiences can also build intuition and insight about the underlying forces of nature that come together to oppose the direction of entropy (that ever-present tendency for things in the universe to decay).


On the less-abstract end of the spectrum, we have virtual pets and avatars (a subject I discussed in a keynote at VISIGRAPP).

“Hierarchy Hinders” –  Lesson from Spore

Will Wright, the designer of Spore, is a celebrated simulation-style game designer who introduced many Alife concepts in the “Sim” series of games. Many of us worried that his epic Spore would encounter some challenges, considering that Maxis had been acquired by Electronic Arts. The Sims was quite successful, but Spore fell short of expectations. Turns out there is a huge difference between building a digital dollhouse game and building a game about evolving lifeforms.

Also, mega-game corporations have their share of social hierarchy, with well-paid executives at the top and sweat shop animators and code monkeys at the bottom. Hierarchy (of any kind) is generally not friendly to artificial life.

For blockbuster games, there are expectations of reliable, somewhat repeatable behavior, highly-crafted game levels, player challenges, scoring, etc. Managing expectations for artificial life-based games is problematic. It’s also hard to market a game which is essentially a bunch of game-mechanics rolled into one. Each sub-game features a different “level of emergence” (see the graph below for reference). Spore presents several slices of emergent reality, with significant gaps in-between. Spore may have also suffered partly due to overhyped marketing.

Artificial Life is naturally and inherently unpredictable. It is close cousins with chaos theory, fractals, emergence, and uh…life itself.

Emergence

At the right is a graph I drew which shows how an Alife simulation (or any emergent system) creates novelty, creativity, adaptation, and emergent behavior. This emergence grows out of the base level inputs into the system. At the bottom are atoms, molecules, and bio-chemistry. Simulated protein-folding for discovering new drugs might be an example of a simulation that explores the space of possibilities and essentially pushes up to a higher level (protein-folding creates the 3-dimensional structure that makes complex life possible).

The middle level might represent some evolutionary simulation whereby new populations emerge that find a novel way to survive within a fitness landscape. On the higher level, we might place artificial intelligence, where basic rules of language, logic, perception, and internal modeling of the world might produce intelligent behavior.

In all cases, there is some level of emergence that takes the simulation to a higher level. The more emergence, the more the simulation is able to exhibit behaviors on the higher level. What is the best level of reality to create an artificial life game? And how much emergence is needed for it to be truly considered “artificial life”?
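To ground this a little, here is a minimal sketch (Python, with a made-up fitness landscape) of the middle level of the graph: a population evolving until it discovers a peak that nobody pointed it to.

```python
import math
import random

def fitness(x):
    # A bumpy, made-up landscape with local and global peaks.
    return math.sin(3.0 * x) + 0.5 * math.sin(7.0 * x)

population = [random.uniform(0.0, 2.0 * math.pi) for _ in range(50)]

for generation in range(200):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                       # selection
    population = [p + random.gauss(0.0, 0.05)   # mutation
                  for p in parents for _ in range(5)]

best = max(population, key=fitness)
print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}")
```

The “emergence” here is modest: the population climbs a peak that was never specified anywhere in the code.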

Out Of Control

Can a mega-corporation like Electronic Arts give birth to a truly open-ended artificial life game? Alife is all about emergence. An Alife engineer or artist expects the unexpected. Surprise equals success. And the more unexpected, the better. Surprise, emergent novelty, and the unexpected – these are not easy things to manage…or to build a brand around – at least not in the traditional way.

Maybe the best way to make an artificial life game is to spread the primordial soup out into the world, and allow “crowdsourced evolution” of emergent lifeforms. OpenWorm comes to mind as a creative use of crowdsourcing.

What if we replaced traditional marketing with something that grows organically within the culture of users? What if, in addition to planting the seeds of evolvable creatures, we also planted the seeds of an emergent culture of users? This is not an unfamiliar kind of problem for many internet startups.

Are you a fan of artificial life-based games? God games? Simulations for emergence? What is your opinion of Spore, and the Sims games that preceded it?

This is a subject that I have personally been interested in for my entire career. I think there are still unanswered questions. And I also think that there is a new genre of artificial game that is just waiting to be invented…

…or evolved in the wild.

Onward and Upward.

-Jeffrey

The Unicorn Myth

I was recently called a “unicorn” – a term being bandied about to describe people who have multiple skills.


Here’s a quote from http://unicornspeakeasy.com/

“Dear outsiders,

Silicon Valley calls us unicorns because they doubt the [successful, rigorous] existence of such a multidisciplinary artist-engineer, or designer-programmer, when in fact, we have always been here, we are excellent at practicing “both disciplines” in spite of the jack-of-all-trades myth, and oh yeah – we disagree with you that it’s 2 separate disciplines. Unicorns are influential innovators in ways industries at large cannot fathom. In a corporate labor division where Release Engineer is a separate role from Software Requirements Engineer, and Icon Designer is a separate role from Information Architect, unicorn disbelief is perfectly understandable. To further complicate things, many self-titled unicorns are actually just programmers with a photoshop habit, or designers who dabble with Processing. Know the difference.”

Here’s David Cole on “The Myth of the Myth of the Unicorn Designer”. He says:

“Design is already not a single skill.”


There are valid arguments on either side of the debate as to whether designers should be able to code. What do you think?

-Jeffrey

Programming Languages Need Nouns and Verbs

I created the following Grammar Lesson many years ago:

(image: grammar lesson)

Like many people my age, my first programming language was BASIC. Next I learned Pascal, which I found to be extremely expressive. Learning C was difficult, because it required me to be closer to the metal.

Graduating to C++ made a positive difference. Object-oriented programming affords ways to encapsulate the aspects of the code that are close to the metal, allowing one to ascend to higher levels of abstraction, and express the things that really matter (I realize many programmers would take issue with this – claiming that hardware matters a lot).

Having since learned Java, and then later…JavaScript, I have come to the opinion that the more like natural language I can make my code, the happier I am.

Opinions vary of course, and that’s a good thing. Many programmers don’t like verbosity. Opinions vary on strongly vs. weakly typed languages. The list goes on. It’s good to have different languages to accommodate differing work styles and technical needs.

But…

if you believe that artificial languages (i.e., programming languages) need to be organic, evolvable, plastic, adaptable, and expressive (like natural language, only precise and resistant to ambiguity and interpretation), what’s the right balance?

Should Programs Look Like Math? 

Should software programs be reduced to elegant, terse, math-like expressions, stripped of all fat and carbohydrates? Many math-happy coders would say yes. Some programmers prefer declarative languages over procedural languages. As you can probably guess, I prefer procedural languages.

Is software math or poetry? Is software machine or language?

I think it could – and should – be all of these.


Sarah Mei has an opinion. She says that Programming is Not Math.

Programming with Nouns and Verbs

First: let me just make a request of all programmers out there. When you are trying to come up with a name for a function, PLEASE include a verb. Functions DO things. Like any other kind of language, your code will grow in a healthy way within the ecology of human communicators if you make it appropriately expressive.
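A tiny example (Python, contrived) of the difference this makes:

```python
def data(records):             # noun only: what does this DO?
    ...

def normalization(records):    # still a noun: a thing, not an action
    ...

def normalize_scores(records):  # verb + noun: who does what to whom
    """Scale every score in records to the range 0..1."""
    top = max(record["score"] for record in records)
    return [{**record, "score": record["score"] / top} for record in records]
```

Only the last name tells the reader that something is being done, and to what.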

Don’t believe me? Wait until you’ve lived through several companies and watched a codebase try to survive through three generations of developers. Poorly-communicating software, put into the wrong hands, can set off a pathological chain of events, ending in ultimate demise. Healthy communication keeps marriages and friendships from breaking down. The same is true of software.

Many have pontificated on the subject of software having nouns and verbs. For instance, Matt’s Blog promotes programming with nouns and verbs.

And according to John MacIntyre, “Take your requirements and circle all the nouns, those are your classes. Then underline all the adjectives, those are your properties. Then highlight all your verbs, those are your methods”.
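Applied to a made-up one-line requirement, “A registered customer places an order,” MacIntyre’s recipe yields something like this sketch:

```python
# nouns: Customer, Order  -> classes
# adjective: registered   -> property
# verb: places            -> method

class Order:
    def __init__(self, item):
        self.item = item

class Customer:
    def __init__(self, name, registered=False):
        self.name = name
        self.registered = registered      # the adjective

    def place_order(self, item):          # the verb
        if not self.registered:
            raise ValueError(f"{self.name} is not registered")
        return Order(item)

alice = Customer("Alice", registered=True)
order = alice.place_order("field guide to unicorns")
```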

When I read code, I unconsciously look for the verbs and nouns to understand it.


When I can’t identify any nouns or verbs, when I can’t figure out “who” is doing “what” to “whom”, I become cranky, and prone to breaking things around me. Imagine having to read a novel where all the verbs look like nouns and all the nouns look like verbs. It would make you cranky, right?

The human brain is wired for nouns and verbs, and each is processed in a different cortical region.


There are two entities in the universe that use software:

(1) Computers, and (2) Humans.

Computers run software. Humans communicate with it.


-Jeffrey

Disappearing UI Elements – I Mean, WTF?

I recently noticed something while using software products by Google, Apple, and other trend-setters in user interface design. I’m talking about…

User interface elements that disappear

I don’t know about you, but I am just tickled pink when I walk into the kitchen to look for the can opener, and can’t find it. (I could have sworn it was in THIS drawer. Or….maybe it was in THAT drawer?)

Sometimes I come BACK to the previous drawer, and see it right under my nose…it is as if an invisible prankster were playing tricks on me.

Oh how it makes me giggle like a baby who is playing peekaboo with mommy.


Not really. I lied. It makes me irrational. It makes me crazy. It makes me want to dismember kittens.

Introducing: Apple’s New Disappearing Scrollbar!


According to DeSantis Briendel, “A clean and uncluttered visual experience is a laudable goal, but not at the expense of an intuitive and useful interface tool. Making a user hunt around for a hidden navigation element doesn’t seem very user friendly to us. We hope developers will pull back from the disappearing scrollbar brink, and save this humble but useful tool.”

It gets worse. If you have ever had to endure the process of customizing a YouTube channel, you may have discovered that the button to edit your channel is invisible until you move your mouse cursor over it.


I observed much gnashing of teeth on the internet about this issue.

One frustrated user on the Google Product Forums was looking for the gear icon in hopes of finding a way to edit his channel. To his remark that there is no gear icon, another user joked…

“That is because “the team” is changing the layout and operation of the site faster than most people change underwear.”

…which is a problem that I brought up in a previous blog about the arbitrariness of modern user interfaces.

There are those who love bringing up Minority Report – claiming that user interfaces will eventually go away, allowing us to speak naturally and communicate with gestures in thin air.


In Disappearing UI: You Are the Interface, the author describes (in utopian prose) a rosy future where interfaces will disappear, liberating us to interact with software naturally.

But what is “natural”?

What is natural for the human eye, brain, and hand is to see, feel, and hear the things that we want to interact with, to physically interact with them, and then to see, feel, and hear the results of that interaction. Because of physics and human nature, I don’t see this ever changing.

OK, sure. John Maeda’s first rule for simplicity is to REDUCE.

But, by “reduce”, I don’t think he meant…

play hide and seek with the edit button or make the scroll bar disappear while no one is looking.

Perhaps one reason Apple is playing the disappearing scroll bar trick is that they want us to start doing their two-finger swipe. That would be ok if everyone had already given up their mice. Not so fast, Apple!

Perhaps Apple and Google are slowly and gradually preparing us for a world where interfaces will dissolve away completely, eventually disappearing altogether, allowing us to be one with their software.

Seems to me that a few billion years of evolution should have some sway over how we prefer to interact with objects and information in the world.

I like to see and feel the knife peeling the skin off a potato. It’s not just aesthetics: it’s information.

I like seeing a person’s eyes when I’m talking to them. I like seeing the doorknobs in the room. I like knowing where the light switch is.


Apple, please stop playing hide-and-seek with my scroll bar.

-Jeffrey

The Tail Wagging the Brain

 


(This is chapter 8 of Virtual Body Language. Extra images have been added for this post. An earlier variation of this blog post was published here.)

The brain can rewire its neurons, growing new connections, well into old age. Marked recovery is often possible even in stroke victims, porn addicts, and obsessive-compulsive sufferers, several years after the onset of their conditions. Neural Plasticity is now proven to exist in adults, not just in babies and young children going through the critical learning stages of life. This subject has been popularized with books like The Brain that Changes Itself (Doidge 2007).

Like language grammar, music theory, and dance, visual language can be developed throughout life. As visual stimuli are taken in repeatedly, our brains reinforce neural networks for recognizing and interpreting them.

Some aspects of visual language are easily learned, and some may even be instinctual. Biophilia is the human aesthetic appreciation of biological form and all living systems (Kellert and Wilson 1993). The flip side of aesthetic biophilia is a readiness to learn disgust (of feces and certain smells) or fear (of large insects or snakes, for instance). Easy recognition of the shapes of branching trees, puffy clouds, flowing water, plump fruit, and subtle differences in the shades of green and red, have laid a foundation for human aesthetic response in all things, natural or manufactured. Our biophilia is the foundation of much of our visual design, even in areas as urban as abstract painting, web page layout, and office furniture design.

The Face

We all have visual language skills that make us especially sensitive to eyebrow motion, eye contact, head orientation, and mouth shape. Our sensitivity to facial expression may even influence other, more abstract forms of visual language, making us responsive to some visual signals more than others—because those face-specific sensitivities gave us an evolutionary advantage as a species so dependent on social signaling. This may have been reinforced by sexual selection.

Even visual features as small as the pupil of an eye contribute to the emotional reading of a face—usually unconsciously. Perhaps the evolved sensitivity to small black dots as information-givers contributed to our subsequent invention of small-yet-critical elements in typographical alphabets.


We see faces in clouds, trees, and spaghetti. Donald Norman wrote a book called, Turn Signals are the Facial Expressions of Automobiles (1992). Recent studies using fMRI scans show that car aficionados use the same brain modules to recognize cars that people in general use to recognize faces (Gauthier et al. 2003).

Something about the face appears to be baked into the brain.


Dog Smiling

We smile with our faces. Dogs smile with their tails. The next time you see a Doberman Pinscher with his ears clipped and his tail removed (docked), imagine yourself with your smile muscles frozen dead and your eyebrows shaved off.

Like humans perceiving human smiles, it is likely that the canine brain has neural wiring highly tuned to recognize and process a certain kind of visual energy: an oscillating motion of a linear shape occurring near the backside of another dog. The photoreceptors go to work doing edge detection and motion detection so they can send efficient signals to the brain for rapid space-time pattern recognition.
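As a loose analogy in code (Python; not neuroscience, just an illustration), a motion detector can recognize “wagging” by differencing successive readings and counting direction reversals, responding to change rather than to any static shape:

```python
import math

def tail_angle(t, wagging):
    # Hypothetical sensor reading: the tail's angle at time t.
    return math.sin(2.0 * t) if wagging else 0.1

def looks_like_wagging(samples):
    diffs = [b - a for a, b in zip(samples, samples[1:])]  # motion detection
    reversals = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    return reversals > 3   # oscillation shows up as many direction changes

frames = [tail_angle(0.1 * t, wagging=True) for t in range(100)]
print(looks_like_wagging(frames))  # True: the signal oscillates
```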


Tail wagging is apparently a learned behavior: puppies don’t wag until they are one or two months old. Recent findings in dog tail wagging show that a wagger will favor the right side as a result of feeling fundamentally positive about something, and will favor the left side when there are negative overtones in feeling (Quaranta et al. 2007). This is not something humans have commonly known until recently. Could it be that left-right wagging asymmetry has always been a subtle part of canine-to-canine body language?

I often watch intently as my terrier mix, Higgs, encounters a new dog in the park whom he has never met. The two dogs approach each other—often moving very slowly and cautiously. If the other dog has its tail down between its legs and its ears and head held down, and is frequently glancing to the side to avert eye contact, this generally means it is afraid, shy, or intimidated. If its tail is sticking straight up (and not wagging) and if its ears are perked up and the hair on the back is standing on end, it could mean trouble. But if the new stranger is wagging its tail, this is a pretty good sign that things are going to be just fine. Then a new phase of dog body language takes over. If Higgs is in a playful mood, he’ll start a series of quick motions, squatting his chest down to assume the “play bow”, jumping and stopping suddenly, and watching the other dog closely (probably to keep an eye on the tail, among other things). If Higgs is successful, the other dog will accept his invitation, and they will start running around the park, chasing each other, and having a grand old time. It is such a joy to watch dogs at play.


So here is a question: assuming dogs have an instinctive—or easily-learned—ability to process the body language of other dogs, like wagging tails, can the same occur in humans for processing canine body language? Did I have to learn to read canine body language from scratch, or was I born with this ability? I am, after all, a member of a species that has been co-evolving with canines for more than ten thousand years, sometimes in deeply symbiotic relationships. So, perhaps I, Homo Sapiens, already have a biophilic predisposition to canine body language.

Recent studies in using dogs as therapy for helping children with autism have shown remarkable results. These children become more socially and emotionally connected. According to autistic author Temple Grandin, this is because animals, like people with autism, do not experience ambivalent emotion; their emotions are pure and direct (Grandin, Johnson 2005). Perhaps canines helped to modulate, buffer, and filter human emotions throughout our symbiotic evolution. They may have helped to offset a tendency towards neurosis and cognitive chaos.

Vestigial Response

Having a dog in the family provides a continual reminder of my affiliation with canines, not just as companions, but as Earthly relatives. On a few rare occasions while sitting quietly at home working on something, I remember hearing an unfamiliar noise somewhere in the house. But before I even knew consciously that I was hearing anything, I felt a vague tug at my ears; it was not an entirely comfortable feeling. This may have happened at other times, but I probably didn’t notice.

I later learned that this is a leftover vestigial response that we inherited from our ancestors.

The ability for some people to wiggle their ears is due to muscles that are basically useless in humans, but were once used by our ancestors to aim their ears toward a sound. Remembering the feeling of that tug on my ears gives me a feeling of connection to my ancestors’ physical experiences.

In a moment we will consider how these primal vestiges might be coming back into modern currency. But I’m still in a meandering, storytelling, pipe-smoking kind of mood, so hang with me just a bit longer and then we’ll get back to the subject of avatars.

This vestigial response is called ear perking. It is shared by many of our living mammalian relatives, including cats. I remember once hanging out and playing mind-games with a cat. I was sitting on a couch, and the cat was sitting in the middle of the floor, looking away, pretending to ignore me (but of course—it’s a cat).


I was trying to get the cat to look at me or to acknowledge me in some way. I called its name, I hissed, clicked my tongue, and clapped my hands. Nothing. I scratched the upholstery on the couch. Nothing. Then I tore a small piece of paper from my notebook, crumpled it into a tiny ball, and discreetly tossed it onto the floor, behind the cat, outside of its field of view. The tiny sound of the crumpled ball of paper falling to the floor caused one of the cat’s ears to swivel around and aim towards the sound. I jumped up and yelled, “Hah—got you!” The cat’s higher brain quickly tugged at its ear, pulling it back into place, and the cat continued to serenely look off into the distance, as if nothing had ever happened.


Cat body language is harder to read than dog body language. Perhaps that’s by design (I mean…evolution). Dogs don’t seem to have the same talents of reserve and constraint. Dog expressions just flop out in front of you. And their vocabulary is quite rich. Most world languages have several dog-related words and phrases. We easily learn to recognize the body language of dogs, and even less familiar social animals. Certain forms of body language are processed by the brain more easily than others. A baby learns to respond to its mother’s smile very soon after birth. Learning to read a dog’s tail motions may not be so instinctive, but the plastic brain of Homo Sapiens is ready to adapt. Learning to read the signs that your pet turtle is hungry would be much harder (I assume). At the extreme: learning to read the signals from an enemy character in a complicated computer game may be a skill only for a select few elite players.

This I’m sure of: if we had tails, we’d be wagging them for our dogs when they are being good (and for that matter, we’d also be wagging them for each other :D ). It would not be easy to add a tail to the base of a human spine and wire up the nerves and muscles. But if it could be done, our brains would easily and happily adapt, employing some appropriate system of neurons to the purpose of wagging the tail—perhaps re-adapting a system of neurons normally dedicated to doing The Twist. While it may not be easy to adapt our bodies to acquire such organs of expression, our brains can easily adapt. And that’s where avatars come in.

Furries

The Furry Species has proliferated in Second Life. Furry Fandom is a subculture featuring fictional anthropomorphic animal characters with human personalities and human-like attributes. Furry fandom already had a virtual head start—people were role-playing with their online “fursonas” before Second Life existed (witness FurryMUCK, a user-extendable online text-based role-playing game started in 1990). Furries have animal heads, tails, and other such features.

While many Second Life avatars have true animal forms (such as the “ferals” and the “tinies”), many are anthropomorphic: walking upright with human postures and gaits. This anthropomorphism has many creative expressions, ranging from quaint and cute to mythical, scary, and kinky.

Furry anthropomorphism in Second Life is appropriate in one sense: there is no way to directly change the Second Life avatar into a non-human form. The underlying avatar skeleton software is based solely on the upright-walking human form. I know this fact in an intimate way, because I spent several months digging into the code in an attempt to reconstitute the avatar skeleton to allow open-ended morphologies (quadrupeds, etc.). This proved to be difficult. And no surprise: the humanoid avatar morphology code, and all its associated animations—procedural and otherwise—had been in place for several years. It serves a similar purpose to a group of genes in our DNA called Hox genes.


They are baked deep into our genetic structure, and are critical to the formation of a body plan. Hox genes specify the overall structure of the body and are critical in embryonic development when the segmentation and placement of limbs and other body parts are first established. After struggling to override the effects of the Second Life “avatar Hox genes”, I concluded that I could not do this on my own. It was a strategic surgical process that many core Linden Lab engineers would have to perform. Evolution in virtual worlds happens fast. It’s hard to go back to an earlier stage and start over.
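Here is an illustrative sketch (Python; emphatically not Linden Lab’s actual code) of why a baked-in body plan is so hard to override: once a fixed humanoid hierarchy is the single source of truth, every animation in the system comes to depend on exactly those joints.

```python
# The "Hox genes" of a hypothetical avatar system: a fixed humanoid
# joint hierarchy, mapping each joint to its parent.
HUMANOID_SKELETON = {
    "pelvis":    None,
    "torso":     "pelvis",
    "head":      "torso",
    "left_arm":  "torso",
    "right_arm": "torso",
    "left_leg":  "pelvis",
    "right_leg": "pelvis",
}

def validate_animation(joints):
    # Every animation, procedural or hand-made, assumes the humanoid plan.
    unknown = [j for j in joints if j not in HUMANOID_SKELETON]
    if unknown:
        raise ValueError(f"no such joints in the body plan: {unknown}")

validate_animation(["head", "left_arm"])    # fine
# validate_animation(["tail", "hind_leg"])  # would raise: no tail in this plan
```

A quadruped or a dragon needs a different dictionary, and changing that dictionary breaks every animation built on the old one.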

Despite the anthropomorphism of the Second Life avatar (or perhaps because of this constraint), the Linden scripting language (LSL) and other customizing abilities have provided a means for some remarkably creative workarounds, including packing the avatar geometry into a small compact form called a “meatball”, and then surrounding it with a custom 3D object, such as a robot or a dragon or some abstract form, complete with responsive animations and particle system effects.

Perhaps furry residents prefer having this constraint of anthropomorphism; it fits with their nature as hybrids. Some furry residents have customized tail animations and use them to express mood and intent. I wouldn’t be surprised if those who have been furries for many years have dreams in which they are living and expressing in their Furry bodies—communicating with tails, ears, and all. Most would agree that customizing a Furry with a wagging-tail animation is a lot easier than undergoing surgery to attach a physical tail.

But as far as the brain is concerned, it may not make a difference.

Where Does my Virtual Body Live?

Virtual reality is not manifest in computer chips, computer screens, headsets, keyboards, mice, joysticks, or head-mounted displays. Nor does it live in running software. Virtual reality manifests in the brain, and in the collective brains of societies. The blurring of real and virtual experiences is a theme that Jeremy Bailenson and his team at Stanford’s Virtual Human Interaction Lab have been researching.


Virtual environments are now being used for research in social sciences, as well as for the treatment of many brain disorders. An amputee who has suffered from phantom pain for years can be cured of the pain through a disciplined and focused regimen of rewiring his or her body image to effectively “amputate” the phantom limb. Immersive virtual reality has been used recently to treat this problem (Murray et al. 2007). Previous techniques using physical mirrors have been replaced with sophisticated simulations that allow more controlled settings and adjustments. When the brain’s body image gets tweaked away from reality too far, psychological problems ensue, such as anorexia. Having so many super-thin sex-symbol avatars may not be helping the situation. On the other hand, virtual reality is being used in research to treat this and other body image-related disorders.

A creative kind of short-term body image shifting is nothing new to animators, actors, and puppeteers, who routinely tweak their own body images in order to express like ostriches or hummingbirds or dogs. When I was working for Brad deGraf, one of the early pioneers in real-time character animation, I would frequent the offices of Protozoa, a game company he founded in the mid-90s.

I was hired to develop a tool for modeling 3D trees which were to be used in a game called “Squeezils”, featuring animals that scurry among the branches. I remember the first time I went to Protozoa. I was waiting in the lobby to meet Brad, and I noticed a video monitor at the other end of the room. An animation was playing that had a couple of 3D characters milling about. One of them was a crab-like cartoon character with huge claws, and the other was a Big Bird-like character with a very long neck. After gazing at these characters for a while, it occurred to me that both of these characters were moving in coordination—as if they were both being puppeteered by the same person. My curiosity got the best of me and I started wandering around the offices, looking for the puppet master. I peeked around the corner. In the next room were a couple of motion capture experts, testing their hardware. On the stage was a guy wearing motion capture gear. When his arms moved, so did the huge claws of the crab—and so did the wimpy wings of the tall bird. When his head moved, the eyes of the crab looked around, and the bird’s head moved around on top of its long neck.

Brad deGraf and puppeteer/animator Emre Yilmaz call this "…performance animation…a new kind of jazz. Also known as digital puppetry…it brings characters to life, i.e. 'animates' them, through real-time control of the three-dimensional computer renderings, enabled by fast graphics computers, live motion sampling, and smart software" (deGraf and Yilmaz 1999). Applying human-sourced motion to exaggerated cartoon forms stimulates the human imagination: "motion capture" escapes its negative association with droll, unimaginative, literal recording. It inspires the human controller to think more like a puppeteer than an actor. Puppeteering is more out-of-body than acting: anything can become a puppet (a sock, a salt shaker, a rubber hose).

deGraf and Yilmaz make this out-of-body transference possible by "re-proportioning" the data stream from the human controller to fit non-human anatomies.
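Here is a toy sketch of what that re-proportioning might amount to in code. Everything in it is invented for illustration: the joint names, the gain-and-offset mapping, and the single mocap frame driving a non-human puppet.

```python
# Hypothetical sketch: "re-proportioning" one human mocap stream to fit
# a non-human anatomy. Joint names, gains, and offsets are invented.

HUMAN_TO_CRAB = {
    # target joint   (source joint,  gain, offset in radians)
    "left_claw":    ("left_wrist",   2.5,  0.0),  # exaggerate wrist motion
    "right_claw":   ("right_wrist",  2.5,  0.0),
    "eye_stalks":   ("head",         1.2, -0.1),  # the head drives the eyes
}

def retarget(human_pose: dict, mapping: dict) -> dict:
    """Map human joint angles onto a non-human skeleton, scaled per joint."""
    return {target: human_pose[source] * gain + offset
            for target, (source, gain, offset) in mapping.items()}

# One frame from the performer can drive the crab and the long-necked bird
# simultaneously, each through its own mapping table.
frame = {"left_wrist": 0.3, "right_wrist": -0.2, "head": 0.1}
print(retarget(frame, HUMAN_TO_CRAB))
```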


Research by Nick Yee found that people’s behaviors change as a result of having different virtual representations, a phenomenon he calls the Proteus Effect (Yee 2007). Artist Micha Cardenas spent 365 hours continuously as a dragon in Second Life. She employed a Vicon motion capture system to translate her motions into the virtual world. This project, called “Becoming Dragon”, explored the limits of body modification, “challenging” the one-year transition requirement that transgender people face before gender confirmation surgery. Cardenas told me, “The dragon as a figure of the shapeshifter was important to me to help consider the idea of permanent transition, rejecting any simple conception of identity as tied to a single body at a single moment, and instead reflecting on the process of learning a new way of moving, walking, talking and how that breaks down any possible concept of an original or natural state for the body to be in” (Cardenas 2010).

With this performance, Cardenas wanted to explore not only the issues surrounding gender transformation, but the larger question of how we experience our bodies, and what it means to inhabit the body of another being—real or simulated. During this 365-hour art-performance, the lines between real and virtual progressively blurred for Cardenas, and she found herself "thinking" in the body of a dragon. What interests me is this: how did Micha's brain adapt to be able to "think" in a different body? What happened to her body image?

The Homunculus

Dr. Wilder Penfield was operating on a patient's brain to relieve symptoms of epilepsy. In the process he made a remarkable discovery: each part of the body is associated with a specific region of the brain. He had discovered the homunculus: a map of the body in the brain. The homunculus is an invisible cartoon character; it can only be "seen" by carefully probing regions of the cortex and asking the patient what he or she is feeling, or by watching different parts of the body twitch in response. Most interesting is the fact that some parts of the homunculus are much larger than others, totally out of proportion with normal human anatomy. There are two primary homunculi (sensory and motor), and their distorted proportions correspond to how much of the brain is dedicated to each region of the body.


For instance, eyes, lips, tongue, and hands are proportionately large, whereas the skull and thighs are proportionately small. I would not want to encounter a homunculus while walking down the street or hanging out at a party—homunculi are not pretty—in fact, I find them quite frightening. Indeed they have cropped up in haunting ways throughout the history of literature, science, and philosophy.

But happily, for purposes of my research, any homunculus I encounter would be very good at nonverbal communication, because eyes, mouth, and hands are major expressive organs of the body. As a general rule, the parts of our body that receive the most information are also the parts that give the most. What would a “concert pianist homunculus” look like? Huge fingers. How about a “soccer player homunculus”? Gargantuan legs and tiny arms.


The Homuncular Avatar

Avatar code etches embodiment into virtual space. A "communication homunculus" was sitting on my shoulder while I was working at There.com and arguing with some computer graphics guys about how to engineer the avatar. Chuck Clanton and I were both advocates for dedicating more polygons and more procedural animation capability to the avatar's hands and face. But polygons were expensive to render (at least back then), and procedural animation takes a lot of engineering work. Consider this: to properly animate two articulated avatar hands, you need at least twenty extra joints, on top of the roughly twenty joints in a typical avatar animation skeleton. That doubles the joint count.

The argument for full hand articulation was as follows: just as the cortex dedicates disproportionate area to the communication-intensive parts of the body, socially-oriented avatars should dedicate ample resources to the face and hands.
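To make that joint budget concrete, here is a back-of-the-envelope sketch. The joint names and the two-knuckles-per-finger simplification are my own assumptions, not There.com's actual rig:

```python
# Rough joint budget for an avatar skeleton (illustrative, not There.com's).

BODY_JOINTS = ["root", "pelvis", "spine", "chest", "neck", "head",
               "l_shoulder", "l_elbow", "l_wrist",
               "r_shoulder", "r_elbow", "r_wrist",
               "l_hip", "l_knee", "l_ankle", "l_foot",
               "r_hip", "r_knee", "r_ankle", "r_foot"]   # ~20 joints

# Even a minimal two-knuckle finger model adds twenty more joints:
HAND_JOINTS = [f"{side}_{finger}_{knuckle}"
               for side in ("l", "r")
               for finger in ("thumb", "index", "middle", "ring", "pinky")
               for knuckle in ("base", "mid")]           # 2 x 5 x 2 = 20

print(len(BODY_JOINTS), "body joints,", len(HAND_JOINTS), "extra hand joints")
```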

When the developers of Traveler were faced with the problem of how to squeeze the most out of the few polygons they could render on screen at any given time, they decided to make avatars just floating heads, because their world centered on vocal communication (telephone meets avatar). The developers of ActiveWorlds, facing a similar polygonal predicament, chose whole avatar bodies (quite clunky by today's standards).

These kinds of choices determine where and how human expression will manifest.

Non-Human Avatars

Avatar body language does not have to be a direct prosthetic to our corporeal expression. It can extend beyond the human form; this theme has been a part of avatar lore since the very beginning. What would a non-human body language alphabet imply? That our body language alphabet can (and, I would claim, already has begun to) include semantic units, attributes, descriptors, and grammars that encompass a superset of human form and behavior.
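As a thought experiment, one "letter" of such a superset alphabet might be encoded like the sketch below. The fields and channel names are hypothetical, meant only to show how non-human channels slot in beside human ones:

```python
from dataclasses import dataclass

@dataclass
class BodyLanguageUnit:
    channel: str      # "head", "hands", "tail", "tentacles", "skin_texture"...
    action: str       # "nod", "wag", "flare", "ripple"...
    intensity: float  # 0.0 (subtle) .. 1.0 (emphatic)
    duration: float   # seconds; a grammar would sequence and overlap units

nod    = BodyLanguageUnit("head", "nod", 0.8, 1.5)             # human-possible
wag    = BodyLanguageUnit("tail", "wag", 1.0, 3.0)             # canine
ripple = BodyLanguageUnit("skin_texture", "ripple", 0.5, 2.0)  # cephalopod
```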

Consider the pentadactyl morphology of vertebrate limbs: the same five-digit limb plan underlies wings, flippers, paws, and hands. This has implications for a common underlying code used to express meta-human forms and motions.


A set of parameters (bone lengths, angle offsets, motor-control attributes, and so on) can be specified, along with gross morphological settings (e.g., four limbs and a tail; six limbs and no head; no limbs and no head). Once these parameters are in the system to accommodate the various locomotion behaviors and forms of body language, they can be manipulated as genes, using interactive evolution interfaces or genetic algorithms, or simply tweaked directly in a customization interface.
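A minimal sketch of that genes-and-tweaking idea follows. The parameter names, ranges, and mutation scheme are illustrative assumptions, not any actual avatar system's schema (and gross settings like limb count would be discrete in practice):

```python
import random

# Morphology-as-genome: each gene is a parameter with an allowed range.
GENOME_TEMPLATE = {
    "limb_count":   (0.0, 8.0),   # gross morphology (discrete in practice)
    "tail_length":  (0.0, 2.0),   # 0 = no tail
    "bone_length":  (0.2, 3.0),   # relative scale
    "angle_offset": (-1.0, 1.0),  # radians
    "gait_rate":    (0.5, 4.0),   # motor control: strides per second
}

def random_genome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in GENOME_TEMPLATE.items()}

def mutate(genome, rate=0.3):
    """Jitter some genes, clamped to their allowed ranges."""
    child = dict(genome)
    for key, (lo, hi) in GENOME_TEMPLATE.items():
        if random.random() < rate:
            child[key] = min(hi, max(lo, child[key] + random.gauss(0, (hi - lo) * 0.1)))
    return child

# Interactive evolution: a human picks the most appealing creature each round,
# and that pick parents the next litter.
parent = random_genome()
litter = [mutate(parent) for _ in range(6)]
```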

But this morphological space of variation doesn’t need to stop at the vertebrates. We’ve seen avatars in the form of fishes, floating eyeballs, cartoon characters, abstract designs—you name it. Artists, in the spirit of performance artist Stelarc, are exploring the expressive possibilities of avatars and remote embodiment. Micha Cardenas, Max Moswitzer, Jeremy Owen Turner and others articulate the very aesthetics of embodiment and avatarhood—the entire possible expression-space. What are the possibilities of having a visual (or even aural) manifestation of yourself in an alternate reality? As I mentioned before, my Second Life avatar is a non-humanoid creature, consisting of a cube with tentacles.


On each side of the cube are fractals based on an image of my face. This cube floats in space where my head would normally be. Attached to the bottom of the cube are several green tentacles that hang like those of a jellyfish. This avatar was built by JoannaTrail Blazer, based on my design. In the illustration, Joanna’s avatar is shown next to mine.

I chose a non-human form for a few reasons. One is that I prefer to avoid the uncanny valley, and will go to extremes in avoiding droll realism, employing instead the visual tools of abstraction and symbolism. By not trying to replicate the image of a real human, I can sidestep the problem of my avatar "not looking like me" or "not looking right". My non-human avatar also allows me to explore the realm of non-human body language. On the one hand, I can express yes and no by triggering animations that cause the cube to oscillate like a normal head. But I can also express happiness or excitement by triggering an animation that causes the tentacles to flare out and oscillate, or to roll sensually like an inverted bouquet of cattails. Negative emotions can be represented by causing the tentacles to droop limply (reinforced by having the cube-head slump downward). While the cube-head mimics normal human head motions, the tentacles do not correspond to any body language that I am physically capable of generating. They can tap other sources, like animal movement and basic concepts like energy and gravity, expanding my capacity for expression.
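For illustration, the trigger logic might look like the sketch below. The animation names are hypothetical; this is not actual Second Life or LSL code, just the shape of the mapping:

```python
# Hypothetical emotion-to-animation table for a cube-and-tentacles avatar.
EMOTION_ANIMATIONS = {
    "yes":        [("cube", "nod")],                      # human-like head motion
    "no":         [("cube", "shake")],
    "happiness":  [("tentacles", "flare_and_oscillate")], # no human equivalent
    "excitement": [("tentacles", "roll_like_cattails")],
    "sadness":    [("tentacles", "droop"), ("cube", "slump_downward")],
}

def express(emotion: str):
    """Trigger every animation associated with an emotion."""
    for body_part, animation in EMOTION_ANIMATIONS.get(emotion, []):
        print(f"play '{animation}' on {body_part}")

express("sadness")   # tentacles droop while the cube-head slumps
```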

Virtual Dogs

I want to come back to the topic of canine body language now, but this time, I'd like to discuss some of the ways that sheer dogginess in the gestalt can be expressed in games and virtual worlds. The canine species has made many appearances throughout the history of animation research, games, and virtual worlds. Rob Fulop, of the '90s game company PF Magic, created a series of games based on characters made of spheres, including the game "Dogz". Bruce Blumberg of the MIT Media Lab developed a virtual interactive dog AI, and is also an active dog trainer. Dogs are a great subject for AI: they are easier to simulate than humans, and they are highly emotional, interactive, and engaging.

While prototyping There.com, Will and I developed a dog that would chase a virtual Frisbee tossed by your avatar (throwing involved a mouse gesture plus hitting the space key to release the Frisbee at the right time). Since I was interested in the "essence of dog energy", I decided to focus not on the graphical representation of the dog so much as its motion, sound, and overall behavior. So, for this prototype, the dog had no legs: just a sphere for a body and a sphere for a head (each rendered as a toon-shaded circle: flat coloring with an outline). These spheres could move slightly relative to each other, so that the head had a little bounce to it when the dog jumped.

The eyes were rendered as two black cartoon dots, and they would disappear when the dog blinked. Two ears, a tail, and a tongue were programmed, because they are expressive components. The ears were animated using forward dynamics, so they flopped around when the dog moved, and hung downward slightly from the force of gravity. I programmed some simple logic to make an oscillating, “panting” force in the tongue which would increase in frequency and force when the dog was especially active, tired, or nervous. I also programmed “ear perkiness” which would cause the ears to aim upwards (still with a little bit of droop) whenever a nearby avatar produced a chat that included the dog’s name. I programmed the dog to change its mood when a nearby avatar produced the chats “good dog” or “bad dog”. And this was just the beginning. Later, I added more AI allowing the dog to learn to recognize chat words, and to bond to certain avatars.
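A rough sketch of that expressive logic appears below. It is reconstructed from the description above, not from the original prototype; the class, names, and constants are mine:

```python
import math

class PrototypeDog:
    """Legless There.com-style dog: ears, tail, and tongue do the talking."""

    def __init__(self, name):
        self.name = name
        self.activity = 0.2   # 0 = resting .. 1 = frantic
        self.mood = 0.5       # 0 = bad-dog blues .. 1 = ecstatic
        self.ear_perk = 0.0   # 0 = drooping .. 1 = fully perked

    def tongue_angle(self, t):
        """Panting: an oscillator whose frequency and force track activity."""
        frequency = 2.0 + 6.0 * self.activity   # Hz
        amplitude = 0.1 + 0.4 * self.activity
        return amplitude * math.sin(2 * math.pi * frequency * t)

    def hear_chat(self, text):
        """Perk the ears on the dog's name; shift mood on praise or scolding."""
        text = text.lower()
        if self.name.lower() in text:
            self.ear_perk = 0.9          # aim upward, with a little droop left
        if "good dog" in text:
            self.mood = min(1.0, self.mood + 0.2)
        elif "bad dog" in text:
            self.mood = max(0.0, self.mood - 0.2)
```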

Despite the lack of legs and its utterly crude rendering, this dog elicited remarkable puppy-like responses in the people who watched it or played with it. The implication is that what makes a dog a dog is not merely the way it looks, but the way it acts: its essence as a distinctly canine space-time energy event. Recall the exaggerated proportions of the human communication homunculus: for this dog, the ears, tail, and tongue constituted the vast majority of the computational processing. That was on purpose; this dog was intended as a communicator above all else. Consider the visual language of tail wagging that I brought up earlier, one that humans (especially dog owners) have incorporated into their unconscious vocabularies. What lessons might we take from wag semiotics, as applied to the art and science of wagging on the internet?

Tail Wagging on the Internet

In the There.com dog, the tail was a critical indicator of mood at any given moment: rising when the dog was alert or aroused, drooping when it was sad or afraid, and wagging when it was happy. Take wagging as an example. The body language information packet for tail wagging consisted of a single Boolean value sent from the dog's Brain to the dog's Animated Body, representing simply "start wagging" or "stop wagging". It is important to note that the dog's Animated Body is constituted on clients (the computers sitting in front of users), whereas the dog's Brain resides on servers (arrays of big honkin' computers in a data center that manage the "shared reality" of all users). The reason for this mind/body separation is to ensure that body language messages (as well as the dog's overall emotional state, physical location and orientation, and other aspects of its dynamic state) are truthfully conveyed to all users.

Clients are subject to lagging animation frame rates, dropped internet messages, and other issues. But all users whose avatars are hanging out in the part of the virtual world where the dog is need to see the same behavior; they are all "seeing the same dog", virtually speaking.


It would be unnecessary (and expensive in terms of internet traffic) for the server to send individual wags several times a second to all the clients. Each client's Animated Body code is perfectly capable of performing this repetitive animation. And because of different rendering speeds among clients, lag times, and so on, the wagging tail on your client might be moving left-right-left-right while the wagging tail on my client is moving right-left-right-left. In other words, they might be out of phase, or wagging at slightly different speeds. These slight differences have almost no effect on the reading of the wag: "I am wagging my tail" is the point. That's a Boolean message: one bit. The reason I am laboring over this point harkens back to the chapter on a Body Language Alphabet: a data-compressed message, easy to zip through the internet, helps virtual worlds run smoothly. It is also inspired by biosemiotics: Mother Nature's efficient message-passing protocol.
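A minimal sketch of this one-bit protocol follows; the class and message names are my own, not There.com's actual code. The Brain broadcasts only state changes, and each client synthesizes the repetitive motion locally:

```python
import math, time

class DogBody:
    """Client-side Animated Body: turns one bit into continuous motion."""
    def __init__(self, wag_hz=3.0):
        self.wagging = False
        self.wag_hz = wag_hz

    def on_message(self, msg):
        self.wagging = msg["wag"]        # the one-bit body language packet

    def tail_angle(self):
        # Clients may be out of phase with one another; the reading of
        # the wag ("I am wagging my tail") is unaffected.
        if not self.wagging:
            return 0.0
        return 0.5 * math.sin(2 * math.pi * self.wag_hz * time.time())

class DogBrain:
    """Server-side Brain: the authoritative source of the dog's state."""
    def __init__(self, clients):
        self.clients = clients
        self.wagging = False

    def set_wagging(self, wagging: bool):
        if wagging != self.wagging:      # send only on change, not per frame
            self.wagging = wagging
            for client in self.clients:
                client.on_message({"wag": wagging})
```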

On Cuttlefish and Dolphins

The homunculus of Homo sapiens might evolve into a more plastic form: maybe not at the genetic or species level, but at least within the lifetimes of individual brains, assisted by the scaffolding of culture and virtual-world technology. This plasticity could take on strange proportions, even as our physical bodies remain roughly the same. As we spend more of our time socializing and interacting on the internet, partly in an effort to travel less and reduce greenhouse gas emissions, our embodiment will naturally take on forms appropriate to the virtual spaces we occupy. These spaces will not necessarily mimic the spaces of the real world, nor will our embodiments always look and feel like our own bodies. And with these new virtual embodiments will come new layers of body language. Jaron Lanier uses the example of cephalopods: species capable of animated texture-mapping and physical morphing for communication and camouflage, feats outside the range of physical human expression.

In reference to avatars, Lanier says, “The problem is that in order to morph, humans must design avatars in laborious detail in advance. Our software tools are not yet flexible enough to enable us, in virtual reality, to think ourselves into different forms. Why would we want to? Consider the existing benefits of our ability to create sounds with our mouths. We can make new noises and mimic existing ones, spontaneously and instantaneously. But when it comes to visual communication, we are hamstrung…We can learn to draw and paint, or use computer-graphics design software. But we cannot generate images at the speed with which we can imagine them” (Lanier 2006).


Once we have developed the various non-humanoid puppeteering interfaces that Lanier's vision calls for, we will begin to invent new visual languages for real-time communication. Lanier believes that in the future, humans will be truly "multihomuncular".

Researchers from Aberdeen University and the Polytechnic University of Catalonia found that dolphins use discrete units of body language as they swim together near the surface of the water. They observed an efficiency in these signals similar to that of frequently used words in human verbal language (Ferrer i Cancho and Lusseau 2009). As human natural language goes online, and as our body language gets processed, data-compressed, and alphabetized for efficient traversal of the internet, we may start to see more patterns of our embodied language that resemble those created by dolphins, and by many other social species besides. The background communicative buzz of the biosphere may start to make more sense as we whittle our own communicative energy down to its essential features and learn to analyze it digitally. With a universal body language alphabet, we might someday be able to animate our skin like cephalopods, or speak "dolphin" with our tails as we lope across the virtual waves.
