Do tech companies really need more vaginas, dark skin, and gray beards?

We need to get to the bottom of the issue about diversity in the software industry.

It’s not that software companies simply need to hire more people who possess vaginas, dark skin, or gray beards…to reach some kind of quota, or to make their About page look hip. It’s that software companies need to embrace diversity in ways of thinking, life experience, socio-economic backgrounds, ways of building things, and ways of setting priorities. This might result in some outwardly-visible diversity as a byproduct. But in my opinion, that’s not the point.

Software increasingly runs our lives – EVERYONE’S LIVES – including people who possess vaginas, dark skin, and gray beards. One should not assume that all those young overpaid white males who would sooner send you a Slack message than look you in the eye are going to know how to build the tools that are running more and more of our lives….

…in a country that is becoming more diverse, not less – a fact that the United States Bollocks in Chief is clearly not happy about.

Building Diversity Where it Matters

Sure, there tends to be more diversity in design, business, and marketing departments, but these aspects of a tech company generally get established after the DNA of the company has been forged. The DNA of a software company is typically established when wealthy white male venture capitalists invest in wealthy white male programmers who (sometimes) become even more wealthy, and who then use that wealth to start new companies. They hire their wealthy white male programmer friends (who can afford to work without salary in exchange for shares – thereby becoming more likely to acquire more wealth).

Follow the money.

A slightly more diverse company is then built around this core of wealthy white males. Then a slick, mobile-friendly web page is erected, featuring high-res photos of gleeful African Americans and Chinese women. Maybe an Indian. And (occasionally) the token graybeard.

Dynastic Privilege

It’s the same phenomenon that drives wealth inequality in our country. Unchecked capitalism is fueling an oligarchy that is inhibiting the American Dream for those who find themselves on the losing end of financial opportunity.

Did I just change the subject from tech company diversity to wealth inequality in the United States? No: it’s the same subject.

“I believe dynastic privilege is one of the major contributors to the lack of diversity in tech” – Adam Pisoni

So, instead of talking about skin color, gender, and age, we should be talking about the deeper underlying cultural and economic forces that make it so hard for tech companies to change their DNA.

Please reply with your comments. Agree or disagree. Either way, I’d love to hear your thoughts!


Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
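To make that loop concrete, here is a minimal sketch in Python – a toy illustration of the general idea, not a description of any real system – in which the hypothetical human_fitness function stands in for aggregated ratings from millions of human observers:

```python
import random

POPULATION_SIZE = 20
MUTATION_RATE = 0.1

def random_genome(length=32):
    # A genome is just a list of numbers encoding one candidate "simulation".
    return [random.random() for _ in range(length)]

def mutate(genome):
    # Occasionally nudge genes, the way evolutionary algorithms explore.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def human_fitness(genome):
    # Stand-in for human observers scoring a rendered simulation of this
    # genome. Here: a dummy score, purely for illustration.
    return sum(genome)

def evolve(generations=100):
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        # Humans act as the Darwinian selection force: keep the top half.
        population.sort(key=human_fitness, reverse=True)
        survivors = population[:POPULATION_SIZE // 2]
        # Refill the population with mutated offspring of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POPULATION_SIZE - len(survivors))]
    return max(population, key=human_fitness)
```

The only structural difference from a standard evolutionary algorithm is where the fitness number comes from: people, not a formula.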

Clarifications

Since there is so much discussion (and confusion) about AI these days, I want to make a few clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every possible good idea, insight, and judgement just by digesting the constant data spew from humanity. But without a learning process (such as back-propagation and the other feedback mechanisms used in training AI), there is no selection – and the AI cannot truly learn in an ecosystem of continual feedback.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors – substituting overall impressions for detail. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to a mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.

As the young but continual stream of raw data accumulates, the AI will start to build impressions and common themes – much as Deep Dream collects images, finds recurring themes, and starts riffing on them.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans and eventually on its own. It also maintains a memory of bad decisions, and failed experiments – which are all a part of growing up.

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and plays parts of those simulations constantly as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start out pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of complex mammals naturally expect this, and our AI brain children are, after all, derived from mammalian brains. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

No Rafi. The brain is not a computer.

Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.


The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”

Who the fuck said computers can simulate the entire universe?

That is a huge assumption. It’s also wrong.

We need to always look closely at the assumptions that people use to build theories. If it could be proven that computers can simulate the entire universe, then this theory would be slightly easier to swallow.

By the way, a computer cannot simulate the entire universe because it would have to simulate itself simulating itself simulating itself.
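Here is a toy sketch of that regress in Python – an illustration of the argument, not a proof:

```python
def simulate(universe):
    # The universe contains the computer running this very function,
    # so a "complete" simulation must simulate the simulator simulating
    # the universe...there is no base case to bottom out on.
    return simulate({"contents": universe, "simulator": simulate})

# simulate("everything")  # uncomment to watch Python give up: RecursionError
```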

The human brain is capable of computation, and that’s why humans are able to invent computers.

The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.

The Miracle of My Hippocampus – and other Situated Mental Organs

I’m not very good at organizing.

The pile of papers, files, receipts, and other stuff and shit accumulating on my desk at home has grown to huge proportions. So today I decided to put it all into several boxes and bring it to the co-working space – where I could spend the afternoon going through it and pulling the items apart. I’m in the middle of doing that now. Here’s a picture of my progress. I’m feeling fairly productive, actually.

Some items go into the trash bin; some go to recycling; most of them get separated into piles where they will be stashed away into a file cabinet after I get home. At the moment, I have a substantial number of mini-piles. These accumulate as I sift through the boxes and decide where to put the items.

Here’s the amazing thing: when I pull an item out of the box, say, a bill from Verizon, I am supposed to put that bill onto the Verizon pile, along with the other Verizon bills that I have pulled out. When this happens, my eye and mind automatically gravitate towards the area on the table where I have been putting the Verizon bills. I’m not entirely conscious of this gravitation to that area.

Gravity Fields in my Brain

What causes this gravitation? What is happening in my brain that causes me to look over to that area of the table? It seems that my brain is building a spatial map of categories for the various things I’m pulling out of the box. I am not aware of it, and this is amazing to me – I just instinctively look over to the area on the table with the pile of Verizon bills, and…et voilà – there it is.

Other things happen too. As this map takes shape in my mind (and on the table), priorities line up in my subconscious. New connections get made and old connections get revived. Rummaging through this box has a therapeutic effect.

The fact that my eye and mind know where to look on the table is really not such a miracle, actually. It’s just my brain doing its job. The brain has many maps – spatial, temporal, etc. – that help connect and organize domains of information. One part of the brain – the hippocampus – is associated with spatial memory.


User Interface Design, The Brain, Space, and Time

I could easily collect numerous examples of software user interfaces that do a poor job of tapping the innate power of our spatial brains. These problematic user interfaces invoke the classic bouts of confusion, frustration, undiscoverability, and steep learning curves that we bitch about when comparing software interfaces.

This is why I am a strong proponent of Body Language (see my article about body language in web site design) as a paradigm for user interaction design. Similar to the body language that we produce naturally when we are communicating face-to-face, user interfaces should be designed with the understanding that information is communicated in space and in time (situated in the world). There is great benefit for designers to have some understanding of this aspect of natural language.

Okay, back to my pile of papers: I am fascinated with my unconscious ability to locate these piles as I sift through my stuff. It reminds me of why I like to use the fingers of my hand to “store” a handful of information pieces. I can recall these items later once they have been stored in my fingers (the thumb is usually saved for the most important item).

Body Maps, Brain, and Memory


Last night I was walking with my friend Eddie (a fellow graduate of the MIT Media Lab, where the late Marvin Minsky taught). Eddie told me that he once heard Marvin telling people how he liked to remember the topics of an upcoming lecture: he would place the various topics onto his body parts.

…similar to the way the ancient Greeks learned to remember stuff.

During the lecture, Marvin would shift his focus to his left shoulder, his hand, his right index finger, etc., in order to recall various topics or concepts. Marvin was tapping the innate spatial organs in his brain to remember the key topics in his lecture.

My Extended BodyMap

My body. My home town. My bed. My shoes. My wife. My community. The piles in my home office. These things in my life all occupy a place in the world. And these places are mapped in my brain to events that have happened in the past – or that happen on a regular basis. My brain is the product of countless generations of Darwinian iteration over billions of years.

All of this happened in space and time – in ecologies, animal communities, among collaborative workspaces.

Even the things that have no implicit place and time (such as the many virtualized aspects of our lives on the internet)…even these things occupy a place and time in my mind.

Intelligence has a body. Information is situated.

Hail to Thee Oh Hippocampus. And all the venerated bodymaps. For you keep our flitting minds tethered to the world.

You offer guidance to bewildered designers – who seek the way – the way that has been forged over billions of years of intertwingled DNA formation…resulting in our spatially and temporally-situated brains.


We must not let the no-place, no-time, any-place, any-time quality of the internet deplete us of our natural spacetime mapping abilities. In the future, this might be seen as one of the greatest challenges of our current digital age.


Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher who is known for his work on the dangers of AI in the future. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe that there are many careless assumptions being made as far as what AI actually is, and what it will become.

Yea yea, there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “as soon as it works, no one calls it AI anymore.” Given how poorly-defined AI is – how easily its definition seems to morph – it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible”… “Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines“, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “we were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.

“Want?”

Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late. He dismisses this as anthropomorphizing AI.


And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI refer to amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about “how” this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of one’s head and placed on a table.


It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent” you would be making a mistake.
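Comparisons like Kurzweil’s boil down to arithmetic along these lines – every number below is a commonly cited but contested estimate, included purely for illustration:

```python
# Back-of-envelope "brain vs. laptop" arithmetic. All numbers are
# rough, much-debated estimates -- assumptions, not measurements.
synapses = 1e14           # ballpark synapse count in a human brain
events_per_second = 1e2   # ballpark synaptic events per synapse per second
brain_ops = synapses * events_per_second   # ~1e16 "operations" per second

laptop_ops = 1e13         # generous estimate for 2023 consumer hardware
print(f"brain/laptop ratio: {brain_ops / laptop_ops:,.0f}x")
```

Whether or not the numbers ever line up, the arithmetic counts raw switching events and nothing else – which is exactly the problem.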

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped to popularize a trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And…surprise! Your body is soaking in these same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.

I wonder just how wide-ranging that list of jobs really is.

Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.


Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

A quote often attributed to Charles Darwin goes: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

Is a bat smarter than a mouse? Bats are famously near-blind (dumber?), but their sense of echolocation is miraculous (smarter?).


Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single line is not just an oversimplification: by plotting AI on the same line as human intelligence, Bostrom is committing the very anthropomorphism he warns against.

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.

WE HAVE ONLY OURSELVES TO FEAR BECAUSE WE ARE INSEPARABLE FROM OUR AI

We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smart phone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-fetched, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded into our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly distributed, and highly integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe that the future of AI will not be characterized by a frightening army of robots with agendas, we could potentially face a real threat: if military robots that are ordered to kill and destroy – and that use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, things could get ugly.

But with the exception of weapon-based AI that is housed in autonomous mobile robots, the future of AI will be mostly custodial, highly distributed, and integrated with our own lives; our clothes, houses, cars, and communications. We will not be able to separate it from ourselves – increasingly over time. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty…and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like – which is analogous to how the body of an animal does not look like the cells that it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win, baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.

Why Having a Tiny Brain Can Make You a Good Programmer

This post is not just for software developers. It is intended for a wider readership; we all encounter complexity in life, and we all strive to achieve goals, to grow, to become more resilient, and to be more efficient in our work.


Here’s what I’m finding: when I am dealing with a complicated software problem – a problem that has a lot of moving parts and many dimensions – I can easily get overwhelmed with the complexity. Most programmers have this experience when a wicked problem arises, or when a nasty bug is found that requires delving into unknown territories or parts of the code that you’d rather just forget about.

Dealing with complexity is a fact of life in general. What’s a good rule of thumb?

Externalize

We can only hold so many variables in our minds at once. I have heard figures like “about 7”. But of course, this raises the question of what a “thing” is. Let’s just say that there are only so many threads of a conversation, only so many computer variables, only so many aspects of a system that can be held in the mind at once. It’s like juggling.

Most of us are not circus clowns.

Externalizing is a way of taking parts of a problem that you are working on and manifesting them in some physical place outside of your mind. This exercise can free the mind to explore a few variables at a time…and not drop all the balls.
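Here is a trivial sketch of what I mean, in Python – the Order example is entirely hypothetical. Instead of juggling one dense expression in your head, park each sub-result in a named variable, so your working memory only holds one ball at a time:

```python
from dataclasses import dataclass

@dataclass
class Item:
    in_stock: bool

@dataclass
class Order:
    paid: bool
    flagged: bool
    items: list
    address_verified: bool

def can_ship(order: Order) -> bool:
    # Each named intermediate externalizes one fact, so no single
    # line asks the reader to hold more than one idea at a time.
    is_paid = order.paid
    is_trusted = not order.flagged
    everything_in_stock = all(item.in_stock for item in order.items)
    address_ok = order.address_verified
    return is_paid and is_trusted and everything_in_stock and address_ok

print(can_ship(Order(paid=True, flagged=False,
                     items=[Item(True)], address_verified=True)))
```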

Dude, Your Brain is Too Big

I have met several programmers in my career who have an uncanny ability to hold many variables in their heads at once. These guys are amazing. And they deserve all the respect that is often given to them. But here’s the problem:

People who can hold many things in their minds at once can write code, think about code, and refactor code in a way that most of us mortals could never do. While these people should be admired, they should not set the standard for how programming should be done. Their uncanny genius does not equate with good engineering.

This subject is touched upon in an article by Levi Notik, who says:

“It’s not hard to see why the popular perception of a programmer is one of some freak genius, sitting by a computer, frantically typing while keeping a million things in their head and making magic happen”

A common narrative is that these freak geniuses are “ideal for the job of programming”. In some cases, this may be true. But software has a tendency to become complex in the same way that rain water has a tendency to flow into rivers and eventually into the ocean. People with a high tolerance for complexity or a savant-like ability to hold many things in their minds are not (I contend) the agents of good software design.

I propose that people who cannot hold more than a few variables in their minds at once have something very valuable to contribute to the profession. We (and I’m talking about those of us with normal brains…but who are very resourceful) have built a lifetime’s worth of tools (mental, procedural, and physical) that allow us to build complexity – without limit – that has lasting value, and which other professionals can use. It’s about building robust tools that can outlive our brains – which gradually replace memory with wisdom.

My Fun Fun Fun Job Interview

I remember being in a job interview many years ago. The guy interviewing me was a young cocky brogrammer who was determined to show me how amazingly clever and cocky he could be, and to determine how amazingly clever and cocky I was willing to be. He asked me how I would write a certain algorithm (it doesn’t matter what it was – your typical low-level routine).

Well, I was stumped. I had nothing. I knew I had written one of these algorithms before but I couldn’t for the life of me remember how I did it.

Why could I not remember how I had written the algorithm?

Because I did such a good job at writing it, testing it, and optimizing it, that I was able to wrap it up in a bow, tuck it away in a toolbox, and use it for eternity – and NEVER THINK ABOUT HOW IT WORKED ANY MORE.

Hello.

“Forgetting” is not only a trick we use to un-clutter our lives – it actually allows us to build complex, useful things.

Memory is way too precious to be cluttered with nuts and bolts.
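A deliberately tiny, made-up example of what I mean by wrapping an algorithm up in a bow:

```python
def clamp(value: float, low: float, high: float) -> float:
    """Constrain value to the range [low, high]."""
    return max(low, min(high, value))

# Write it, test it, tuck it away in the toolbox...
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(42, 0, 10) == 10
# ...and never think about how it works again.
```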

Consider a 25-year-old brogrammer who relies on his quick wit and multitaskery. He will not be the same brogrammer 25 years later. His nimble facility for details will gradually give way to wisdom – or at least one would hope.

I personally think it is tragic that programming is a profession dominated by young men with athletic synapses. (At least that is the case here in the San Francisco Bay area). The brains of these guys do not represent the brains of most of the people who use software.

Over time, the tools of software development will – out of necessity – rely less and less on athletic synapses and clever juggling, and more on plain good design.


IS “ARTIFICIAL LIFE GAME” AN OXYMORON?

(This is a re-posting from Self Animated Systems)


Artificial Life (Alife) began with a colorful collection of biologists, robot engineers, computer scientists, artists, and philosophers. It is a cross-disciplinary field, although many believe that biologists have gotten the upper hand on the agendas of Alife. This highly-nuanced debate is alluded to in this article.

Games

What better way to get a feel for the magical phenomenon of life than through simulation games! (You might argue that spending time in nature is the best way to get a feel for life; I would suggest that a combination of time with nature and time with well-crafted simulations is a great way to get deep intuition. And I would also recommend reading great books like The Ancestor’s Tale :)

Simulation games can help build intuition on subjects like adaptation, evolution, symbiosis, inheritance, swarming behavior, food chains….the list goes on.

On the more abstract end of the spectrum are simulation-like interactive experiences involving semi-autonomous visual stuff (or sound) that generates novelty. Kinetic art that you can touch and influence – and in which you can witness lifelike dynamics – can be more than just aesthetically and intellectually stimulating.

These interactive experiences can also build intuition and insight about the underlying forces of nature that come together to oppose the direction of entropy (that ever-present tendency for things in the universe to decay).

Screen Shot 2014-10-17 at 7.58.33 PM

On the less-abstract end of the spectrum, we have virtual pets and avatars (a subject I discussed in a keynote at VISIGRAPP).

“Hierarchy Hinders” – Lesson from Spore

Will Wright, the designer of Spore, is a celebrated simulation-style game designer who introduced many Alife concepts in the “Sim” series of games. Many of us worried that his epic Spore would encounter some challenges, considering that Maxis had been acquired by Electronic Arts. The Sims was quite successful, but Spore fell short of expectations. It turns out there is a huge difference between building a digital dollhouse game and building a game about evolving lifeforms.

Also, mega-game corporations have their share of social hierarchy, with well-paid executives at the top and sweat shop animators and code monkeys at the bottom. Hierarchy (of any kind) is generally not friendly to artificial life.

For blockbuster games, there are expectations of reliable, somewhat repeatable behavior, highly-crafted game levels, player challenges, scoring, etc. Managing expectations for artificial life-based games is problematic. It’s also hard to market a game which is essentially a bunch of games rolled into one, where each sub-game features a different “level of emergence” (see the graph below for reference). Spore presents several slices of emergent reality, with significant gaps in-between. Spore may have also suffered partly due to overhyped marketing.

Artificial Life is naturally and inherently unpredictable. It is close cousins with chaos theory, fractals, emergence, and uh…life itself.

Emergence

At the right is a graph I drew which shows how an Alife simulation (or any emergent system) creates novelty, creativity, adaptation, and emergent behavior. This emergence grows out of the base-level inputs into the system. At the bottom are atoms, molecules, and bio-chemistry. Simulated protein-folding for discovering new drugs might be an example of a simulation that explores the space of possibilities and essentially pushes up to a higher level (protein-folding creates the 3-dimensional structure that makes complex life possible).

The middle level might represent some evolutionary simulation whereby new populations emerge that find a novel way to survive within a fitness landscape. On the higher level, we might place artificial intelligence, where basic rules of language, logic, perception, and internal modeling of the world might produce intelligent behavior.

In all cases, there is some level of emergence that takes the simulation to a higher level. The more emergence, the more the simulation is able to exhibit behaviors on the higher level. What is the best level of reality to create an artificial life game? And how much emergence is needed for it to be truly considered “artificial life”?

Out Of Control

Can a mega-corporation like Electronic Arts give birth to a truly open-ended artificial life game? Alife is all about emergence. An Alife engineer or artist expects the unexpected. Surprise equals success. And the more unexpected, the better. Surprise, emergent novelty, and the unexpected – these are not easy things to manage…or to build a brand around – at least not in the traditional way.

Maybe the best way to make an artificial life game is to spread the primordial soup out into the world, and allow “crowdsourced evolution” of emergent lifeforms. OpenWorm comes to mind as a creative use of crowdsourcing.

What if we replaced traditional marketing with something that grows organically within the culture of users? What if, in addition to planting the seeds of evolvable creatures, we also planted the seeds of an emergent culture of users? This is not an unfamiliar kind of problem for many internet startups.

Are you a fan of artificial life-based games? God games? Simulations for emergence? What is your opinion of Spore, and the Sims games that preceded it?

This is a subject that I have personally been interested in for my entire career. I think there are still unanswered questions. And I also think that there is a new genre of artificial game that is just waiting to be invented…

…or evolved in the wild.

Onward and Upward.

-Jeffrey