Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher who is known for his work on the dangers of AI in the future. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe that there are many careless assumptions being made as far as what AI actually is, and what it will become.

Yea yea, there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “as soon as it works, no one calls it AI anymore.” Given how poorly defined AI is – how easily its definition seems to morph – it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible.” … “Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines”, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “we were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.


Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late. He discounts this as anthropomorphizing AI.

And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI refer to amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about “how” this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of one’s head and placed on a table.

It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent” you would be making a mistake.

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped to popularize a trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And… surprise! Your body is soaking in these same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s a lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.

Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.

Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

A quote often attributed to Charles Darwin (though the attribution is dubious) goes: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

Is a bat smarter than a mouse? Bats see poorly (dumber?), but their sense of echolocation is miraculous (smarter?).


Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single one-dimensional line is not just an oversimplification: by placing AI on the same line as human intelligence, Bostrom is committing the very anthropomorphism he warns against.

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.


We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smart phone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-fetched, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded-in to our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly-distributed, and highly-integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe that the future of AI will not be characterized by a frightening army of robots with agendas, we could potentially face a real threat: if military robots that are ordered to kill and destroy – and use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, then things could get ugly.

But with the exception of weapon-based AI that is housed in autonomous mobile robots, the future of AI will be mostly custodial, highly distributed, and integrated with our own lives; our clothes, houses, cars, and communications. We will not be able to separate it from ourselves – increasingly over time. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like, just as the body of an animal does not come to look like the cells it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But, these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.

Why Having a Tiny Brain Can Make You a Good Programmer

This post is not just for software developers. It is intended for a wider readership; we all encounter complexity in life, and we all strive to achieve goals, to grow, to become more resilient, and to be more efficient in our work.


Here’s what I’m finding: when I am dealing with a complicated software problem – a problem with a lot of moving parts and many dimensions – I can easily get overwhelmed by the complexity. Most programmers have this experience when a wicked problem arises, or when a nasty bug is found that requires delving into unknown territories or parts of the code that you’d rather just forget about.

Dealing with complexity is a fact of life in general. What’s a good rule of thumb?


We can only hold so many variables in our minds at once. I have heard figures like “about 7”. But of course, this raises the question of what a “thing” is. Let’s just say that there are only so many threads of a conversation, only so many computer variables, only so many aspects to a system that can be held in the mind at once. It’s like juggling.

Most of us are not circus jugglers.

Externalizing is a way of taking parts of a problem that you are working on and manifesting them in some physical place outside of your mind. This exercise can free the mind to explore a few variables at a time…and not drop all the balls.

Dude, Your Brain is Too Big

I have met several programmers in my career who have an uncanny ability to hold many variables in their heads at once. These guys are amazing. And they deserve all the respect that is often given to them. But here’s the problem:

kim_peekPeople who can hold many things in their minds at once can write code, think about code, and refactor code in a way that most of us mortals could never do. While these people should be admired, they should not set the standard for how programming should be done. Their uncanny genius does not equate with good engineering.

This subject is touched upon in an article by Levi Notik, who says:

“It’s not hard to see why the popular perception of a programmer is one of some freak genius, sitting by a computer, frantically typing while keeping a million things in their head and making magic happen”

A common narrative is that these freak geniuses are “ideal for the job of programming”. In some cases, this may be true. But software has a tendency to become complex in the same way that rain water has a tendency to flow into rivers and eventually into the ocean. People with a high tolerance for complexity or a savant-like ability to hold many things in their minds are not (I contend) the agents of good software design.

I propose that people who cannot hold more than a few variables in their minds at once have something very valuable to contribute to the profession. We (and I’m talking about those of us with normal brains…but who are very resourceful) have built a lifetime’s worth of tools (mental, procedural, and physical) that allow us to build complexity – without limit – that has lasting value, and which other professionals can use. It’s about building robust tools that can outlive our brains – tools that gradually replace memory with wisdom.

My Fun Fun Fun Job Interview

I remember being in a job interview many years ago. The guy interviewing me was a young cocky brogrammer who was determined to show me how amazingly clever and cocky he could be, and to determine how amazingly clever and cocky I was willing to be. He asked me how I would write a certain algorithm (doesn’t matter what it was – your typical low-level routine).

Well, I was stumped. I had nothing. I knew I had written one of these algorithms before but I couldn’t for the life of me remember how I did it.

Why could I not remember how I had written the algorithm?

Because I did such a good job at writing it, testing it, and optimizing it, that I was able to wrap it up in a bow, tuck it away in a toolbox, and use it for eternity – and NEVER THINK ABOUT HOW IT WORKED ANY MORE.


“Forgetting” is not only a trick we use to un-clutter our lives – it actually allows us to build complex, useful things.

Memory is way too precious to be cluttered with nuts and bolts.
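To make the point concrete, here is a minimal sketch (the function name and values are my own invention, not from any particular codebase): a routine that is written once, tested, and tucked away behind a clear interface, so that only its contract needs to be remembered:

```javascript
// clampValue: written once, tested once, put in the toolbox.
// From then on, only the contract matters: "keep value between min and max."
function clampValue(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

// Years later, we call it without thinking about how it works:
console.log(clampValue(15, 0, 10)); // 10
console.log(clampValue(-3, 0, 10)); // 0
```

The nuts and bolts live in the function body; the name carries the meaning.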

Consider a 25-year-old brogrammer who relies on his quick wit and multitaskery. He will not be the same brogrammer 25 years later. His nimble facility for details will gradually give way to wisdom – or at least one would hope.

I personally think it is tragic that programming is a profession dominated by young men with athletic synapses. (At least that is the case here in the San Francisco Bay area). The brains of these guys do not represent the brains of most of the people who use software.

Over time, the tools of software development will – out of necessity – rely less and less on athletic synapses and clever juggling, and more on plain good design.


The Unicorn Myth

I was recently called a “unicorn” – a term being bandied about to describe people who have multiple skills.



Here’s a quote from http://unicornspeakeasy.com/

“Dear outsiders,

Silicon Valley calls us unicorns because they doubt the [successful, rigorous] existence of such a multidisciplinary artist-engineer, or designer-programmer, when in fact, we have always been here, we are excellent at practicing “both disciplines” in spite of the jack-of-all-trades myth, and oh yeah – we disagree with you that it’s 2 separate disciplines. Unicorns are influential innovators in ways industries at large cannot fathom. In a corporate labor division where Release Engineer is a separate role from Software Requirements Engineer, and Icon Designer is a separate role from Information Architect, unicorn disbelief is perfectly understandable. To further complicate things, many self-titled unicorns are actually just programmers with a photoshop habit, or designers who dabble with Processing. Know the difference.”

Here’s David Cole on “The Myth of the Myth of the Unicorn Designer”. He says:

“Design is already not a single skill.”


There are valid arguments on either side of the debate as to whether designers should be able to code. What do you think?


Programming Languages Need Nouns and Verbs

Like many people my age, my first programming language was BASIC. Next I learned Pascal, which I found to be extremely expressive. Learning C was difficult, because it required me to be closer to the metal.

Graduating to C++ made a positive difference. Object-oriented programming affords ways to encapsulate the aspects of the code that are close to the metal, allowing one to ascend to higher levels of abstraction, and express the things that really matter (I realize many programmers would take issue with this – claiming that hardware matters a lot).

Having since learned Java, and then later…JavaScript, I have come to the opinion that the more like natural language I can make my code, the happier I am.

Opinions vary of course, and that’s a good thing. Many programmers don’t like verbosity. Opinions vary on strongly vs. weakly typed languages. The list goes on. It’s good to have different languages to accommodate differing work styles and technical needs.


If you believe that artificial languages (i.e., programming languages) need to be organic, evolvable, plastic, adaptable, and expressive (like natural language, only precise and resistant to ambiguity and misinterpretation), what’s the right balance?

Should Programs Look Like Math? 

Should software programs be reduced to elegant, terse, math-like expressions, stripped of all fat and carbohydrates? Many math-happy coders would say yes. Some programmers prefer declarative languages over procedural languages. As you can probably guess, I prefer procedural languages.

Is software math or poetry? Is software machine or language?

I think it could – and should – be all of these.

Sarah Mei has an opinion. She says that Programming is Not Math.

Programming with Nouns and Verbs

First: let me just make a request of all programmers out there. When you are trying to come up with a name for a function, PLEASE include a verb. Functions DO things. Like any other kind of language, your code will grow in a healthy way within the ecology of human communicators if you make it appropriately expressive.

Don’t believe me? Wait until you’ve lived through several companies and watched a codebase try to survive through three generations of developers. Poorly-communicating software, put into the wrong hands, can set off a pathological chain of events, ending in ultimate demise. Healthy communication keeps marriages and friendships from breaking down. The same is true of software.

Many have pontificated on the subject of software having nouns and verbs. For instance, Matt’s Blog promotes programming with nouns and verbs.

And according to John MacIntyre, “Take your requirements and circle all the nouns, those are your classes. Then underline all the adjectives, those are your properties. Then highlight all your verbs, those are your methods”.
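As a sketch of that recipe (the requirement sentence and all of the names below are invented for illustration, not taken from any of the posts cited above):

```javascript
// Requirement: "The librarian checks out an overdue book to a member."
// Nouns -> classes, adjectives -> properties, verbs -> methods.

class Book {
  constructor(title, isOverdue) {
    this.title = title;         // noun -> class, holding its properties
    this.isOverdue = isOverdue; // adjective -> property
  }
}

class Librarian {
  // Verb-named method: the reader immediately sees who does what to whom.
  checkOut(book, memberName) {
    return `${book.title} checked out to ${memberName}`;
  }
}

const librarian = new Librarian();
const book = new Book("Moby-Dick", true);
console.log(librarian.checkOut(book, "Ada")); // "Moby-Dick checked out to Ada"
```

Read the last line aloud: the librarian (noun) checks out (verb) the book (noun). The grammar of the code mirrors the grammar of the requirement.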

When I read code, I unconsciously look for the verbs and nouns to understand it.


When I can’t identify any nouns or verbs, when I can’t figure out “who” is doing “what” to “whom”, I become cranky, and prone to breaking things around me. Imagine having to read a novel where all the verbs look like nouns and all the nouns look like verbs. It would make you cranky, right?

The human brain is wired for nouns and verbs, and each is processed in a different cortical region.


There are two entities in the universe that use software:

(1) Computers, and (2) Humans.

Computers run software. Humans communicate with it.



Software Development = Growing an Organism

In college I had a wonderful art teacher named Jewett Campbell. He was a modernist painter, and he taught us to see our paintings as visions that emerged from the canvas. At every step of the way, we were told that our paintings must always have a kind of architectural solidity – a compositional wholeness.

We were told to start a painting with the most important macro-features, and gradually fill-in the details, catching the basic overall scheme within the first few strokes. And we were taught to appreciate the techniques of post-impressionist Paul Cézanne.


Cézanne would often stop a painting, leaving large areas of the canvas empty. Apparently, he could stop working on a painting at any time, and the composition would still hold up – solid and balanced. Leaving so much raw canvas was Cézanne’s way of inviting the viewer’s inner eye to fill in the rest.

Little did I know after studying Cézanne that…decades later…I would see software development in the same way.

Meditate on the Organic Whole

I don’t know about you, but as a developer, I really hate it when a software project gets pulled apart, with large components being rendered inoperable, while key parts are re-written. Would a surgeon kill a patient in order to do a kidney transplant?

Cézanne would never neglect one part of the canvas in order to obsess on another part. The big picture was always kept in his mind. Like a Cézanne painting, a body of software should be seen as a living being – a whole.

Now, let’s talk about chickens.

How is Cézanne Like a Chicken?

You see: a painting by Cézanne is like an organism whose genes – whose full potential – are present from the very start.

Consider the way an embryo grows into an adult. The embryo of…say…a chicken, is a living thing; a moving, functioning, eating, breathing animal. At every stage of its growth, all of its genes are present. By the time it hatches from the egg, pretty much all of its internal organs exist in some rudimentary form.


Usually, when I start a new project, or work with a team to decide on how to get a project started from scratch, I try to get a sense of what the main components will be.


Sometimes, a key component is sketched-in as a stub, like a splotch of gray-blue in a Cézanne painting that shows where an important cloud will be filled-in later. This cloud’s looming presence will set the mood and provide a compositional counter-weight to the mountain on the other side of the scene.

Oops – I forgot. We’re talking about chickens.

So, what if we considered the vital components of a software system to be the organs of an animal?


Are there other people who like to use biological metaphors when talking about software development? You bet:

Biological Metaphors in the Design of Complex Software Systems

Structural biology metaphors applied to the design of a distributed object system

Biological Inspiration for Computing

Here’s a piece on the Organic Metaphor, where the author says, “Industry pundits have taken to calling the process of software design “system architecture.” They borrow, of course, from the ancient and time honored tradition of construction.”

Later, the author says, “My experience is that the best programs aren’t designed. They evolve.”


As in building architecture, the organic paradigm is the most resilient, most evolvable, most sustainable way to ensure growth and longevity in software systems.


Organism = “Organ” + “ism”

“Modules” in biology are described by [Raff, R. A. 1996] as having…

  1. discrete genetic specification
  2. hierarchical organization
  3. interactions with other modules
  4. a particular physical location within a developing organism
  5. the ability to undergo transformations on both developmental and evolutionary time scales

Let’s look at these five requirements in terms of software development:

1. discrete genetic specification:

I’m tempted to compare this to Design. However, software rarely begins with a known genetic code. Perhaps “the evolution of a new species” is a better metaphor than “the development of an individual”.

In other words: the genetic blueprint of a software organism is pretty much guaranteed to evolve. And programmers are better off letting it evolve through prototyping, design iteration, unit testing, and user-testing – especially in the early stages of its life.

2. hierarchical organization:

Classes, Encapsulation, Method-calling, Inheritance. OOP OOP a DOOP.

Earth’s Biosphere invented Object-Oriented Programming before we came along and gave it a name.

My liver performs certain functions, and those functions are different than the functions that my kidneys perform. My organs communicate with each other using arrays of biochemicals. 
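As a toy sketch of that idea (the class names and the “biochemical” message format are my own illustration, not a real API): each organ encapsulates its own function, and organs interact only by exchanging messages:

```javascript
// Organs as objects: each encapsulates its own function (methods),
// and they interact only by exchanging "biochemical" messages.
class Organ {
  constructor(name) {
    this.name = name;
    this.inbox = []; // signals received from other organs
  }
  receive(signal) {
    this.inbox.push(signal);
  }
}

class Liver extends Organ {
  constructor() { super("liver"); }
  filterBlood() { return `${this.name}: blood filtered`; } // liver-specific function
}

class Kidney extends Organ {
  constructor() { super("kidney"); }
  signal(target) {
    // The kidney "secretes" a chemical message that another organ receives.
    target.receive({ from: this.name, chemical: "renin" });
  }
}

const liver = new Liver();
const kidney = new Kidney();
kidney.signal(liver);
console.log(liver.filterBlood());     // "liver: blood filtered"
console.log(liver.inbox[0].chemical); // "renin"
```

The liver never reaches inside the kidney; it only reads the messages in its inbox. That is encapsulation plus interfaces – biology’s version of OOP.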

3. interactions with other modules:

Data Flow, API’s, Interfaces

4. a particular physical location within a developing organism

This may only apply to visual code editing and file directory access. Software is made of bits, not atoms. We have to make it appear that it occupies space and time in order to bring it into existence. 

If the software gets deployed in a distributed physical environment (a spacecraft, a smart house, or among an internet of things), then physical location does indeed apply. In this case, we’re not just talking about software anymore.

5. the ability to undergo transformations on both developmental and evolutionary time scales

Amen Brother!

That last one is the punch line. Besides having a healthy birth, how do we design software to scale across the development cycle? How do we make it evolve organically as teams of programmers come and go, or even as companies and open-source communities merge and transform?


I think mother nature has many more clues.


By the way, you may prefer to think of a body of software as behaving more like an ecosystem than an individual organism. Although, I would suggest (being a fan of Gaia Theory) that ecosystems are really just super-organisms.


Software is the most versatile, plastic, and evolvable brainchild of Homo sapiens. It will probably not go away, and it may even outlast us. In fact, software could even eventually become a new living layer on Gaia. It might have to be put to use to monitor and manage the health of the planet (as well as continuing to help people find funny cat videos).

An era of autonomous, evolvable software is predicted by Hillis, Kelly, Koza, and many others. But fully sustainable software – as a new digital species – won’t come about for a long time. Software is in its adolescence. It is still going through its growing pains.

Bucky Fuller, Where are You? (On the Boxiness of Corporate Employment)


“Okay, but…if you had to choose between calling yourself a designer or calling yourself an engineer, which would you choose?”


Specialists and Generalists

I have often needed a specialist to do a specific task for me. This is normal. Specialists have a role in the economy and one could argue (along with Adam Smith) that specialization is the very basis of economy.

But too much specialization comes at a cost to innovative tech companies…and to creative individuals. Especially now, and increasingly – into the future…

The Harvard Business Review has an article on that very topic.

Nourishing My Inner Bucky

Interviewers have often asked me how I rank myself in terms of software engineering skill. As if there were a one-dimensional yardstick upon which all engineers can place themselves.

When one is evaluated with a one-dimensional yardstick, one usually ends up with a low grade.

For the same reason that there are multiple dimensions to intelligence, why not use more than one yardstick to evaluate an engineer?


The space that lies between all these one-dimensional yardsticks yields great connective knowledge. This is the domain of the COMPREHENSIVIST.

I lament the boxiness of the standard company recruiting process – even within companies that claim to employ people who think outside the box (like Google). Here’s a Google employee admitting to their deplorable interview process: “Pablo writes that his best skill is product design, but that his Google recruiters only showed interest in his ability to code.”

We hear of how generalists and right-brain thinkers are in such demand these days.

Bullshit. When it comes to finding employment in companies, we are still confronted with an array of boxes, and we are still expected to show how well we fit into one of them. Consider Linked-In.

My Linked-In profile has the following as my “industry”: Shipbuilding.


Why did I choose Shipbuilding? LinkedIn REQUIRES that I choose ONLY ONE industry from its list – it does not allow more than one. Shipbuilding was the furthest thing I could find from what I do. Instead of trying to use a single box to characterize myself, I prefer to go in the opposite direction.

Linked-In = Boxed-In

Now I want to say a few things about being an older person who has faced difficulty fitting into the workforce.

We Are All Multi-Dimensional – Increasingly as we Age

Experienced (i.e., older) programmer/innovator/designers should be contributing more of those intangibles – the very ones that the tech industry, Google included, is so bad at seeking out.

The tech industry has a fundamental problem: software plays an increasing role in people’s lives. The world’s population is aging. Young engineers who know the latest buzzwords of the last five years are hired quickly and eagerly. An aging population tries to keep up with fast-changing software interfaces. And more and more of this aging population consists of software engineers who have something the young programmers don’t have: wisdom, experience, perspective.

We are exactly what Silicon Valley needs.

No one in particular is to blame for ageism in high-tech startups. The problem does not stem from any particular favoritism of young people: it is due to the short-sightedness of the tech industry, and the emphasis on the quick-thinking, risk-taking attributes associated with youth.

People who are professionally multi-dimensional should play a key role in human-centered software design. The cultural divide, identified by C. P. Snow in 1959, is still with us. Boxes breed boxes. That’s why we’re in the box we’re in.


Hunter Gatherer Programmer

Which side of the corporate corpus callosum are you on?

Iain McGilchrist gave a nice lecture, animated by RSA Animate, about the “Divided Brain” – and how it created Western society. The mediating/inhibiting influence of the corpus callosum between the two brain hemispheres has become weakened, allowing the logical, linear left to dominate over the sensory, panoramic nature of the right.


The high tech culture of programmers has become, in my mind, the epitome of society’s left brain, and it is ghettoizing the right brain.

Let me toss out an idea: Programming skill shouldn’t be based on how good one is at manipulating numbers. Programming skill should primarily be about finding (and making) patterns, seeing connections, and using metaphors.

Computers are famously good with numbers, memory, and repetition, so why should programmers have to be good at these things too? Originally, when the computer age was young, programmers had to be sort of computer-like, just in order to build the damn things. I would contend that the culture really needs to change now that software runs so much of our lives. Programmers should be spending more time engaging in meta-math: creative pattern-finding, and building tools that match our human-like thinking; the thinking that comes from brains evolved to hunt, gather, play, explore, and build.

Our lives are increasingly dependent on software. I believe the wrong people are writing the software that runs our lives. The priests of high tech are extremely good at linear thinking (and often also at manipulating money and laws). Programmers are generally good at computation, and holding many levels of complex logic in their heads. These people have a high tolerance for software complexity.

No wonder software is so hard to use.

Hunter-gatherer skills deal with a different kind of complexity – the kind of complexity that characterizes the nonlinear world we live in. It requires all of our senses: sight, sound, touch, smell, balance…all merged unconsciously to form intuition. This intuition gathers environmental clues and builds context. These skills are ignored in most of our interactions with software. We are required to remember volumes of passwords and navigate geeky user interfaces with poor affordances. Many of these interfaces change every few months.

There is an under-appreciated range of people working on the periphery of the high-tech software industry. They know that there is a problem; they know they can help make it better. But they are on the wrong side of the corporate corpus callosum. They are in the ghetto. In order to make the situation better, they need to be empowered. They need to be on the inside. 


Being dyslexic, poor at math, slow at solving puzzles, distracted, and easily frustrated with nonintuitive tools should not keep people from participating in the software development process. In fact, I think these are the very people who are most needed. These are the people who will make software interfaces resonate with humanity at large.