The Miracle of My Hippocampus – and other Situated Mental Organs

I’m not very good at organizing.

The pile of papers, files, receipts, and other stuff and shit accumulating on my desk at home has grown to huge proportions. So today I decided to put it all into several boxes and bring it to the co-working space – where I could spend the afternoon going through it and pulling the items apart. I’m in the middle of doing that now. Here’s a picture of my progress. I’m feeling fairly productive, actually.

Some items go into the trash bin; some go to recycling; most of them get separated into piles where they will be stashed away into a file cabinet after I get home. At the moment, I have a substantial number of mini-piles. These accumulate as I sift through the boxes and decide where to put the items.

Here’s the amazing thing: when I pull an item out of the box, say, a bill from Verizon, I am supposed to put that bill onto the Verizon pile, along with the other Verizon bills that I have pulled out. When this happens, my eye and mind automatically gravitate towards the area on the table where I have been putting the Verizon bills. I’m not entirely conscious of this gravitation to that area.

Gravity Fields in my Brain

What causes this gravitation? What is happening in my brain that causes me to look over to that area of the table? It seems that my brain is building a spatial map of categories for the various things I’m pulling out of the box. I am not aware of it, and this is amazing to me – I just instinctively look over to the area on the table with the pile of Verizon bills, and…et voilà – there it is.

Other things happen too. As this map takes shape in my mind (and on the table), priorities line up in my subconscious. New connections get made and old ones get revived. Rummaging through this box has a therapeutic effect.

The fact that my eye and mind know where to look on the table is really not such a miracle, actually. It’s just my brain doing its job. The brain has many maps – spatial, temporal, etc. – that help connect and organize domains of information. One part of the brain – the hippocampus – is associated with spatial memory.


User Interface Design, The Brain, Space, and Time

I could easily collect numerous examples of software user interfaces that do a poor job of tapping the innate power of our spatial brains. These problematic user interfaces provoke the classic bouts of confusion, frustration, undiscoverability, and steep learning curves that we bitch about when comparing software interfaces.

This is why I am a strong proponent of Body Language (see my article about body language in web site design) as a paradigm for user interaction design. Similar to the body language that we produce naturally when we are communicating face-to-face, user interfaces should be designed with the understanding that information is communicated in space and in time (situated in the world). There is great benefit for designers to have some understanding of this aspect of natural language.

Okay, back to my pile of papers: I am fascinated with my unconscious ability to locate these piles as I sift through my stuff. It reminds me of why I like to use the fingers of my hand to “store” a handful of information pieces. I can recall these items later once they have been stored in my fingers (the thumb is usually saved for the most important item).

Body Maps, Brain, and Memory


Last night I was walking with my friend Eddie (a fellow graduate of the MIT Media Lab, where the late Marvin Minsky taught). Eddie told me that he once heard Marvin telling people how he liked to remember the topics of an upcoming lecture: he would place the various topics onto his body parts.

…similar to the way the ancient Greeks learned to remember stuff.

During the lecture, Marvin would shift his focus to his left shoulder, his hand, his right index finger, etc., in order to recall various topics or concepts. Marvin was tapping the innate spatial organs in his brain to remember the key topics in his lecture.

My Extended BodyMap

My body. My home town. My bed. My shoes. My wife. My community. The piles in my home office. These things in my life all occupy a place in the world. And these places are mapped in my brain to events that have happened in the past – or that happen on a regular basis. My brain is the product of countless generations of Darwinian iteration over billions of years.

All of this happened in space and time – in ecologies, animal communities, among collaborative workspaces.

Even the things that have no implicit place and time (such as the many virtualized aspects of our lives on the internet)…even these things occupy a place and time in my mind.

Intelligence has a body. Information is situated.

Hail to Thee Oh Hippocampus. And all the venerated bodymaps. For you keep our flitting minds tethered to the world.

You offer guidance to bewildered designers – who seek the way – the way that has been forged over billions of years of intertwingled DNA formation…resulting in our spatially and temporally-situated brains.



We must not let the no-place, no-time, any-place, any-time quality of the internet deplete us of our natural spacetime mapping abilities. In the future, this might be seen as one of the greatest challenges of our current digital age.


Failure and Recovery – an Important Concept in Design..and Life

I have observed that good design takes into consideration two important aspects of use:

  1. Failure Rate
  2. Recovery Rate

Well-designed products or software interfaces have low failure rates, and their failures are small. This is related to the concept of fault tolerance: a well-designed product or interface should not fail easily, and when it does fail, the failure should not be total.

As the Wikipedia article on fault tolerance puts it: “If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system in which even a small failure can cause total breakdown.”

A well-designed product or interface should also be easy to recover from failure.
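To make this concrete, here’s a tiny sketch in JavaScript (all the names are invented for illustration) of the difference between a naively designed function and one that degrades gracefully and recovers easily:

```javascript
// Hypothetical example: looking up a user's display name.
// The naive version fails completely when the record is missing.
function getDisplayNameNaive(directory, userId) {
  return directory[userId].name; // throws if the user is missing
}

// The fault-tolerant version degrades gracefully: a small failure
// (a missing record) causes a small loss of quality (a generic label),
// not a total breakdown.
function getDisplayName(directory, userId) {
  const record = directory[userId];
  if (record && record.name) return record.name;
  return "Guest"; // partial failure, cheap recovery
}

const directory = { u1: { name: "Ada" } };
console.log(getDisplayName(directory, "u1")); // "Ada"
console.log(getDisplayName(directory, "u2")); // "Guest"
```

The point isn’t the fallback string; it’s that the failure is proportional and the path back to normal operation is obvious.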

I recently bought a set of headphones. These were good headphones in most respects…until they broke at the complicated juncture where the ear pieces rotate. Once these headphones broke, there was really nothing I could do to fix them. But I decided to try – using a special putty that dries and holds things into place.

It took a long time to figure out how to do this. When I finally repaired the broken part, I realized that the wires had been severed inside. There was no sound coming through. I had no choice but to put them into the garbage bin where they will contribute to the growing trash heap of humanity. Bad design is not just bad for consumers: it’s bad for the planet.

While most people (including myself) would claim that Audio Technica headphones are generally well-designed, we are usually not taking into account what happens when they break.

Sometimes the breakdown is cognitive in nature. There’s a Keurig coffee machine at work. It uses visual symbols to tell the user what to do.

As I have pointed out in another article, visual languages are only useful to the extent that the user knows the language. And designers who use visual language need to understand that natural language includes how something behaves and how it shows its internal states, not just what kinds of icons it displays on its surface.

The Keurig coffee machine is a nice specimen in many respects. But I discovered that if I perform the necessary actions in the wrong order, it fails. Namely: if I insert the little coffee pod and press down the top part before the water has finished heating up, it doesn’t allow me to brew the coffee.

So…after the water finished heating up, I saw the buttons light up. “Cool” – I said.

But nothing happened when I pressed a button to dispense the coffee. “WTF” – I said. Then I decided to open up the lid and close it again. That did the trick. The lights started blinking. But I was not satisfied with the solution. The discoverability of this bit of behavioral body language ranks low on my list.

Hint: “Blinking Lights” Means “You Can Press a Button”

I have to say, though: I have experienced worse examples of undiscoverability with appliances – especially appliances that are so simple, sleek, and elegant that they have no body language to speak of. This infuriates me to no end. It is not unlike the people I meet on occasion who exhibit almost no body language. It makes me squirm. I want to run away.

Now, thanks to YouTube and the interwebs in general, there are plenty of people who can help us get around these problems…such as one guy who posted a solution to a related blinking-light problem.


I realize there are not many people who are bringing up this seemingly small problem. But I bring it up because it is just one of many examples of poor affordance in industrial design that are so small as to be imperceptible to the average user. However, the aggregate of these small imperceptible stumbles that occur throughout our lives constitutes a lowering of the quality of life. And they dull our sense of what good design should be about.

Tiny Rapid Failures and Tiny Rapid Recoveries

Now consider what happens when you ride a bicycle. When riding a bike, you may occasionally lose balance. But that balance can easily be recovered by shifting your weight, turning the wheel, or several other actions – many of which are unconscious to you.

Think of riding a bike as a high-rate of tiny failures with a high-rate of tiny recoveries.

Taken to the extreme: a bird standing on one leg has neuromuscular controls that correct the balance of its center of gravity at such a high rate and in such tiny amounts that we don’t even notice it (and neither does the bird).
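The tiny-failures/tiny-recoveries idea can be sketched as a toy control loop – a deliberately simplified model, not how real balance works. Each tick the system drifts a little (a tiny failure), and a correction removes a fraction of the current error (a tiny recovery):

```javascript
// Toy balance model. "drift" is how far off balance the system slips
// per tick; "correctionGain" is what fraction of the error each tiny
// recovery removes. Returns the worst deviation seen over the run.
function simulateBalance(ticks, drift, correctionGain) {
  let error = 0;    // deviation from upright
  let maxError = 0;
  for (let i = 0; i < ticks; i++) {
    error += drift;                   // a tiny failure
    error -= correctionGain * error;  // a tiny recovery
    maxError = Math.max(maxError, Math.abs(error));
  }
  return maxError;
}

// Many tiny corrections hold the error near a small bound (~0.01 here):
const standing = simulateBalance(1000, 0.01, 0.5);
// With no recovery at all, the same tiny failures simply accumulate:
const falling = simulateBalance(1000, 0.01, 0.0); // grows to ~10
```

The error never reaches zero – just like the bird, the system is failing constantly – but each failure is so small and each recovery so fast that the result looks like stillness.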


Natural Affordance: Perceived Signifiers

User interfaces (in computer software as well as in appliances) should use natural affordances whenever possible so that users can make a good guess as to whether something is about to fail, whether it is failing, how much it is failing, and how to recover.

The best design allows for rapid, and often unconscious correction while using the product. Good design is invisible!

Donald Norman brought some of these ideas to the fore when he wrote The Design of Everyday Things. We still have a lot to learn from his teachings.

Design is a way of life. If you design your life with resilience in mind – with the ability to recognize failures before they happen, and with the ability to recover from those failures, you will live a better life. The same goes for designing the technology that increasingly fills our lives.

The Body Language of a Happy Lizard

I love watching my dog greet us when we come home after being out of the house for several hours. His body language displays a mix of running in circles, panting, bobbing his head up and down, wagging his tail vigorously, wagging his body vigorously, yapping, yipping, barking, doing the down-dog, shaking off, and finally, jumping into our laps. All of this activity is followed by a lot of licking.

There was a time not long ago when people routinely asked, “do animals have intelligence?” and “do animals have emotions?” People who are still asking whether animals have intelligence and emotions seriously need to go to a doctor to get their mirror neurons polished. We realize now that these are useless, pointless questions.

Deconstructing Intelligence

The change of heart about animal intelligence is not just because of results from animal research: it’s also due to a softening of the definition of intelligence. People now discuss artificial intelligence at the dinner table. We often hear ourselves saying things like “your computer wants you to change the filename”, or “self-driving cars in the future will have to be very intelligent”.

The concept of intelligence is working its way into so many non-human realms, both technological and animal. We talk about the “intelligence of nature”, the “wisdom of crowds”, and other attributions of intelligence that reside in places other than individual human skulls.


Can a Lizard Actually Be “Happy”? 

I want to say a few things about emotions.

The problem with asking questions like “can a lizard be happy?” lies in the dependence on words like “happy”, “sad”, and “jealous”. It is futile to try to fit a complex dynamic of brain chemistry, neural firing, and semiosis between interacting animals into a box with a label on it. Researchers doing work on animal and human emotion should avoid using words for emotions. Just the idea of trying to capture something as visceral, somatic, and, um…wordless as an emotion in a single word is counterproductive. Can you even claim that you are feeling one emotion at a time? No: emotions ebb and flow, they overlap, they are fluid – ephemeral. Like memory itself, as soon as you start to study your own emotions, they change.

And besides; words for emotions differ among languages. While English may be the official language of science, it does not mean that its words for emotions are more accurate.

Alas…since I’m using words to write this article (!) I have to eat my words. I guess I would have to give the following answer to the question, “can a lizard be happy?”

Yes. Kind of.

The thing is: it’s not as easy to detect a happy lizard as it is to detect a happy dog. Let’s compare these animals:

HUMAN        DOG         COW           BIRD         LIZARD         WORM

This list is roughly ordered by how similar the animal is to humans in terms of intelligent body language. Dogs share a great deal of the body language that we associate with emotions. Dogs are especially good at expressing shame. (Do cats feel less shame than dogs? They don’t appear to show it as much as dogs, but we shouldn’t immediately jump to conclusions because we can’t see it in terms of familiar body language signals).

On the surface, a cow may appear placid and relaxed…in that characteristic bovine way. But an experienced veterinarian or rancher can easily detect a stressed-out cow. As we move farther away from humans in this list of animals, the body language cues become harder and harder to detect. In the simpler animals, do we even know if these emotions exist at all? Again…that may be the wrong question to ask.


It would be wrong of me to assume that there are no emotional signals being generated by an insect, just because I can’t see them.

ants communicating via touch

Ant body language is just not something I am familiar with. The more foreign the animal, the more difficult it is for us humans to attribute “intelligence” or “emotion” to it.

Zoosemiotics may help to disambiguate these problematic definitions, and place the gaze where it may be more productive.

I would conclude that we need to continue to remove those anthropocentric biases that have gotten in the way of science throughout our history.

When we have adequately removed those biases regarding intelligence and emotion, we may more easily see the rich signaling that goes on between all animals on this planet. We will begin to see more clearly a kind of super-intelligence that permeates the biosphere. Our paltry words will step aside to reveal a bigger vista.

I have never taken LSD or ayahuasca, but I’ve heard from those that have that they have seen this super-intelligence. Perhaps these chemicals are one way of removing that bias, and taking a peek at that which binds us with all of nature.

But short of using chemicals….I guess some good unbiased science, an open mind, and a lot of compassion for our non-human friends can help us see farther – to see beyond our own body language.

Programming Languages Need Nouns and Verbs

I created the following Grammar Lesson many years ago:

grammar lesson

Like many people my age, I learned BASIC as my first programming language. Next I learned Pascal, which I found to be extremely expressive. Learning C was difficult, because it required me to be closer to the metal.

Graduating to C++ made a positive difference. Object-oriented programming affords ways to encapsulate the aspects of the code that are close to the metal, allowing one to ascend to higher levels of abstraction, and express the things that really matter (I realize many programmers would take issue with this – claiming that hardware matters a lot).

Having since learned Java, and then later…JavaScript, I have come to the opinion that the more like natural language I can make my code, the happier I am.

Opinions vary of course, and that’s a good thing. Many programmers don’t like verbosity. Opinions vary on strong vs. weak typed languages. The list goes on. It’s good to have different languages to accommodate differing work styles and technical needs.

But…

if you believe that artificial languages (i.e., programming languages) need to be organic, evolvable, plastic, adaptable, and expressive (like natural language, only precise and resistant to ambiguity and interpretation), what’s the right balance?

Should Programs Look Like Math? 

Should software programs be reduced to elegant, terse, math-like expressions, stripped of all fat and carbohydrates? Many math-happy coders would say yes. Some programmers prefer declarative languages over procedural languages. As you can probably guess, I prefer procedural languages.

Is software math or poetry? Is software machine or language?

I think it could – and should – be all of these.


Sarah Mei has an opinion. She says that Programming is Not Math.

Programming with Nouns and Verbs

First: let me just make a request of all programmers out there. When you are trying to come up with a name for a function, PLEASE include a verb. Functions DO things. Like any other kind of language, your code will grow in a healthy way within the ecology of human communicators if you make it appropriately expressive.

Don’t believe me? Wait until you’ve lived through several companies and watched a codebase try to survive through three generations of developers. Poorly-communicating software, put into the wrong hands, can set off a pathological chain of events, ending in ultimate demise. Healthy communication keeps marriages and friendships from breaking down. The same is true of software.

Many have pontificated on the subject of software having nouns and verbs. For instance, Matt’s Blog promotes programming with nouns and verbs.

And according to John MacIntyre, “Take your requirements and circle all the nouns, those are your classes. Then underline all the adjectives, those are your properties. Then highlight all your verbs, those are your methods”.
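Applied to a made-up requirement – “a customer places an order with a total price” – MacIntyre’s recipe might look something like this in JavaScript (the names here are invented for illustration, not from any real codebase):

```javascript
// Requirement: "A customer PLACES an order with a TOTAL price."
// Nouns become classes, attributes become properties,
// and verbs become methods.
class Order {
  constructor(totalPrice) {
    this.totalPrice = totalPrice; // attribute -> property
  }
}

class Customer {
  constructor(name) {
    this.name = name;
    this.orders = [];
  }
  // Verb-named method: it DOES something, and the name says what.
  placeOrder(totalPrice) {
    const order = new Order(totalPrice);
    this.orders.push(order);
    return order;
  }
}

const alice = new Customer("Alice");
alice.placeOrder(42);
console.log(alice.orders.length); // 1
```

Notice how `alice.placeOrder(42)` reads almost like the requirement sentence itself: the “who” is the object, the “does what” is the verb in the method name.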

When I read code, I unconsciously look for the verbs and nouns to understand it.


When I can’t identify any nouns or verbs, when I can’t figure out “who” is doing “what” to “whom”, I become cranky, and prone to breaking things around me. Imagine having to read a novel where all the verbs look like nouns and all the nouns look like verbs. It would make you cranky, right?

The human brain is wired for nouns and verbs, and each is processed in a different cortical region.


There are two entities in the universe that use software:

(1) Computers, and (2) Humans.

Computers run software. Humans communicate with it.


-Jeffrey

Disappearing UI Elements – I Mean, WTF?

I recently noticed something while using software products by Google, Apple, and other trend-setters in user interface design. I’m talking about…

User interface elements that disappear

I don’t know about you, but I am just tickled pink when I walk into the kitchen to look for the can opener, and can’t find it. (I could have sworn it was in THIS drawer. Or….maybe it was in THAT drawer?)

Sometimes I come BACK to the previous drawer, and see it right under my nose…it is as if an invisible prankster were playing tricks on me.

Oh how it makes me giggle like a baby who is playing peekaboo with mommy.


Not really. I lied. It makes me irrational. It makes me crazy. It makes me want to dismember kittens.

Introducing: Apple’s New Disappearing Scrollbar!


According to DeSantis Briendel, “A clean and uncluttered visual experience is a laudable goal, but not at the expense of an intuitive and useful interface tool. Making a user hunt around for a hidden navigation element doesn’t seem very user friendly to us. We hope developers will pull back from the disappearing scrollbar brink, and save this humble but useful tool.”

It gets worse. If you have ever had to endure the process of customizing a YouTube channel, you may have discovered that the button to edit your channel is invisible until you move your mouse cursor over it.



I observed much gnashing of teeth on the internet about this issue.

One frustrated user on the Google Product Forums was looking for the gear icon in hopes of finding a way to edit his channel. To his remark that there is no gear icon, another user joked…

“That is because ‘the team’ is changing the layout and operation of the site faster than most people change underwear.”

…which is a problem that I brought up in a previous blog about the arbitrariness of modern user interfaces.

There are those who love bringing up Minority Report – and claim that user interfaces will eventually go away, allowing us to speak naturally and communicate with gestures in thin air.


In Disappearing UI: You Are the Interface, the author describes (in utopian prose) a rosy future where interfaces will disappear, liberating us to interact with software naturally.

But what is “natural”?

What is natural for the human eye, brain, and hand is to see, feel, and hear the things that we want to interact with, to physically interact with them, and then to see, feel, and hear the results of that interaction. Because of physics and human nature, I don’t see this ever changing.

OK, sure. John Maeda’s first rule for simplicity is to REDUCE.

But, by “reduce”, I don’t think he meant…

play hide and seek with the edit button or make the scroll bar disappear while no one is looking.

Perhaps one reason Apple is playing the disappearing scroll bar trick is that they want us to start doing their two-finger swipe. That would be ok if everyone had already given up their mice. Not so fast, Apple!

Perhaps Apple and Google are slowly and gradually preparing us for a world where interfaces will dissolve away completely, allowing us to be one with their software.

Seems to me that a few billion years of evolution should have some sway over how we prefer to interact with objects and information in the world.

I like to see and feel the knife peeling the skin off a potato. It’s not just aesthetics: it’s information.

I like seeing a person’s eyes when I’m talking to them. I like seeing the doorknobs in the room. I like knowing where the light switch is.


Apple, please stop playing hide-and-seek with my scroll bar.

-Jeffrey

The Tail Wagging the Brain

 


(This is chapter 8 of Virtual Body Language. Extra images have been added for this post. An earlier variation of this blog post was published here.)

The brain can rewire its neurons, growing new connections, well into old age. Marked recovery is often possible even in stroke victims, porn addicts, and obsessive-compulsive sufferers, several years after the onset of their conditions. Neural plasticity is now proven to exist in adults, not just in babies and young children going through the critical learning stages of life. This subject has been popularized with books like The Brain That Changes Itself (Doidge 2007).

Like language grammar, music theory, and dance, visual language can be developed throughout life. As visual stimuli are taken in repeatedly, our brains reinforce neural networks for recognizing and interpreting them.

Some aspects of visual language are easily learned, and some may even be instinctual. Biophilia is the human aesthetic appreciation of biological form and all living systems (Kellert and Wilson 1993). The flip side of aesthetic biophilia is a readiness to learn disgust (of feces and certain smells) or fear (of large insects or snakes, for instance). Easy recognition of the shapes of branching trees, puffy clouds, flowing water, plump fruit, and subtle differences in the shades of green and red, have laid a foundation for human aesthetic response in all things, natural or manufactured. Our biophilia is the foundation of much of our visual design, even in areas as urban as abstract painting, web page layout, and office furniture design.

The Face

We all have visual language skills that make us especially sensitive to eyebrow motion, eye contact, head orientation, and mouth shape. Our sensitivity to facial expression may even influence other, more abstract forms of visual language, making us responsive to some visual signals more than others—because those face-specific sensitivities gave us an evolutionary advantage as a species so dependent on social signaling. This may have been reinforced by sexual selection.

Even visual features as small as the pupil of an eye contribute to the emotional reading of a face—usually unconsciously. Perhaps the evolved sensitivity to small black dots as information-givers contributed to our subsequent invention of small-yet-critical elements in typographical alphabets.


We see faces in clouds, trees, and spaghetti. Donald Norman wrote a book called, Turn Signals are the Facial Expressions of Automobiles (1992). Recent studies using fMRI scans show that car aficionados use the same brain modules to recognize cars that people in general use to recognize faces (Gauthier et al. 2003).

Something about the face appears to be baked into the brain.


Dog Smiling

We smile with our faces. Dogs smile with their tails. The next time you see a Doberman Pinscher with his ears clipped and his tail removed (docked), imagine yourself with your smile muscles frozen dead and your eyebrows shaved off.

Like humans perceiving human smiles, it is likely that the canine brain has neural wiring highly tuned to recognize and process a certain kind of visual energy: an oscillating motion of a linear shape occurring near the backside of another dog. The photoreceptors go to work doing edge detection and motion detection so they can send efficient signals to the brain for rapid space-time pattern recognition.


Tail wagging is apparently a learned behavior: puppies don’t wag until they are one or two months old. Recent findings in dog tail wagging show that a wagger will favor the right side as a result of feeling fundamentally positive about something, and will favor the left side when there are negative overtones in feeling (Quaranta et al. 2007). Humans did not commonly know this until recently. Could it be that left-right wagging asymmetry has always been a subtle part of canine-to-canine body language?

I often watch intently as my terrier mix, Higgs, encounters a new dog in the park whom he has never met. The two dogs approach each other—often moving very slowly and cautiously. If the other dog has its tail down between its legs and its ears and head held down, and is frequently glancing to the side to avert eye contact, this generally means it is afraid, shy, or intimidated. If its tail is sticking straight up (and not wagging) and if its ears are perked up and the hair on the back is standing on end, it could mean trouble. But if the new stranger is wagging its tail, this is a pretty good sign that things are going to be just fine. Then a new phase of dog body language takes over. If Higgs is in a playful mood, he’ll start a series of quick motions, squatting his chest down to assume the “play bow”, jumping and stopping suddenly, and watching the other dog closely (probably to keep an eye on the tail, among other things). If Higgs is successful, the other dog will accept his invitation, and they will start running around the park, chasing each other, and having a grand old time. It is such a joy to watch dogs at play.


So here is a question: assuming dogs have an instinctive—or easily-learned—ability to process the body language of other dogs, like wagging tails, can the same occur in humans for processing canine body language? Did I have to learn to read canine body language from scratch, or was I born with this ability? I am, after all, a member of a species that has been co-evolving with canines for more than ten thousand years, sometimes in deeply symbiotic relationships. So perhaps I, Homo sapiens, already have a biophilic predisposition to canine body language.

Recent studies in using dogs as therapy for helping children with autism have shown remarkable results. These children become more socially and emotionally connected. According to autistic author Temple Grandin, this is because animals, like people with autism, do not experience ambivalent emotion; their emotions are pure and direct (Grandin, Johnson 2005). Perhaps canines helped to modulate, buffer, and filter human emotions throughout our symbiotic evolution. They may have helped to offset a tendency towards neurosis and cognitive chaos.

Vestigial Response

Having a dog in the family provides a continual reminder of my affiliation with canines, not just as companions, but as Earthly relatives. On a few rare occasions while sitting quietly at home working on something, I remember hearing an unfamiliar noise somewhere in the house. But before I even knew consciously that I was hearing anything, I felt a vague tug at my ears; it was not an entirely comfortable feeling. This may have happened at other times, but I probably didn’t notice.

I later learned that this is a leftover vestigial response that we inherited from our ancestors.

The ability for some people to wiggle their ears is due to muscles that are basically useless in humans, but were once used by our ancestors to aim their ears toward a sound. Remembering the feeling of that tug on my ears gives me a feeling of connection to my ancestors’ physical experiences.

In a moment we will consider how these primal vestiges might be coming back into modern currency. But I’m still in a meandering, storytelling, pipe-smoking kind of mood, so hang with me just a bit longer and then we’ll get back to the subject of avatars.

This vestigial response is called ear perking. It is shared by many of our living mammalian relatives, including cats. I remember once hanging out and playing mind-games with a cat. I was sitting on a couch, and the cat was sitting in the middle of the floor, looking away, pretending to ignore me (but of course—it’s a cat).


I was trying to get the cat to look at me or to acknowledge me in some way. I called its name, I hissed, clicked my tongue, and clapped my hands. Nothing. I scratched the upholstery on the couch. Nothing. Then I tore a small piece of paper from my notebook, crumpled it into a tiny ball, and discreetly tossed it onto the floor, behind the cat, outside of its field of view. The tiny sound of the crumpled ball of paper falling to the floor caused one of the cat’s ears to swivel around and aim towards the sound. I jumped up and yelled, “Hah—got you!” The cat’s higher brain quickly tugged at its ear, pulling it back into place, and the cat continued to serenely look off into the distance, as if nothing had ever happened.

Cat body language is harder to read than dog body language. Perhaps that’s by design (I mean…evolution). Dogs don’t seem to have the same talents of reserve and constraint. Dog expressions just flop out in front of you. And their vocabulary is quite rich. Most world languages have several dog-related words and phrases. We easily learn to recognize the body language of dogs, and even of less familiar social animals. Certain forms of body language are processed by the brain more easily than others. A baby learns to respond to its mother’s smile very soon after birth. Learning to read a dog’s tail motions may not be so instinctive, but the plastic brain of Homo sapiens is ready to adapt. Learning to read the signs that your pet turtle is hungry would be much harder (I assume). At the extreme: learning to read the signals from an enemy character in a complicated computer game may be a skill only for a select few elite players.

This I’m sure of: if we had tails, we’d be wagging them for our dogs when they are being good (and for that matter, we’d also be wagging them for each other :D ). It would not be easy to add a tail to the base of a human spine and wire up the nerves and muscles. But if it could be done, our brains would easily and happily adapt, employing some appropriate system of neurons to the purpose of wagging the tail—perhaps re-adapting a system of neurons normally dedicated to doing The Twist. While it may not be easy to adapt our bodies to acquire such organs of expression, our brains can easily adapt. And that’s where avatars come in.

Furries

The Furry Species has proliferated in Second Life. Furry Fandom is a subculture featuring fictional anthropomorphic animal characters with human personalities and human-like attributes. Furry fandom already had a virtual head start—people were role-playing with their online “fursonas” before Second Life existed (witness FurryMUCK, a user-extendable online text-based role-playing game started in 1990). Furries have animal heads, tails, and other such features.

While many Second Life avatars have true animal forms (such as the “ferals” and the “tinies”), many are anthropomorphic: walking upright with human postures and gaits. This anthropomorphism has many creative expressions, ranging from quaint and cute to mythical, scary, and kinky.

Furry anthropomorphism in Second Life is appropriate in one sense: there is no way to directly change the Second Life avatar into a non-human form. The underlying avatar skeleton software is based solely on the upright-walking human form. I know this fact in an intimate way, because I spent several months digging into the code in an attempt to reconstitute the avatar skeleton to allow open-ended morphologies (quadrupeds, etc.) This proved to be difficult. And no surprise: the humanoid avatar morphology code, and all its associated animations—procedural and otherwise—had been in place for several years. It serves a similar purpose to a group of genes in our DNA called Hox genes.

They are baked deep into our genetic structure, and are critical to the formation of a body plan. Hox genes specify the overall structure of the body during embryonic development, when the segmentation and placement of limbs and other body parts are first established. After struggling to override the effects of the Second Life “avatar Hox genes”, I concluded that I could not do this on my own. It would have taken a strategic, surgical process performed by many core Linden Lab engineers. Evolution in virtual worlds happens fast. It’s hard to go back to an earlier stage and start over.

Despite the anthropomorphism of the Second Life avatar (or perhaps because of this constraint), the Linden scripting language (LSL) and other customizing abilities have provided a means for some remarkably creative workarounds, including packing the avatar geometry into a small compact form called a “meatball”, and then surrounding it with a custom 3D object, such as a robot or a dragon or some abstract form, complete with responsive animations and particle system effects.

Perhaps furry residents prefer having this constraint of anthropomorphism; it fits with their nature as hybrids. Some furry residents have customized tail animations and use them to express mood and intent. I wouldn’t be surprised if those who have been furries for many years have dreams in which they are living and expressing in their Furry bodies—communicating with tails, ears, and all. Most would agree that customizing a Furry with a wagging-tail animation is a lot easier than undergoing surgery to attach a physical tail.

But as far as the brain is concerned, it may not make a difference.

Where Does my Virtual Body Live?

Virtual reality is not manifest in computer chips, computer screens, keyboards, mice, joysticks, or head-mounted displays. Nor does it live in running software. Virtual reality manifests in the brain, and in the collective brains of societies. The blurring of real and virtual experiences is a theme that Jeremy Bailenson and his team at Stanford’s Virtual Human Interaction Lab have been researching.

Virtual environments are now being used for research in social sciences, as well as for the treatment of many brain disorders. An amputee who has suffered from phantom pain for years can be cured of the pain through a disciplined and focused regimen of rewiring his or her body image to effectively “amputate” the phantom limb. Immersive virtual reality has been used recently to treat this problem (Murray et al. 2007). Previous techniques using physical mirrors have been replaced with sophisticated simulations that allow more controlled settings and adjustments. When the brain’s body image gets tweaked away from reality too far, psychological problems ensue, such as anorexia. Having so many super-thin sex-symbol avatars may not be helping the situation. On the other hand, virtual reality is being used in research to treat this and other body image-related disorders.

A creative kind of short-term body image shifting is nothing new to animators, actors, and puppeteers, who routinely tweak their own body images in order to express themselves like ostriches or hummingbirds or dogs. When I was working for Brad deGraf, one of the early pioneers in real-time character animation, I would frequent the offices of Protozoa, a game company he founded in the mid-90s.

I was hired to develop a tool for modeling 3D trees which were to be used in a game called “Squeezils”, featuring animals that scurry among the branches. I remember the first time I went to Protozoa. I was waiting in the lobby to meet Brad, and I noticed a video monitor at the other end of the room. An animation was playing that had a couple of 3D characters milling about. One of them was a crab-like cartoon character with huge claws, and the other was a Big Bird-like character with a very long neck. After gazing at these characters for a while, it occurred to me that both of these characters were moving in coordination—as if they were both being puppeteered by the same person. My curiosity got the best of me and I started wandering around the offices, looking for the puppet master. I peeked around the corner. In the next room were a couple of motion capture experts, testing their hardware. On the stage was a guy wearing motion capture gear. When his arms moved, so did the huge claws of the crab—and so did the wimpy wings of the tall bird. When his head moved, the eyes of the crab looked around, and the bird’s head moved around on top of its long neck.

Brad deGraf and puppeteer/animator Emre Yilmaz call this “…performance animation…a new kind of jazz. Also known as digital puppetry…it brings characters to life, i.e. ‘animates’ them, through real-time control of the three-dimensional computer renderings, enabled by fast graphics computers, live motion sampling, and smart software” (deGraf and Yilmaz 1999). When applying human-sourced motion to exaggerated cartoon forms, the human imagination is stimulated: “motion capture” escapes its negative association with droll, unimaginative literal recording. It inspires the human controller to think more like a puppeteer than an actor. Puppeteering is more out-of-body than acting. Anything can become a puppet (a sock, a salt shaker, a rubber hose).

deGraf and Yilmaz make this out-of-body transference possible by “re-proportioning” the data stream from the human controller to fit non-human anatomies.
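The “re-proportioning” described above can be sketched as a simple remapping of the motion stream, with a per-joint gain so that small human motions can drive a huge claw or a long neck. All joint names, gains, and function names here are hypothetical illustrations, not the actual Protozoa system:

```python
# Hypothetical sketch of "re-proportioning" a motion-capture stream:
# human joint rotations are remapped onto a non-human rig, with a
# per-joint amplitude gain (names and numbers invented for illustration).

RETARGET_MAP = {
    # human joint -> (creature joint, amplitude gain)
    "head":     ("crab_eyestalk", 1.5),
    "shoulder": ("crab_claw",     2.0),
    "elbow":    ("crab_claw_tip", 0.5),
}

def retarget(human_pose):
    """human_pose: {joint_name: rotation_in_degrees} from motion capture.

    Returns the creature's pose; joints with no mapping are dropped.
    """
    creature_pose = {}
    for joint, angle in human_pose.items():
        if joint in RETARGET_MAP:
            target, gain = RETARGET_MAP[joint]
            creature_pose[target] = angle * gain
    return creature_pose
```

One appeal of this scheme is that a single human performance can drive several differently-proportioned characters at once, as in the crab-and-bird demo described above: each character simply has its own retarget map.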

Research by Nick Yee found that people’s behaviors change as a result of having different virtual representations, a phenomenon he calls the Proteus Effect (Yee 2007). Artist Micha Cardenas spent 365 hours continuously as a dragon in Second Life. She employed a Vicon motion capture system to translate her motions into the virtual world. This project, called “Becoming Dragon”, explored the limits of body modification, “challenging” the one-year transition requirement that transgender people face before gender confirmation surgery. Cardenas told me, “The dragon as a figure of the shapeshifter was important to me to help consider the idea of permanent transition, rejecting any simple conception of identity as tied to a single body at a single moment, and instead reflecting on the process of learning a new way of moving, walking, talking and how that breaks down any possible concept of an original or natural state for the body to be in” (Cardenas 2010).

With this performance, Cardenas wanted to explore not only the issues surrounding gender transformation, but the larger question of how we experience our bodies, and what it means to inhabit the body of another being—real or simulated. During this 365-hour art-performance, the lines between real and virtual progressively blurred for Cardenas, and she found herself “thinking” in the body of a dragon. What interests me is this: how did Micha’s brain adapt to be able to “think” in a different body? What happened to her body image?

The Homunculus

Dr. Wilder Penfield was operating on a patient’s brain to relieve symptoms of epilepsy. In the process he made a remarkable discovery: each part of the body is associated with a specific region in the brain. He had discovered the homunculus: a map of the body in the brain. The homunculus is an invisible cartoon character. It can only be “seen” by carefully probing parts of it and asking the patient what he or she is feeling, or by watching different parts of the body twitch in response. Most interesting is the fact that some parts of the homunculus are much larger than others—totally out of proportion with normal human anatomy. There are two primary “homunculi” (sensory and motor) and their distorted proportions correspond to the relative differences in how much of the brain is dedicated to the different regions.

For instance, eyes, lips, tongue, and hands are proportionately large, whereas the skull and thighs are proportionately small. I would not want to encounter a homunculus while walking down the street or hanging out at a party—homunculi are not pretty—in fact, I find them quite frightening. Indeed they have cropped up in haunting ways throughout the history of literature, science, and philosophy.

But happily, for purposes of my research, any homunculus I encounter would be very good at nonverbal communication, because eyes, mouth, and hands are major expressive organs of the body. As a general rule, the parts of our body that receive the most information are also the parts that give the most. What would a “concert pianist homunculus” look like? Huge fingers. How about a “soccer player homunculus”? Gargantuan legs and tiny arms.

The Homuncular Avatar

Avatar code etches embodiment into virtual space. A “communication homunculus” was sitting on my shoulder while I was working at There.com and arguing with some computer graphics guys about how to engineer the avatar. Chuck Clanton and I were both advocates for dedicating more polygons and procedural animation capability to the avatar’s hands and face. But polygons are graphics-intensive (at least they were back then), and procedural animation takes a lot of engineering work. Consider this: in order to properly animate two articulated avatar hands, you need at least twenty extra joints, on top of the approximately twenty joints used in a typical avatar animation skeleton. That’s nearly twice as many joints.

The argument for full hand articulation was as follows: like the proportions of cortex dedicated to these communication-intensive areas of the body, socially-oriented avatars should have ample resources dedicated to face and hands.

When the developers of Traveler were faced with the problem of how to squeeze the most out of the few polygons they could render on the screen at any given time, they decided to just make avatars as floating heads—because their world centered on vocal communication (telephone meets avatar). The developers of ActiveWorlds, which had a similar polygonal predicament, chose whole avatar bodies (and they were quite clunky by today’s standards).

These kinds of choices determine where and how human expression will manifest.

Non-Human Avatars

Avatar body language does not have to be a direct prosthetic to our corporeal expression. It can extend beyond the human form; this theme has been a part of avatar lore since the very beginning. What are the implications of a non-human body language alphabet? It means that our body language alphabet can (and, I would claim, already has begun to) include semantic units, attributes, descriptors, and grammars that encompass a superset of human form and behavior.

The illustration below shows the pentadactyl morphology of various vertebrate limbs. This has implications for a common underlying code used to express meta-human forms and motions.

A set of parameters (bone lengths, angle offsets, motor control attributes, etc.) can be specified, along with gross morphological settings (e.g., four limbs and a tail; six limbs and no head; no limbs and no head). Once these parameters are in place to accommodate the various locomotion behaviors and forms of body language, they can be manipulated as genes, using interactive evolution interfaces or genetic algorithms, or simply tweaked directly in a customization interface.
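As a rough sketch of what treating such parameters as genes might look like (this is invented for illustration, not code from any actual avatar system; all field names are hypothetical), the parameter set becomes a small mutable “genome”:

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch: a morphology "genome" whose parameters can be
# perturbed by a genetic algorithm or tweaked directly in a UI.

@dataclass
class MorphologyGenes:
    limb_count: int = 4
    has_tail: bool = True
    bone_lengths: list = field(default_factory=lambda: [1.0, 0.8, 0.5])
    joint_angle_offsets: list = field(default_factory=lambda: [0.0, 0.1, -0.2])

    def mutate(self, rate=0.1):
        """Randomly perturb continuous genes; occasionally flip discrete ones."""
        for i in range(len(self.bone_lengths)):
            if random.random() < rate:
                self.bone_lengths[i] *= random.uniform(0.9, 1.1)
        if random.random() < rate:
            self.has_tail = not self.has_tail
```

An interactive-evolution interface would simply show the user a litter of mutated offspring and let them pick the most appealing one to breed from next.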

But this morphological space of variation doesn’t need to stop at the vertebrates. We’ve seen avatars in the form of fishes, floating eyeballs, cartoon characters, abstract designs—you name it. Artists, in the spirit of performance artist Stelarc, are exploring the expressive possibilities of avatars and remote embodiment. Micha Cardenas, Max Moswitzer, Jeremy Owen Turner and others articulate the very aesthetics of embodiment and avatarhood—the entire possible expression-space. What are the possibilities of having a visual (or even aural) manifestation of yourself in an alternate reality? As I mentioned before, my Second Life avatar is a non-humanoid creature, consisting of a cube with tentacles.

On each side of the cube are fractals based on an image of my face. This cube floats in space where my head would normally be. Attached to the bottom of the cube are several green tentacles that hang like those of a jellyfish. This avatar was built by JoannaTrail Blazer, based on my design. In the illustration, Joanna’s avatar is shown next to mine.

I chose a non-human form for a few reasons: One reason is that I prefer to avoid the uncanny valley, and will go to extremes in avoiding droll realism, employing instead the visual tools of abstraction and symbolism. By not trying to replicate the image of a real human, I can sidestep the problem of my avatar “not looking like me” or “not looking right”. My non-human avatar also allows me to explore the realm of non-human body language. On the one hand, I can express yes and no by triggering animations that cause the cube to oscillate like a normal head. But I could also express happiness or excitement by triggering an animation that causes the tentacles to flare out and oscillate, or to roll sensually like an inverted bouquet of cat tails. Negative emotions could be represented by causing the tentacles to droop limply (reinforced by having the cube-head slump downward). While the cube-head mimics normal human head motions, the tentacles do not correspond to any body language that I am physically capable of generating. They could tap other sources, like animal movement, and basic concepts like energy and gravity, expanding my capacity for expression.

Virtual Dogs

I want to come back to the topic of canine body language now, but this time, I’d like to discuss some of the ways that sheer dogginess in the gestalt can be expressed in games and virtual worlds. The canine species has made many appearances throughout the history of animation research, games, and virtual worlds. Rob Fulop, of the ‘90s game company PF Magic, created a series of games based on characters made of spheres, including the game “Dogz”. Bruce Blumberg of the MIT Media Lab developed a virtual interactive dog AI, and is also an active dog trainer. Dogs are a great subject for doing AI—they are easier to simulate than humans, and they are highly emotional, interactive, and engaging.

While prototyping There.com, Will and I developed a dog that would chase a virtual Frisbee tossed by your avatar (involving simultaneous mouse motion to throw the Frisbee and hitting the space key to release the Frisbee at the right time). Since I was interested in the “essence of dog energy”, I decided not to focus on the graphical representation of the dog so much as the motion, sound, and overall behavior of the dog. So, for this prototype, the dog had no legs: just a sphere for a body and a sphere for a head (each rendered as toon-shaded circles—flat coloring with an outline). These spheres could move slightly relative to each other, so that the head had a little bounce to it when the dog jumped.

The eyes were rendered as two black cartoon dots, and they would disappear when the dog blinked. Two ears, a tail, and a tongue were programmed, because they are expressive components. The ears were animated using forward dynamics, so they flopped around when the dog moved, and hung downward slightly from the force of gravity. I programmed some simple logic to make an oscillating, “panting” force in the tongue which would increase in frequency and force when the dog was especially active, tired, or nervous. I also programmed “ear perkiness” which would cause the ears to aim upwards (still with a little bit of droop) whenever a nearby avatar produced a chat that included the dog’s name. I programmed the dog to change its mood when a nearby avatar produced the chats “good dog” or “bad dog”. And this was just the beginning. Later, I added more AI allowing the dog to learn to recognize chat words, and to bond to certain avatars.
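Here is a minimal sketch of that kind of behavior logic. The function names, constants, and thresholds are hypothetical reconstructions for illustration, not the original There.com code:

```python
import math

# Sketch of the dog's simple expressive logic: panting oscillation,
# ear perkiness on hearing its name, and mood shifts on praise/scolding.

def panting_angle(t, activity):
    """Oscillating tongue angle at time t; frequency and amplitude
    grow with activity level (0.0 = calm, 1.0 = very active/nervous)."""
    freq = 2.0 + 6.0 * activity          # pants faster when worked up
    amp = 0.2 + 0.5 * activity
    return amp * math.sin(2 * math.pi * freq * t)

def ear_perkiness(chat_text, dog_name, current_perk):
    """Ears aim upward when a nearby chat mentions the dog's name
    (still rendered with a bit of gravity droop); otherwise they relax."""
    if dog_name.lower() in chat_text.lower():
        return 1.0
    return current_perk * 0.95           # gradual relaxation per update

def update_mood(chat_text, mood):
    """Mood (-1.0 .. 1.0) shifts when nearby avatars chat 'good dog' or 'bad dog'."""
    text = chat_text.lower()
    if "good dog" in text:
        return min(1.0, mood + 0.2)
    if "bad dog" in text:
        return max(-1.0, mood - 0.2)
    return mood
```

Each of these is cheap to evaluate every frame, which is the point: a handful of scalar signals driving ears, tongue, and tail buys a lot of perceived dogginess.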

Despite the lack of legs and its utterly crude rendering, this dog elicited remarkable puppy-like responses in the people who watched it or played with it. The implication is that what makes a dog a dog is not merely the way it looks, but the way it acts—its essence as a distinctly canine space-time energy event. Recall the exaggerated proportions of the human communication homunculus. For this dog, ears, tail, and tongue constituted the vast majority of computational processing. That was on purpose; this dog was intended as a communicator above all else. Consider the visual language of tail wagging that I brought up earlier, a visual language which humans (especially dog-owners) have incorporated into their unconscious vocabularies. What lessons might we take from the subject of wag semiotics, as applied to the art and science of wagging on the internet?

Tail Wagging on the Internet

In the There.com dog, the tail was used as a critical indicator of its mood at any given moment, rising when the dog was alert or aroused, drooping when sad or afraid, and wagging when happy. Take wagging as an example. The body language information packet for tail wagging consisted of a Boolean value sent from the dog’s Brain to the dog’s Animated Body, representing simply “start wagging” or “stop wagging”. It is important to note that the dog’s Animated Body is constituted on clients (computers sitting in front of users), whereas the dog’s Brain resides on servers (arrays of big honkin’ computers in a data center that manage the “shared reality” of all users). The reason for this mind/body separation is to make sure that body language messaging (as well as overall emotional state, physical location and orientation of the dog, and other aspects of its dynamic state) is truthfully conveyed to all the users.

Clients are subject to animation frame rates lagging, internet messages dropping out, and other issues. All users whose avatars are hanging out in a part of the virtual world where the dog is hanging out need to see the same behavior; they are all “seeing the same dog”—virtually-speaking.

It would be unnecessary (and expensive in terms of internet traffic) for the server to send individual wags several times a second to all the clients. Each client’s Animated Body code is perfectly capable of performing this repetitive animation. And, because of different rendering speeds among various clients, lag times, etc., the wagging tail on your client might be moving left-right-left-right, while the wagging tail on my client is moving right-left-right-left. In other words, they might be out of phase, or wagging at slightly different speeds. These slight differences have almost no effect on the reading of the wag. “I am wagging my tail” is the point. That’s a Boolean message: one bit. The reason I am laboring over this point harkens back to the chapter on a Body Language Alphabet: a data-compressed message, easy to zip through the internet, is efficient for helping virtual worlds run smoothly. It is also inspired by biosemiotics: Mother Nature’s efficient message-passing protocol.
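A toy sketch of this mind/body split, with hypothetical class names (not the There.com codebase): the server-side Brain sends a one-bit message only when the wag state changes, and each client animates the repetitive motion locally, so phase differences between clients simply don't matter:

```python
import math

# Sketch of the one-bit wag protocol: the Brain (server) broadcasts state
# changes; the Animated Body (client) runs the repetitive animation locally.

class DogBrainServer:
    def __init__(self):
        self.wagging = False
        self.outbox = []                 # stands in for the network layer

    def set_wagging(self, wagging):
        if wagging != self.wagging:      # only send on a state change
            self.wagging = wagging
            self.outbox.append({"msg": "wag", "on": wagging})  # one bit of payload

class DogBodyClient:
    def __init__(self):
        self.wagging = False

    def receive(self, message):
        if message["msg"] == "wag":
            self.wagging = message["on"]

    def tail_angle(self, t, freq=4.0):
        """Client-local animation; phase and frame rate may differ per client."""
        return math.sin(2 * math.pi * freq * t) if self.wagging else 0.0
```

Note that calling `set_wagging(True)` twice in a row produces only one message: the wire carries state changes, not wags.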

On Cuttlefish and Dolphins

The homunculus of Homo Sapiens might evolve into a more plastic form—maybe not on a genetic/species level, but at least during the lifetimes of individual brains, assisted by the scaffolding of culture and virtual world technology. This plasticity could approach strange proportions, even as our physical bodies remain roughly the same. As we spend more of our time socializing and interacting on the internet as part of the program to travel less to reduce greenhouse gases, our embodiment will naturally take on the forms appropriate to the virtual spaces we occupy. And these spaces will not necessarily mimic the spaces of the real world, nor will our embodiments always look and feel like our own bodies. And with these new virtual embodiments will come new layers of body language. Jaron Lanier uses the example of cephalopods as species capable of animated texture-mapping and physical morphing used for communication and camouflage—feats that are outside the range of physical human expression.


In reference to avatars, Lanier says, “The problem is that in order to morph, humans must design avatars in laborious detail in advance. Our software tools are not yet flexible enough to enable us, in virtual reality, to think ourselves into different forms. Why would we want to? Consider the existing benefits of our ability to create sounds with our mouths. We can make new noises and mimic existing ones, spontaneously and instantaneously. But when it comes to visual communication, we are hamstrung…We can learn to draw and paint, or use computer-graphics design software. But we cannot generate images at the speed with which we can imagine them” (Lanier 2006).

Once we have developed the various non-humanoid puppeteering interfaces that would allow Lanier’s vision, we will begin to invent new visual languages for realtime communication. Lanier believes that in the future, humans will be truly “multihomuncular”.

Researchers from Aberdeen University and the Polytechnic University of Catalonia found that dolphins use discrete units of body language as they swim together near the surface of water. They observed efficiency in these signals, similar to what occurs in frequently-used words in human verbal language (Ferrer i Cancho and Lusseau 2009). As human natural language goes online, and as our body language gets processed, data-compressed, and alphabetized for efficient traversal over the internet, we may start to see more patterns of our embodied language that resemble those created by dolphins, and many other social species besides. The background communicative buzz of the biosphere may start to make more sense in the process of whittling our own communicative energy down to its essential features, and being able to analyze it digitally. With a universal body language alphabet, we might someday be able to animate our skin like cephalopods, or speak “dolphin”, using our tails, as we lope across the virtual waves.

We Need a Revolution in Software Interaction Design

We need a revolution in software interaction design. Apple and Google will not provide it. They are too big. They are not the solution. They are the problem.

This revolution will probably come from some unsuspecting source, like the Maker Movement, or an independent group of people or company that is manufacturing physical goods. Here’s why: as computers increasingly inhabit physical objects, as the “internet of things” grows, as more and more computing makes its way into cars, clothing, and houses, there will come new modalities of interacting with software. And it will be dictated by properties of the physical things themselves. Not by the whims and follies of interface designers, whose entire universe consists of a rectangle of pixels and the touch of a user.

Let me explain what I mean when I say that software interaction design needs a major paradigm shift.

Affordance

I hate having to use the word “affordance”. It’s not a very attractive or colorful word. But it’s the best I’ve got. If you’ve read my other blog posts or my book, Virtual Body Language, you have heard me use it before. The word was given higher currency in the user interface design world thanks to Donald Norman, whose book, The Design of Everyday Things, I highly recommend. (He was forced to change the name from The Psychology of Everyday Things. I like his original title better.)

Affordance, a term originally used in J.J. Gibson’s theory of ecological psychology, refers to the possible ways an animal or human can interact with an object (which can be another animal). We often use it in reference to the ways that one interprets the visual, tactile, and sonic features of a thing, be it an egg-beater, a frightened dog, or a new version of iMovie.

A “natural affordance” is a property that elicits an understanding or response that does not have to be learned – it’s instinctual. In reference to industrial design: a knob affords twisting, and perhaps pushing, while a cord affords pulling. The term has more recently been used in relation to UI design to indicate the easy discoverability of possible actions.

UPDATE…

Bruce Tognazzini pointed out to me that Donald Norman has more recently been using the term “Perceived Signifier”. And this article explains some of the new semantic parsing going on regarding the word “affordance”. Personally, I would be happy if all of this got subsumed into the language of semiotics.

I believe we have WAY TOO MANY artificial affordances in our software interfaces. I will repeat the call of many wise and learned designers: we need to build tools with natural affordances in mind. Easier said than done, I realize. Consider a common modern interface, such as the typical Apple save dialog that appears when you download a file:

As a general rule, I like to download files to my desktop instead of specifying the location using this dialog. Once a file is there, I then move it to the appropriate place. Even though it takes me a bit longer, I like the feeling of putting it there myself. My muscles and my brain prefer this.

Question: have you ever downloaded a file to a specific, somewhat obscure folder, and then later downloaded another file, thinking it was going to your desktop, and then not been able to find it? Well, you probably didn’t think to check the dialog box settings. You just hit SAVE, like you usually do, right? It’s automatic. You probably forgot that you had previously set the dialog box to that obscure, hard-to-remember folder, right? Files can get lost easily, right? Here’s the reason:

Software files are abstract concepts. They have no physical location, no mass, no weight. All the properties that we associate with files are virtual. The computer interface is just a bundle of physical metaphors (primarily desktop metaphors) that provide us with affordances so we can think about them as if they were actual things with properties.

The dialog I showed you doesn’t visually express “in” in a natural way. The sensation of the action is not like putting a flower in a vase or drawing a dot in a circle.

Herein lies a fundamental problem of software interface and interaction design. Everything is entirely arbitrary. Natural laws do not apply.

Does the natural world present the same kind of problem as we have when we lose files? Sometimes, but not so often. That’s because the natural world is full of affordances. Our memories are decorated with sensations, associations, and connections, related to our actions. If I physically put a rubber band in an obscure bowl on the top shelf, I have reason to remember this action. Muscle-memory plays a major part in this. With downloading files on a computer, you may not know where you put the file. In fact, sometimes you can’t know!

Indeed, it’s not fair to use the word “put”, since “putting” is a deliberate, conscious act. An accidental fumble on the keyboard can trigger a keyboard shortcut that deletes a file or opens a new window. When this happens, the illusion breaks down completely: this is not a real desktop.

Am I getting too esoteric? Okay, I’ll get more down-to-earth and gritty…

My Deteriorating Relationship with Apple

Bruce Tognazzini says: “While Apple is doing a bang-up job of catering to buyers, they have a serious disconnect at the point at which the buyer becomes a user.” 

My recent experiences with Apple software interfaces have left me worse than disenchanted. I am angry. Apple has changed the interface and interaction of one too many of its products, causing my productivity in some applications (like iMovie) to come to a screeching halt.

At the end of the day, I’d rather keep my old computer with my trusty collection of tools than have a shiny new, sexy, super-thin MacBook that replaces them. There are years of muscle memory that I have built up in learning and using these tools. My career depends on this muscle memory. When Apple changes these interactions for no clear reason, I become very angry. And so should all its customers.

And then…there is iTunes.

Let’s not talk about iTunes.

Apple’s latest operating system disabled many of the interactions that I (and many others) have been using for decades.

One annoyed user said: “I wish Apple wouldn’t change the fundamental functions of the Finder like this. First, they reversed the scroll button directions then they changed the default double-click to open a folder in a new window function. I’m just glad that they don’t make cars otherwise we would have the brake and accelerator pedal functions reversed (with a pull on the wiper lever to revert to the original)!”

And from the same thread… “If a company cares about users, it doesn’t make a change just for the sake of change that wipes out thirty years of muscle memory.”

Am I saying that a company like Apple should never change its interfaces? Of course not. But when they do, they should do it carefully, for valid reasons, and gracefully. And they should ALWAYS give their customers the choice of if, when and how to adapt to these changes. Easier said than done, I know. Apple is like most other companies. They feel they have to constantly make NEW products. And that’s because our capitalist system emphasizes growth over sustainability.

Will the Maker Revolution Cure Us of Arbitrary, Ephemeral Design?

This is why I believe we need to return to natural affordances that are as intuitive as putting a spoon in a bowl or carving the bark off a stick. The more natural the affordance, the less arbitrary the design. Designers will have to be less cocky, more reverent to human nature and physical nature.

When real physical things start dictating how we interact with software, the playing field will be different. And software interaction designers will have to fully understand natural affordances, and design for them. That’s a revolution I can get behind.
