The Miracle of My Hippocampus – and other Situated Mental Organs

I’m not very good at organizing.

The pile of papers, files, receipts, and other stuff and shit accumulating on my desk at home has grown to huge proportions. So today I decided to put it all into several boxes and bring it to the co-working space – where I could spend the afternoon going through it and pulling the items apart. I’m in the middle of doing that now, and I’m feeling fairly productive, actually.

Some items go into the trash bin; some go to recycling; most of them get separated into piles where they will be stashed away into a file cabinet after I get home. At the moment, I have a substantial number of mini-piles. These accumulate as I sift through the boxes and decide where to put the items.

Here’s the amazing thing: when I pull an item out of the box, say, a bill from Verizon, I am supposed to put that bill onto the Verizon pile, along with the other Verizon bills that I have pulled out. When this happens, my eye and mind automatically gravitate towards the area on the table where I have been putting the Verizon bills. I’m not entirely conscious of this gravitation to that area.

Gravity Fields in my Brain

What causes this gravitation? What is happening in my brain that causes me to look over to that area of the table? It seems that my brain is building a spatial map of categories for the various things I’m pulling out of the box. I am not aware of it, and this is amazing to me – I just instinctively look over to the area on the table with the pile of Verizon bills, and…et voilà – there it is.

Other things happen too. As this map takes shape in my mind (and on the table), priorities line up in my subconscious. New connections get made and old connections get revived. Rummaging through this box has a therapeutic effect.

The fact that my eye and mind know where to look on the table is really not such a miracle, actually. It’s just my brain doing its job. The brain has many maps – spatial, temporal, etc. – that help connect and organize domains of information. One part of the brain – the hippocampus – is associated with spatial memory.


User Interface Design, The Brain, Space, and Time

I could easily collect numerous examples of software user interfaces that do a poor job of tapping the innate power of our spatial brains. These problematic user interfaces provoke the classic bouts of confusion, frustration, undiscoverability, and steep learning curves that we bitch about when comparing software interfaces.

This is why I am a strong proponent of Body Language (see my article about body language in web site design) as a paradigm for user interaction design. Similar to the body language that we produce naturally when we are communicating face-to-face, user interfaces should be designed with the understanding that information is communicated in space and in time (situated in the world). There is great benefit for designers to have some understanding of this aspect of natural language.

Okay, back to my pile of papers: I am fascinated with my unconscious ability to locate these piles as I sift through my stuff. It reminds me of why I like to use the fingers of my hand to “store” a handful of information pieces. I can recall these items later once they have been stored in my fingers (the thumb is usually saved for the most important item).

Body Maps, Brain, and Memory


Last night I was walking with my friend Eddie (a fellow graduate of the MIT Media Lab, where the late Marvin Minsky taught). Eddie told me that he once heard Marvin telling people how he liked to remember the topics of an upcoming lecture: he would place the various topics onto his body parts.

…similar to the way the ancient Greeks learned to remember stuff.

During the lecture, Marvin would shift his focus to his left shoulder, his hand, his right index finger, etc., in order to recall various topics or concepts. Marvin was tapping the innate spatial organs in his brain to remember the key topics in his lecture.

My Extended BodyMap

My body. My home town. My bed. My shoes. My wife. My community. The piles in my home office. These things in my life all occupy a place in the world. And these places are mapped in my brain to events that have happened in the past – or that happen on a regular basis. My brain is the product of countless generations of Darwinian iteration over billions of years.

All of this happened in space and time – in ecologies, animal communities, among collaborative workspaces.

Even the things that have no implicit place and time (like the many virtualized aspects of our lives on the internet)… even these things occupy a place and time in my mind.

Intelligence has a body. Information is situated.

Hail to Thee Oh Hippocampus. And all the venerated bodymaps. For you keep our flitting minds tethered to the world.

You offer guidance to bewildered designers – who seek the way – the way that has been forged over billions of years of intertwingled DNA formation…resulting in our spatially and temporally-situated brains.


We must not let the no-place, no-time, any-place, any-time quality of the internet deplete us of our natural spacetime mapping abilities. In the future, this might be seen as one of the greatest challenges of our current digital age.


Failure and Recovery – an Important Concept in Design…and Life

I have observed that good design takes into consideration two important aspects of use:

  1. Failure Rate
  2. Recovery Rate

Well-designed products or software interfaces have low failure rates, and when failures do occur, they are small. This is related to the concept of fault tolerance: a well-designed product or interface should not fail easily, and failure should not be complete. As the Wikipedia article on fault tolerance puts it:

“If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system in which even a small failure can cause total breakdown.”

A well-designed product or interface should also be easy to recover from failure.
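In software terms, one common way to honor both properties is to degrade gracefully and keep recovery cheap. Here’s a minimal sketch of the idea – my own illustration, with hypothetical function and file contents, not a recipe from any particular product:

```python
import json

def load_preferences(path: str) -> dict:
    """Degrade gracefully: a missing or corrupt preferences file
    should cost the user their customizations, not the whole app."""
    defaults = {"theme": "light", "font_size": 12}
    try:
        with open(path) as f:
            saved = json.load(f)
    except (OSError, ValueError):
        # Partial failure: fall back to defaults instead of crashing.
        return defaults
    # Recovery is cheap: whatever was saved overrides the defaults.
    return {**defaults, **saved}
```

A small failure (a bad file) produces a small loss (default settings), and recovery is as simple as fixing or deleting the file.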

I recently bought a set of headphones. These were good headphones in most respects…until they broke at the complicated juncture where the ear pieces rotate. Once these headphones broke, there was really nothing I could do to fix them. But I decided to try – using a special putty that dries and holds things in place.

It took a long time to figure out how to do this. When I finally repaired the broken part, I realized that the wires had been severed inside. There was no sound coming through. I had no choice but to put them into the garbage bin where they will contribute to the growing trash heap of humanity. Bad design is not just bad for consumers: it’s bad for the planet.

While most people (including myself) would claim that Audio Technica headphones are generally well-designed, we are usually not taking into account what happens when they break.

Sometimes the breakdown is cognitive in nature. There’s a Keurig coffee machine at work. It uses visual symbols to tell the user what to do.

As I have pointed out in another article, visual languages are only useful to the extent that the user knows the language. And designers who use visual language need to understand that natural language includes how something behaves, and shows its internal states, not just what kinds of icons it displays on its surface.

The Keurig coffee machine is a nice specimen in many respects. But I discovered that if I perform the necessary actions in the wrong order, it fails. Namely: if I insert the little coffee pod and press down the top part before the water has finished heating up, it doesn’t allow me to brew the coffee.

So…after the water finished heating up, I saw the buttons light up. “Cool” – I said.

But nothing happened when I pressed a button to dispense the coffee. “WTF” – I said. Then I decided to open up the lid and close it again. That did the trick. The lights started blinking. But I was not satisfied with the solution. The discoverability of this bit of behavioral body language ranks low on my list.

Hint: “Blinking Lights” Means “You Can Press a Button”
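Out of curiosity, here’s a sketch of the state machine that might be hiding inside – entirely my own guess, with hypothetical state names, not Keurig’s actual firmware:

```python
# A hypothetical reconstruction of the brewing logic (my guess, not
# Keurig's code). The flaw: closing the lid during HEATING is silently
# ignored, and the only cue that a press will work is the blinking lights.

state = "HEATING"

def close_lid():
    global state
    if state == "READY":
        state = "ARMED"   # lights blink: a button press now works
    # If still HEATING, the early lid-close is silently dropped.

def water_heated():
    global state
    if state == "HEATING":
        state = "READY"   # an already-closed lid is never re-checked

def press_brew() -> str:
    return "brewing" if state == "ARMED" else "nothing happens"
```

Re-opening and closing the lid simply re-runs close_lid() in the READY state – which is why that odd ritual did the trick.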

I have to say, though: I have experienced worse examples of undiscoverability with appliances – especially appliances that are so simple, sleek, and elegant that they have no body language to speak of. This infuriates me to no end. It is not unlike the people I meet on occasion who exhibit almost no body language. It makes me squirm. I want to run away.

Now, thanks to YouTube and the interwebs in general, there are plenty of people who can help us get around these problems…such as one fellow who posted a solution to a related blinking-light problem.


I realize there are not many people who are bringing up this seemingly small problem. But I bring it up because it is just one of many examples of poor affordance in industrial design that are so small as to be imperceptible to the average user. However, the aggregate of these small imperceptible stumbles that occur throughout our lives constitutes a lowering of the quality of life. And they dull our sense of what good design should be about.

Tiny Rapid Failures and Tiny Rapid Recoveries

Now consider what happens when you ride a bicycle. When riding a bike, you may occasionally lose balance. But that balance can easily be recovered by shifting your weight, turning the wheel, or several other actions – many of which are unconscious to you.

Think of riding a bike as a high rate of tiny failures paired with a high rate of tiny recoveries.

Taken to the extreme: a bird standing on one leg has neuromuscular controls that correct its center of gravity at such a high rate, and in such tiny amounts, that we don’t even notice (and neither does the bird).
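For the code-minded: a proportional feedback loop is about the simplest possible model of this idea. A toy sketch of my own – not a model of real neuromuscular control:

```python
import random

angle = 5.0   # initial lean, in degrees
GAIN = 0.5    # how strongly each correction opposes the error

# Every tick: a tiny failure (a random nudge), then a tiny recovery
# (a correction proportional to the lean). Neither is noticeable on
# its own, but together they keep the system upright indefinitely.
for tick in range(100):
    angle += random.uniform(-0.5, 0.5)  # tiny failure
    angle -= GAIN * angle               # tiny recovery

print(f"lean after 100 ticks: {angle:.2f} degrees")  # hovers near zero
```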


Natural Affordance: Perceived Signifiers

User interfaces (in computer software as well as in appliances) should use natural affordances whenever possible so that users can make a good guess as to whether something is about to fail, whether it is failing, how much it is failing, and how to recover.

The best design allows for rapid, and often unconscious correction while using the product. Good design is invisible!

Donald Norman brought some of these ideas to the fore when he wrote The Design of Everyday Things. We still have a lot to learn from his teachings.

Design is a way of life. If you design your life with resilience in mind – with the ability to recognize failures before they happen, and with the ability to recover from those failures, you will live a better life. The same goes for designing the technology that increasingly fills our lives.

Why is it a Color “Wheel” and Not a Color “Line”?

This blog post was published in May of 2012 on EyeMath. It is being migrated to this blog, with a few minor changes.

I’ve been discussing color algorithms recently with a colleague at Visual Music Systems.

We’ve been talking about the hue-saturation-value model, which represents color in a more intuitive way for artists and designers than the red-green-blue model. The “hue” component is easily explained in terms of a color wheel.

Ever since I learned about the color wheel in art class as a young boy, I had been under the impression that the colors are cyclical; periodic. In other words, as you move through the color series, it repeats itself: red, orange, yellow, green, blue, violet…and then back to red. You may be thinking, yes of course…that’s how colors work. But now I have a question…

Why?

Consider five domains that can be used as the basis for inventing a color theory:

(1) the physics of light, (2) the human retina, (3) the human brain, (4) the nature of pigment and paint, and (5) visual communication and cultural conventions.

(1) In terms of light physics, the electromagnetic spectrum has a band visible to the human eye, with violet at one end and red at the other. Beyond violet is ultraviolet, and beyond red is infrared. Once you pass out of the visible spectrum, there ain’t no comin’ back. There are no wheels in the electromagnetic spectrum.

(2) In terms of the human retina, our eyes can detect various wavelengths of light. It appears that our color vision system incorporates two schemes: (1) trichromatic (red-green-blue), and (2) the opponent process (red vs. green, blue vs. yellow, black vs. white). I don’t see anything that would lead me to believe that the retina “understands” colors in a periodic fashion, as represented in a color wheel. However, it may be that the retina “encourages” this model to be invented in the human brain…

(3) In terms of the brain, our internal representations of color don’t appear to be based on the one-dimensional electromagnetic spectrum. Other factors are more likely to have influence, such as the physiology of the retina, and the way pigments can be physically mixed together (a human activity dating back thousands of years).

(4) Pigment and paint are very physical materials that we manipulate (using subtractive color), thereby constituting a strong influence on how we think about and categorize color.

(5) Finally: visual communication and culture. This is the domain in which the color wheel was invented, with encouragement from the mixing properties of pigment, the physiology of the retina, and the mathematical processes that are formulated in our brains. (I should mention another influence: technology…such as computer-graphics displays).

Red-Green-Blue

Consider the red-green-blue model, which defines a 3D color space – often represented as a cube. This is a common form of the additive color model. Within the volume of the cube, one can trace a circle, or a hexagon, or any other cyclical path one wishes to draw. This cyclical path defines a periodic color representation (a color wheel). A volume yields 2D shapes, traced onto planes that slice through the volume. It’s a process of reducing dimensions.
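To see this dimension reduction concretely, here’s a small sketch of my own using Python’s standard colorsys module. It walks the hue through a full turn and prints the closed loop it traces through the RGB cube:

```python
import colorsys

# Hue is periodic on [0, 1): stepping it through a full turn at full
# saturation and value traces a closed loop through the RGB cube,
# starting and ending at red.
for step in range(12):
    hue = step / 12.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    print(f"hue={hue:.2f} -> RGB=({r:.2f}, {g:.2f}, {b:.2f})")
```

The wheel, in other words, is a one-dimensional loop drawn inside a three-dimensional space – not something inherited from the one-dimensional spectrum.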

But the electromagnetic spectrum is ONE-DIMENSIONAL. The physical basis for colored light cannot yield a higher-dimensional color space. The red-green-blue model (or any multi-dimensional space) therefore could not originate from the physics of light.

DID HUMANS INVENT PURPLE IN ORDER TO GLUE RED AND VIOLET TOGETHER?

An alternate theory as to the origin of the color wheel is this: the color wheel was created by taking the two ends of the visible spectrum and connecting them to form a loop (and adding some purple to form a connective link). I just learned that purple is NOT a spectral color (although “violet” is :) Purple can only be made by combining red and blue. Here’s an explanation by Deron Meranda, in a piece called…

PURPLE: THE FAKE COLOR – OR, WHAT REALLY LIES AT THE END OF A RAINBOW?

And here’s a page about how purple is constructed in the retina: HOW CAN PURPLE EXIST?

Did the human mind and human society impose circularity onto the color spectrum in order to contain it? Was this encouraged by the physiology of our eyes, in which various wavelengths are perceived, and mixed (mapping from a one-dimensional color space to a higher-dimensional color space)? Or might it be more a matter of the influence of pigments, and the age-old technology of mixing paints?

Might the color wheel be a metaphorical blend between the color spectrum and the mixing behavior of pigment?

Similar questions can be applied to many mathematical concepts that we take for granted. We understand number and dimensionality because of the ways our bodies, and their senses, map reality to internal representations. And this ultimately influences culture and language, and the ways we discuss things…like color…which influences the algorithms we design.

 

The Beauty of Gray Code

Gray code turns up all over the physical world – in encoder disks, binary clocks, structured-light 3D scanning, and rotary sensors. Some sources:

http://www.mathworks.com/matlabcentral/fileexchange/40928-generate-gray-code-disk

http://anthony.liekens.net/index.php/Misc/TrueBinaryTime

http://vision.middlebury.edu/~schar/papers/structlight/p1.html

http://www.jeffreythompson.org/blog/tag/gray-code/

http://www.fachlexika.de/technik/mechatronik/sensor.html

Gray Code is an alternative binary representation, cleverly devised so that only one bit changes between any two adjacent numbers. If a bit is misread during a transition then, at worst, the read value will be off by no more than one unit.

This has tremendous value in the real world. Computers might be digital, but we live in an analog world. Interfaces between these need to be carefully considered.
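For the curious, here’s a minimal Python sketch of the standard binary-reflected Gray code and its inverse (my own illustration):

```python
def binary_to_gray(n: int) -> int:
    """Convert a non-negative integer to binary-reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the conversion by cascading XOR down the bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent values differ in exactly one bit -- the property that keeps
# a rotary encoder from being wildly misread at sector boundaries.
for i in range(8):
    print(f"{i} -> {binary_to_gray(i):03b}")
# prints: 000, 001, 011, 010, 110, 111, 101, 100
```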


http://www.qsl.net/oe5jfl/encoder.htm

Why Having a Tiny Brain Can Make You a Good Programmer

This post is not just for software developers. It is intended for a wider readership; we all encounter complexity in life, and we all strive to achieve goals, to grow, to become more resilient, and to be more efficient in our work.


Here’s what I’m finding: when I am dealing with a complicated software problem – a problem that has a lot of moving parts and many dimensions – I can easily get overwhelmed with the complexity. Most programmers have this experience when a wicked problem arises, or when a nasty bug is found that requires delving into unknown territories or parts of the code that you’d rather just forget about.

Dealing with complexity is a fact of life in general. What’s a good rule of thumb?

Externalize

We can only hold so many variables in our minds at once. I have heard figures like “about 7”. But of course, this raises the question of what a “thing” is. Let’s just say that there are only so many threads of a conversation, only so many computer variables, only so many aspects to a system that can be held in the mind at once. It’s like juggling.

Most of us are not circus clowns.

Externalizing is a way of taking parts of a problem that you are working on and manifesting them in some physical place outside of your mind. This exercise can free the mind to explore a few variables at a time…and not drop all the balls.

Dude, Your Brain is Too Big

I have met several programmers in my career who have an uncanny ability to hold many variables in their heads at once. These guys are amazing. And they deserve all the respect that is often given to them. But here’s the problem:

People who can hold many things in their minds at once can write code, think about code, and refactor code in a way that most of us mortals could never do. While these people should be admired, they should not set the standard for how programming should be done. Their uncanny genius does not equate with good engineering.

This subject is touched upon in an article by Levi Notik, who says:

“It’s not hard to see why the popular perception of a programmer is one of some freak genius, sitting by a computer, frantically typing while keeping a million things in their head and making magic happen”

A common narrative is that these freak geniuses are “ideal for the job of programming”. In some cases, this may be true. But software has a tendency to become complex in the same way that rain water has a tendency to flow into rivers and eventually into the ocean. People with a high tolerance for complexity or a savant-like ability to hold many things in their minds are not (I contend) the agents of good software design.

I propose that people who cannot hold more than a few variables in their minds at once have something very valuable to contribute to the profession. We (and I’m talking about those of us with normal brains…but who are very resourceful) have built a lifetime’s worth of tools (mental, procedural, and physical) that allow us to build complexity – without limit – that has lasting value, and which other professionals can use. It’s about building robust tools that can outlive our brains – which gradually replace memory with wisdom.

My Fun Fun Fun Job Interview

I remember being in a job interview many years ago. The guy interviewing me was a young cocky brogrammer who was determined to show me how amazingly clever and cocky he could be, and to determine how amazingly clever and cocky I was willing to be. He asked me how I would write a certain algorithm (doesn’t matter what it was – your typical low-level routine).

Well, I was stumped. I had nothing. I knew I had written one of these algorithms before but I couldn’t for the life of me remember how I did it.

Why could I not remember how I had written the algorithm?

Because I did such a good job at writing it, testing it, and optimizing it, that I was able to wrap it up in a bow, tuck it away in a toolbox, and use it for eternity – and NEVER THINK ABOUT HOW IT WORKED ANY MORE.

Hello.

“Forgetting” is not only a trick we use to un-clutter our lives – it actually allows us to build complex, useful things.

Memory is way too precious to be cluttered with nuts and bolts.

Consider a 25-year-old brogrammer who relies on his quick wit and multitaskery. He will not be the same brogrammer 25 years later. His nimble facility for details will gradually give way to wisdom – or at least one would hope.

I personally think it is tragic that programming is a profession dominated by young men with athletic synapses. (At least that is the case here in the San Francisco Bay area). The brains of these guys do not represent the brains of most of the people who use software.

Over time, the tools of software development will – out of necessity – rely less and less on athletic synapses and clever juggling, and more on plain good design.

“People think that computer science is the art of geniuses but the actual reality is the opposite, just many people doing things that build on each other, like a wall of mini stones.” – Donald Knuth

IS “ARTIFICIAL LIFE GAME” AN OXYMORON?

(This is a re-posting from Self Animated Systems)


Artificial Life (Alife) began with a colorful collection of biologists, robot engineers, computer scientists, artists, and philosophers. It is a cross-disciplinary field, although many believe that biologists have gotten the upper hand on the agendas of Alife. This highly-nuanced debate is alluded to in this article.

Games

What better way to get a feel for the magical phenomenon of life than through simulation games! (You might argue that spending time in nature is the best way to get a feel for life; I would suggest that a combination of time with nature and time with well-crafted simulations is a great way to get deep intuition. And I would also recommend reading great books like The Ancestor’s Tale :)

Simulation games can help build intuition on subjects like adaptation, evolution, symbiosis, inheritance, swarming behavior, food chains….the list goes on.

On the more abstract end of the spectrum are simulation-like interactive experiences involving semi-autonomous visual stuff (or sound) that generates novelty. Kinetic art that you can touch and influence – and in which you can witness lifelike dynamics – can be more than just aesthetically and intellectually stimulating.

These interactive experiences can also build intuition and insight about the underlying forces of nature that come together to oppose the direction of entropy (that ever-present tendency for things in the universe to decay).


On the less-abstract end of the spectrum, we have virtual pets and avatars (a subject I discussed in a keynote at VISIGRAPP).

“Hierarchy Hinders” – Lesson from Spore

Will Wright, the designer of Spore, is a celebrated simulation-style game designer who introduced many Alife concepts in the “Sim” series of games. Many of us worried that his epic Spore would encounter some challenges, considering that Maxis had been acquired by Electronic Arts. The Sims was quite successful, but Spore fell short of expectations. Turns out there is a huge difference between building a digital dollhouse game and building a game about evolving lifeforms.

Also, mega-game corporations have their share of social hierarchy, with well-paid executives at the top and sweatshop animators and code monkeys at the bottom. Hierarchy (of any kind) is generally not friendly to artificial life.

For blockbuster games, there are expectations of reliable, somewhat repeatable behavior, highly-crafted game levels, player challenges, scoring, etc. Managing expectations for artificial life-based games is problematic. It’s also hard to market a game which is essentially a bunch of game-mechanics rolled into one. Each sub-game features a different “level of emergence” (see the levels described below for reference). Spore presents several slices of emergent reality, with significant gaps in-between. Spore may have also suffered partly due to overhyped marketing.

Artificial Life is naturally and inherently unpredictable. It is close cousins with chaos theory, fractals, emergence, and uh…life itself.

Emergence

I once drew a graph showing how an Alife simulation (or any emergent system) creates novelty, creativity, adaptation, and emergent behavior. This emergence grows out of the base-level inputs into the system. At the bottom are atoms, molecules, and bio-chemistry. Simulated protein-folding for discovering new drugs might be an example of a simulation that explores the space of possibilities and essentially pushes up to a higher level (protein-folding creates the three-dimensional structure that makes complex life possible).

The middle level might represent some evolutionary simulation whereby new populations emerge that find a novel way to survive within a fitness landscape. On the higher level, we might place artificial intelligence, where basic rules of language, logic, perception, and internal modeling of the world might produce intelligent behavior.

In all cases, there is some level of emergence that takes the simulation to a higher level. The more emergence, the more the simulation is able to exhibit behaviors on the higher level. What is the best level of reality to create an artificial life game? And how much emergence is needed for it to be truly considered “artificial life”?
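To get a feel for how little machinery emergence needs, here’s a tiny sketch of my own (not from any published game) using Wolfram’s Rule 110 – a one-dimensional cellular automaton in which purely local rules give rise to long-lived, structured patterns at the global level:

```python
RULE = 110             # a 1D cellular automaton famous for rich behavior
WIDTH, STEPS = 64, 24

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single seed cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # Each cell's next state depends only on its immediate neighbors,
    # yet structured, persistent patterns emerge at the global level.
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```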

Out Of Control

Can a mega-corporation like Electronic Arts give birth to a truly open-ended artificial life game? Alife is all about emergence. An Alife engineer or artist expects the unexpected. Surprise equals success. And the more unexpected, the better. Surprise, emergent novelty, and the unexpected – these are not easy things to manage…or to build a brand around – at least not in the traditional way.

Maybe the best way to make an artificial life game is to spread the primordial soup out into the world, and allow “crowdsourced evolution” of emergent lifeforms. OpenWorm comes to mind as a creative use of crowdsourcing.

What if we replaced traditional marketing with something that grows organically within the culture of users? What if, in addition to planting the seeds of evolvable creatures, we also planted the seeds of an emergent culture of users? This kind of problem is not unfamiliar to many internet startups.

Are you a fan of artificial life-based games? God games? Simulations for emergence? What is your opinion of Spore, and the Sims games that preceded it?

This is a subject that I have personally been interested in for my entire career. I think there are still unanswered questions. And I also think that there is a new genre of artificial game that is just waiting to be invented…

…or evolved in the wild.

Onward and Upward.

-Jeffrey

The Unicorn Myth

I was recently called a “unicorn” – a term being bandied around to describe people who have multiple skills.


Here’s a quote from http://unicornspeakeasy.com/

“Dear outsiders,

Silicon Valley calls us unicorns because they doubt the [successful, rigorous] existence of such a multidisciplinary artist-engineer, or designer-programmer, when in fact, we have always been here, we are excellent at practicing “both disciplines” in spite of the jack-of-all-trades myth, and oh yeah – we disagree with you that it’s 2 separate disciplines. Unicorns are influential innovators in ways industries at large cannot fathom. In a corporate labor division where Release Engineer is a separate role from Software Requirements Engineer, and Icon Designer is a separate role from Information Architect, unicorn disbelief is perfectly understandable. To further complicate things, many self-titled unicorns are actually just programmers with a photoshop habit, or designers who dabble with Processing. Know the difference.”

Here’s David Cole on “The Myth of the Myth of the Unicorn Designer”. He says:

“Design is already not a single skill.”


There are valid arguments on either side of the debate as to whether designers should be able to code. What do you think?

-Jeffrey