How much negentropy is Earth capable of?

Negentropy is the opposite of entropy. It refers to an increase in order, complexity, and usefulness, while entropy refers to the decay of order or the tendency for a system to become random and useless.

The universe as a whole tends toward total entropy, or heat death. This does not mean that ALL parts of the universe are becoming less ordered. There can be isolated parts of the universe that are actually increasing in order, becoming more organized and workable. The best example of this is our home: planet Earth.

A miracle of 7,000,000,000,000,000,000,000,000,000 atoms

I was walking from my bedroom to my bathroom this morning, pondering the miracle of my body purposefully moving itself from one place in the universe to another. Consider the atoms that make up my body; they are assembled in just the right way to construct a human capable of locomotion. It is a miracle. Of course, the atoms themselves are not the driving force of this capability. The driving force is a collaboration of emergent systems (molecules, tissues, electrochemical activity, signals between organs, and of course, a brain – which evolved in the context of a complex planet, with other brains in societies, and with an ever-complexifying backdrop of shared information).

It’s a curious thing: planet Earth – with its vast oceans, atmosphere, ecosystems and organisms – is determined to go against the overall tendency in the universe to decay towards the inevitable doom of heat death.

While walking the seven billion billion billion atoms of my body to the bathroom, I considered how far the negentropic urge of our planet could possibly push itself, in a universe that generally tries to ruin the party; a universe that will ultimately win. The seven billion billion billion atoms currently in my body will eventually be strewn throughout a dead universe. At that point there will be nothing that can re-assemble them into anything useful.

How not to ruin a party

The party is not over; there is ample reason to believe that Earth is not done yet. Earth generated a biosphere – the only spherical ecosystem we know of – which produced animals and humans, and most recently, post-biological systems (technology and AI). I would not entirely dismiss the notion that Earth really wants us to invent AI, and to allow it to take over – because our AI could ultimately help Earth stay healthy, and continue its negentropic party. We humans (in our old, biological manifestation) are not capable of taking care of our own planet. Left to our own primitive survival devices, we are only capable of exploiting its resources. It is only through our post-human systems that we will be able to give Earth the leverage it needs to continue its negentropic quest.

This is another way of saying that the solutions to climate change and mass extinction will require massive social movements, corporate and governmental leadership, global-scale technologies, and other trans-human-scale systems that far exceed the mental capacities of a single human brain. It is possible that the ultimate victory of AI will be to save us from an angry Mother on the verge of committing infanticide.

In the meanwhile, Earth may decide that it needs to get rid of the majority of the human population; just another reason to reconsider the urge to make babies.

But just how far can Earth’s negentropic party extend? As Earth’s most potent agents of negentropy, we humans are preparing to tap the moon, asteroids, and other planets for resources. Will we eventually be able to develop energy shields to deflect renegade asteroids? Will our robots continue to colonize the solar system? How far will Earth’s panspermia extend?

There are plenty of science fiction stories and hypothetical explorations that offer exciting and illuminating possible answers to these questions; I will not attempt to venture beyond my level of knowledge in this area. All I will say is…I think there are two possible futures for us humans:

(1) Earth will decide it has had enough of climate change, and smack us down with rising oceans and chaotic storms, causing disease, mass migrations, and war, resulting in our ultimate demise (Earth will be fine after a brief recovery period).

(2) We will evolve a new layer of the biosphere – built of technology and AI – and this will regulate our destructive instincts, thus allowing Earth to stay healthy and to keep complexifying. It will allow Earth to reconsider what it currently sees as a cancer on its skin – and to see us as agents of health.

In the case of future (2), we will lose some of our autonomy, but it just might be a comfortable existence in the long run: Earth will be better off, and it will want to keep us around. Eventually, the panspermic negentropic party will not be our own – we will be just one of the intermediate layers of emergence emanating from the planet. We will become mere organs of an extended post-Earth ecosystem that continues to defy the general entropy of the universe…at least for a few billion more years.

Trim, Mulch, Compost

It’s a rare warm evening, after a rare hot day. I just got back from a bout of night-time gardening, shirtless. The neighbors gave a few more glances than usual. Some things don’t require precise vision, like breaking twigs into smaller twigs, or forming a donut of dead leaves around the trunk of a young fruit tree. The feel of twigs snapping and the sound of leaves being shoved around is enough to go on.

Gardening, in my tiny little piece of Earth’s surface, consists mostly of trimming dead twigs and overly ambitious vines, and then breaking them down into smaller bits – like what teeth do to food before passing it on to the stomach.

My love of gardening has grown as I have learned more about two fascinating places where invisible creatures do wonderful symbiotic work: the colon (large intestine) and the rhizosphere (the region near the roots of plants). In both of these regions, our invisible friends work busily to transform by-products into valuable nutrition, and then they share it.

My small front yard has become a Zen garden – I am both mindless servant and mindful observer. I wonder if I could apply my gardening approach to other aspects of my life.

The more I understand the things that affect the world swirling around me, the better I will be at knowing where to apply simple, mindless actions that have healthy outcomes. Where to trim a shrub to allow more sunlight in; where to use those trimmings as mulch to keep moisture underground; where to add compost to increase biodiversity and nutrition.

If I never succeed in transferring my gardening skills to making my life flow more naturally, at least I can say that my gardening skills have improved: my thumbs are greener, and I have learned that most of the work is done underground – invisible to the eye.

When Earth Discovered Water

When Earth discovered water, life became possible.

(Image: Earth from space)

“Discover”? I was going to use the word “invent”. Then I remembered that water can be found on other planets, comets, asteroids…and the Moon. Water is not unique to Earth. It may be more accurate to say that Earth invented a way to preserve and manage its water by evolving the biosphere. The biosphere harnesses, protects, filters, and enlivens the water that covers most of the planet.

Mars did not succeed in preserving its water.

The Gaia Hypothesis blows open the perspective of what life is. The self-adaptive, self-regulating spherical ecosystem that we call Earth is more than just a collection of interacting organisms. It also relies on the dynamics of storms, oceans, tectonic plates, and the balance of gases in the atmosphere.

I began thinking about this as I was pouring boiling water into a coffee press. A beautiful stream, sparkling and transparent, visible only because it reflects the darks and lights of the surrounding environment.

What an amazing fluid. A true friend of gravity and heat, collaborating with them to form a network of clouds, rain, snow, glaciers, streams, rivers, and oceans.

There’s a deep reason why we can generate so much poetry about water.

“Planetary scientists are quick to stress that it’s not just water that’s indispensable for life, but liquid water. The distinction is key.”

Water made life on Earth possible. Some people go as far as to say that water is alive.

On the Origins of Earth’s Water

http://www.iflscience.com/physics/origin-earths-water-discovered-0

Did life on Earth begin with replicating molecules? Depends on how you define life. The backdrop for these replicating molecules was already rich and dynamic, with an interplay of water (in all its forms), gravity, atmospheric and ocean chemistry, electric storms…

…and the rhythms of day and night, winter and summer, which form a backbeat. That backbeat drives the polyrhythmic dance that pulls water through its many forms – and brings us into being.

(Image: the early Earth-Moon system, closeup)

The Body Language of a Happy Lizard

I love watching my dog greet us when we come home after being out of the house for several hours. His body language displays a mix of running in circles, panting, bobbing his head up and down, wagging his tail vigorously, wagging his body vigorously, yapping, yipping, barking, doing the down-dog, shaking off, and finally, jumping into our laps. All of this activity is followed by a lot of licking.

There was a time not long ago when people routinely asked, “do animals have intelligence?” and “do animals have emotions?” We realize now that these are useless, pointless questions. People who are still asking them seriously need to go to a doctor to get their mirror neurons polished.

Deconstructing Intelligence

The change of heart about animal intelligence is not just because of results from animal research: it’s also due to a softening of the definition of intelligence. People now discuss artificial intelligence at the dinner table. We often hear ourselves saying things like “your computer wants you to change the filename”, or “self-driving cars in the future will have to be very intelligent”.

The concept of intelligence is working its way into so many non-human realms, both technological and animal. We talk about the “intelligence of nature”, the “wisdom of crowds”, and other attributions of intelligence that reside in places other than individual human skulls.

Can a Lizard Actually Be “Happy”? 

I want to say a few things about emotions.

The problem with asking questions like “can a lizard be happy?” lies in their dependency on words like “happy”, “sad”, and “jealous”. It is futile to try to fit a complex dynamic of brain chemistry, neural firing, and semiosis between interacting animals into a box with a label on it. Researchers doing work on animal and human emotion should avoid using words for emotions. Just the idea of trying to capture something as visceral, somatic, and, um…wordless as an emotion in a single word is counterproductive. Can you even claim that you are feeling one emotion at a time? No: emotions ebb and flow, they overlap, they are fluid – ephemeral. Like memory itself, as soon as you start to study your own emotions, they change.

And besides, words for emotions differ among languages. English may be the official language of science, but that does not mean its words for emotions are more accurate.

Alas…since I’m using words to write this article (!) I have to eat my words. I guess I would have to give the following answer to the question, “can a lizard be happy?”

Yes. Kind of.

The thing is: it’s not as easy to detect a happy lizard as it is to detect a happy dog. Let’s compare these animals:

HUMAN        DOG         COW           BIRD         LIZARD         WORM

This list is roughly ordered by how similar the animal is to humans in terms of intelligent body language. Dogs share a great deal of the body language that we associate with emotions. Dogs are especially good at expressing shame. (Do cats feel less shame than dogs? They don’t appear to show it as much as dogs, but we shouldn’t immediately jump to conclusions because we can’t see it in terms of familiar body language signals).

On the surface, a cow may appear placid and relaxed…in that characteristic bovine way. But an experienced veterinarian or rancher can easily detect a stressed-out cow. As we move farther away from humans in this list of animals, the body language cues become harder and harder to detect. In the simpler animals, do we even know if these emotions exist at all? Again…that may be the wrong question to ask.

(Image: a happy worm)

It would be wrong of me to assume that there are no emotional signals being generated by an insect, just because I can’t see them.

(Image: ants communicating via touch)

Ant body language is just not something I am familiar with. The more foreign the animal, the more difficult it is for us humans to attribute “intelligence” or “emotion” to it.

Zoosemiotics may help to disambiguate these problematic definitions, and direct our gaze where it may be more productive.

I would conclude that we need to continue to remove those anthropocentric biases that have gotten in the way of science throughout our history.

When we have adequately removed those biases regarding intelligence and emotion, we may more easily see the rich signaling that goes on between all animals on this planet. We will begin to see more clearly a kind of super-intelligence that permeates the biosphere. Our paltry words will step aside to reveal a bigger vista.

I have never taken LSD or ayahuasca, but I’ve heard from those who have that they’ve seen this super-intelligence. Perhaps these chemicals are one way of removing that bias, and taking a peek at that which binds us with all of nature.

But short of using chemicals…I guess some good unbiased science, an open mind, and a lot of compassion for our non-human friends can help us see farther – to see beyond our own body language.

The Tail Wagging the Brain

 

(This is chapter 8 of Virtual Body Language. Extra images have been added for this post. An earlier variation of this blog post was published here.)

The brain can rewire its neurons, growing new connections, well into old age. Marked recovery is often possible even in stroke victims, porn addicts, and obsessive-compulsive sufferers, several years after the onset of their conditions. Neural plasticity is now known to exist in adults, not just in babies and young children going through the critical learning stages of life. This subject has been popularized in books like The Brain That Changes Itself (Doidge 2007).

Like language grammar, music theory, and dance, visual language can be developed throughout life. As visual stimuli are taken in repeatedly, our brains reinforce neural networks for recognizing and interpreting them.

Some aspects of visual language are easily learned, and some may even be instinctual. Biophilia is the human aesthetic appreciation of biological form and all living systems (Kellert and Wilson 1993). The flip side of aesthetic biophilia is a readiness to learn disgust (of feces and certain smells) or fear (of large insects or snakes, for instance). Easy recognition of the shapes of branching trees, puffy clouds, flowing water, plump fruit, and subtle differences in the shades of green and red has laid a foundation for human aesthetic response in all things, natural or manufactured. Our biophilia is the foundation of much of our visual design, even in areas as urban as abstract painting, web page layout, and office furniture design.

The Face

We all have visual language skills that make us especially sensitive to eyebrow motion, eye contact, head orientation, and mouth shape. Our sensitivity to facial expression may even influence other, more abstract forms of visual language, making us responsive to some visual signals more than others—because those face-specific sensitivities gave us an evolutionary advantage as a species so dependent on social signaling. This may have been reinforced by sexual selection.

Even visual features as small as the pupil of an eye contribute to the emotional reading of a face—usually unconsciously. Perhaps the evolved sensitivity to small black dots as information-givers contributed to our subsequent invention of small-yet-critical elements in typographical alphabets.

(Image: the evolution of the smiley)

We see faces in clouds, trees, and spaghetti. Donald Norman wrote a book called Turn Signals are the Facial Expressions of Automobiles (1992). Recent studies using fMRI scans show that car aficionados use the same brain modules to recognize cars that people in general use to recognize faces (Gauthier et al. 2003).

Something about the face appears to be baked into the brain.

Dog Smiling

We smile with our faces. Dogs smile with their tails. The next time you see a Doberman Pinscher with his ears clipped and his tail removed (docked), imagine yourself with your smile muscles frozen dead and your eyebrows shaved off.

Like humans perceiving human smiles, it is likely that the canine brain has neural wiring highly tuned to recognize and process a certain kind of visual energy: an oscillating motion of a linear shape occurring near the backside of another dog. The photoreceptors go to work doing edge detection and motion detection so they can send efficient signals to the brain for rapid space-time pattern recognition.
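
If I were to caricature that kind of space-time pattern recognition in code, it might look like the toy sketch below (purely illustrative, and in no way a model of canine vision; the frame rate and the 2-to-8 Hz “wag band” are assumptions I made up): treat the tail tip as a stream of horizontal positions and ask whether its dominant frequency falls in a plausible range.

```python
# A toy sketch (not biology): detect a "wag" as a periodic left-right
# oscillation in a stream of tail-tip x positions.
import numpy as np

def looks_like_wagging(x_positions, fps=30.0, band=(2.0, 8.0)):
    """Return True if the dominant frequency of the horizontal
    tail motion falls inside the assumed wag band."""
    x = np.asarray(x_positions, dtype=float)
    x = x - x.mean()                    # remove the tail's resting offset
    spectrum = np.abs(np.fft.rfft(x))   # energy per frequency bin
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    dominant = freqs[spectrum[1:].argmax() + 1]   # skip the DC bin
    return band[0] <= dominant <= band[1]

# Two seconds of a synthetic 4 Hz wag, sampled at 30 frames per second:
t = np.arange(60) / 30.0
print(looks_like_wagging(np.sin(2 * np.pi * 4.0 * t)))   # True
```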

Tail wagging is apparently a learned behavior: puppies don’t wag until they are one or two months old. Recent findings show that a wagger will favor the right side when feeling fundamentally positive about something, and the left side when there are negative overtones in its feelings (Quaranta et al. 2007). This is not something humans commonly knew until recently. Could it be that left-right wagging asymmetry has always been a subtle part of canine-to-canine body language?

I often watch intently as my terrier mix, Higgs, encounters a new dog in the park whom he has never met. The two dogs approach each other—often moving very slowly and cautiously. If the other dog has its tail down between its legs and its ears and head held down, and is frequently glancing to the side to avert eye contact, this generally means it is afraid, shy, or intimidated. If its tail is sticking straight up (and not wagging) and if its ears are perked up and the hair on the back is standing on end, it could mean trouble. But if the new stranger is wagging its tail, this is a pretty good sign that things are going to be just fine. Then a new phase of dog body language takes over. If Higgs is in a playful mood, he’ll start a series of quick motions, squatting his chest down to assume the “play bow”, jumping and stopping suddenly, and watching the other dog closely (probably to keep an eye on the tail, among other things). If Higgs is successful, the other dog will accept his invitation, and they will start running around the park, chasing each other, and having a grand old time. It is such a joy to watch dogs at play.

dogs_color

So here is a question: assuming dogs have an instinctive—or easily-learned—ability to process the body language of other dogs, like wagging tails, can the same occur in humans for processing canine body language? Did I have to learn to read canine body language from scratch, or was I born with this ability? I am, after all, a member of a species that has been co-evolving with canines for more than ten thousand years, sometimes in deeply symbiotic relationships. So perhaps I, Homo Sapiens, already have a biophilic predisposition to canine body language.

Recent studies using dogs as therapy for children with autism have shown remarkable results. These children become more socially and emotionally connected. According to autistic author Temple Grandin, this is because animals, like people with autism, do not experience ambivalent emotion; their emotions are pure and direct (Grandin, Johnson 2005). Perhaps canines helped to modulate, buffer, and filter human emotions throughout our symbiotic evolution. They may have helped to offset a tendency towards neurosis and cognitive chaos.

Vestigial Response

Having a dog in the family provides a continual reminder of my affiliation with canines, not just as companions, but as Earthly relatives. On a few rare occasions while sitting quietly at home working on something, I remember hearing an unfamiliar noise somewhere in the house. But before I even knew consciously that I was hearing anything, I felt a vague tug at my ears; it was not an entirely comfortable feeling. This may have happened at other times, but I probably didn’t notice.

I later learned that this is a vestigial response inherited from our ancestors.

The ability for some people to wiggle their ears is due to muscles that are basically useless in humans, but were once used by our ancestors to aim their ears toward a sound. Remembering the feeling of that tug on my ears gives me a feeling of connection to my ancestors’ physical experiences.

In a moment we will consider how these primal vestiges might be coming back into modern currency. But I’m still in a meandering, storytelling, pipe-smoking kind of mood, so hang with me just a bit longer and then we’ll get back to the subject of avatars.

This vestigial response is called ear perking. It is shared by many of our living mammalian relatives, including cats. I remember once hanging out and playing mind-games with a cat. I was sitting on a couch, and the cat was sitting in the middle of the floor, looking away, pretending to ignore me (but of course—it’s a cat).

I was trying to get the cat to look at me or to acknowledge me in some way. I called its name, I hissed, clicked my tongue, and clapped my hands. Nothing. I scratched the upholstery on the couch. Nothing. Then I tore a small piece of paper from my notebook, crumpled it into a tiny ball, and discreetly tossed it onto the floor, behind the cat, outside of its field of view. The tiny sound of the crumpled ball of paper falling to the floor caused one of the cat’s ears to swivel around and aim towards the sound. I jumped up and yelled, “Hah—got you!” The cat’s higher brain quickly tugged at its ear, pulling it back into place, and the cat continued to serenely look off into the distance, as if nothing had ever happened.

Cat body language is harder to read than dog body language. Perhaps that’s by design (I mean…evolution). Dogs don’t seem to have the same talents of reserve and constraint. Dog expressions just flop out in front of you. And their vocabulary is quite rich. Most world languages have several dog-related words and phrases. We easily learn to recognize the body language of dogs, and even less familiar social animals. Certain forms of body language are processed by the brain more easily than others. A baby learns to respond to its mother’s smile very soon after birth. Learning to read a dog’s tail motions may not be so instinctive, but the plastic brain of Homo Sapiens is ready to adapt. Learning to read the signs that your pet turtle is hungry would be much harder (I assume). At the extreme: learning to read the signals from an enemy character in a complicated computer game may be a skill only for a select few elite players.

This I’m sure of: if we had tails, we’d be wagging them for our dogs when they are being good (and for that matter, we’d also be wagging them for each other :D ). It would not be easy to add a tail to the base of a human spine and wire up the nerves and muscles. But if it could be done, our brains would easily and happily adapt, employing some appropriate system of neurons to the purpose of wagging the tail—perhaps re-adapting a system of neurons normally dedicated to doing The Twist. While it may not be easy to adapt our bodies to acquire such organs of expression, our brains can easily adapt. And that’s where avatars come in.

Furries

The Furry Species has proliferated in Second Life. Furry Fandom is a subculture featuring fictional anthropomorphic animal characters with human personalities and human-like attributes. Furry fandom already had a virtual head start—people were role-playing with their online “fursonas” before Second Life existed (witness FurryMUCK, a user-extendable online text-based role-playing game started in 1990). Furries have animal heads, tails, and other such features.

While many Second Life avatars have true animal forms (such as the “ferals” and the “tinies”), many are anthropomorphic: walking upright with human postures and gaits. This anthropomorphism has many creative expressions, ranging from quaint and cute to mythical, scary, and kinky.

Furry anthropomorphism in Second Life is appropriate in one sense: there is no way to directly change the Second Life avatar into a non-human form. The underlying avatar skeleton software is based solely on the upright-walking human form. I know this fact in an intimate way, because I spent several months digging into the code in an attempt to reconstitute the avatar skeleton to allow open-ended morphologies (quadrupeds, etc.). This proved to be difficult. And no surprise: the humanoid avatar morphology code, and all its associated animations—procedural and otherwise—had been in place for several years. It serves a similar purpose to a group of genes in our DNA called Hox genes.

They are baked deep into our genetic structure, and are critical to the formation of a body plan: Hox genes specify the overall structure of the body in embryonic development, when the segmentation and placement of limbs and other body parts are first established. After struggling to override the effects of the Second Life “avatar Hox genes”, I concluded that I could not do this on my own. It would have to be a strategic surgical process performed by many core Linden Lab engineers. Evolution in virtual worlds happens fast. It’s hard to go back to an earlier stage and start over.

Despite the anthropomorphism of the Second Life avatar (or perhaps because of this constraint), the Linden Scripting Language (LSL) and other customizing abilities have provided a means for some remarkably creative workarounds, including packing the avatar geometry into a small compact form called a “meatball”, and then surrounding it with a custom 3D object, such as a robot or a dragon or some abstract form, complete with responsive animations and particle system effects.

Perhaps furry residents prefer having this constraint of anthropomorphism; it fits with their nature as hybrids. Some furry residents have customized tail animations and use them to express mood and intent. I wouldn’t be surprised if those who have been furries for many years have dreams in which they are living and expressing in their Furry bodies—communicating with tails, ears, and all. Most would agree that customizing a Furry with a wagging-tail animation is a lot easier than undergoing surgery to attach a physical tail.

But as far as the brain is concerned, it may not make a difference.

Where Does my Virtual Body Live?

Virtual reality is not manifest in computer chips, computer screens, keyboards, mice, joysticks, or head-mounted displays. Nor does it live in running software. Virtual reality manifests in the brain, and in the collective brains of societies. The blurring of real and virtual experiences is a theme that Jeremy Bailenson and his team at Stanford’s Virtual Human Interaction Lab have been researching.

Virtual environments are now being used for research in social sciences, as well as for the treatment of many brain disorders. An amputee who has suffered from phantom pain for years can be cured of the pain through a disciplined and focused regimen of rewiring his or her body image to effectively “amputate” the phantom limb. Immersive virtual reality has been used recently to treat this problem (Murray et al. 2007). Previous techniques using physical mirrors have been replaced with sophisticated simulations that allow more controlled settings and adjustments. When the brain’s body image gets tweaked away from reality too far, psychological problems ensue, such as anorexia. Having so many super-thin sex-symbol avatars may not be helping the situation. On the other hand, virtual reality is being used in research to treat this and other body image-related disorders.

A creative kind of short-term body image shifting is nothing new to animators, actors, and puppeteers, who routinely tweak their own body images in order to express themselves like ostriches or hummingbirds or dogs. When I was working for Brad deGraf, one of the early pioneers in real-time character animation, I would frequent the offices of Protozoa, a game company he founded in the mid-90s.

I was hired to develop a tool for modeling 3D trees which were to be used in a game called “Squeezils”, featuring animals that scurry among the branches. I remember the first time I went to Protozoa. I was waiting in the lobby to meet Brad, and I noticed a video monitor at the other end of the room. An animation was playing that had a couple of 3D characters milling about. One of them was a crab-like cartoon character with huge claws, and the other was a Big Bird-like character with a very long neck. After gazing at these characters for a while, it occurred to me that both of these characters were moving in coordination—as if they were both being puppeteered by the same person. My curiosity got the best of me and I started wandering around the offices, looking for the puppet master. I peeked around the corner. In the next room were a couple of motion capture experts, testing their hardware. On the stage was a guy wearing motion capture gear. When his arms moved, so did the huge claws of the crab—and so did the wimpy wings of the tall bird. When his head moved, the eyes of the crab looked around, and the bird’s head moved around on top of its long neck.

Brad deGraf and puppeteer/animator Emre Yilmaz call this “…performance animation…a new kind of jazz. Also known as digital puppetry…it brings characters to life, i.e. ‘animates’ them, through real-time control of the three-dimensional computer renderings, enabled by fast graphics computers, live motion sampling, and smart software” (deGraf and Yilmaz 1999). When human-sourced motion is applied to exaggerated cartoon forms, the human imagination is stimulated: “motion capture” escapes its negative association with droll, unimaginative literal recording. It inspires the human controller to think more like a puppeteer than an actor. Puppeteering is more out-of-body than acting. Anything can become a puppet (a sock, a salt shaker, a rubber hose).

deGraf and Yilmaz make this out-of-body transference possible by “re-proportioning” the data stream from the human controller to fit non-human anatomies.
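
To make the idea concrete, here is a minimal sketch of re-proportioning (my own illustration, not deGraf and Yilmaz’s actual system; the channel names and gains are invented): each channel of the human data stream is redirected and scaled onto a channel of the puppet.

```python
# A sketch of "re-proportioning": remapping a human mocap stream onto a
# non-human rig. All channel names and gains here are hypothetical.

REMAP = {
    # human channel  ->  (puppet channel,    gain)
    "arm_swing":         ("crab_claw_open",   3.0),   # small arm moves, huge claws
    "head_yaw":          ("bird_neck_yaw",    1.5),
    "head_pitch":        ("bird_neck_pitch",  1.5),
}

def reproportion(human_frame):
    """Map one frame of human joint angles (radians) onto the puppet,
    exaggerating or damping each channel as configured above."""
    puppet_frame = {}
    for channel, angle in human_frame.items():
        target, gain = REMAP.get(channel, (channel, 1.0))
        puppet_frame[target] = angle * gain
    return puppet_frame

print(reproportion({"arm_swing": 0.2, "head_yaw": -0.1}))
```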

Research by Nick Yee found that people’s behaviors change as a result of having different virtual representations, a phenomenon he calls the Proteus Effect (Yee 2007). Artist Micha Cardenas spent 365 hours continuously as a dragon in Second Life. She employed a Vicon motion capture system to translate her motions into the virtual world. This project, called “Becoming Dragon”, explored the limits of body modification, “challenging” the one-year transition requirement that transgender people face before gender confirmation surgery. Cardenas told me, “The dragon as a figure of the shapeshifter was important to me to help consider the idea of permanent transition, rejecting any simple conception of identity as tied to a single body at a single moment, and instead reflecting on the process of learning a new way of moving, walking, talking and how that breaks down any possible concept of an original or natural state for the body to be in” (Cardenas 2010).

With this performance, Cardenas wanted to explore not only the issues surrounding gender transformation, but the larger question of how we experience our bodies, and what it means to inhabit the body of another being—real or simulated. During this 365-hour art-performance, the lines between real and virtual progressively blurred for Cardenas, and she found herself “thinking” in the body of a dragon. What interests me is this: how did Micha’s brain adapt to be able to “think” in a different body? What happened to her body image?

The Homunculus

Dr. Wilder Penfield was operating on a patient’s brain to relieve symptoms of epilepsy. In the process he made a remarkable discovery: each part of the body is associated with a specific region in the brain. He had discovered the homunculus: a map of the body in the brain. The homunculus is an invisible cartoon character. It can only be “seen” by carefully probing parts of it and asking the patient what he or she is feeling, or by watching different parts of the body twitch in response. Most interesting is the fact that some parts of the homunculus are much larger than others—totally out of proportion with normal human anatomy. There are two primary “homunculi” (sensory and motor) and their distorted proportions correspond to the relative differences in how much of the brain is dedicated to the different regions.

(Image: the sensory homunculus)

For instance, eyes, lips, tongue, and hands are proportionately large, whereas the skull and thighs are proportionately small. I would not want to encounter a homunculus while walking down the street or hanging out at a party—homunculi are not pretty—in fact, I find them quite frightening. Indeed they have cropped up in haunting ways throughout the history of literature, science, and philosophy.

But happily, for purposes of my research, any homunculus I encounter would be very good at nonverbal communication, because eyes, mouth, and hands are major expressive organs of the body. As a general rule, the parts of our body that receive the most information are also the parts that give the most. What would a “concert pianist homunculus” look like? Huge fingers. How about a “soccer player homunculus”? Gargantuan legs and tiny arms.

The Homuncular Avatar

Avatar code etches embodiment into virtual space. A “communication homunculus” was sitting on my shoulder while I was working at There.com and arguing with some computer graphics guys about how to engineer the avatar. Chuck Clanton and I were both advocates for dedicating more polygons and procedural animation capability to the avatar’s hands and face. But polygons are graphics-intensive (at least they were back then), and procedural animation takes a lot of engineering work. Consider this: in order to properly animate two articulated avatar hands, you need at least twenty extra joints, on top of the approximately twenty joints used in a typical avatar animation skeleton. That’s nearly twice as many joints.

The argument for full hand articulation was as follows: like the proportions of cortex dedicated to these communication-intensive areas of the body, socially-oriented avatars should have ample resources dedicated to face and hands.

When the developers of Traveler were faced with the problem of how to squeeze the most out of the few polygons they could render on the screen at any given time, they decided to just make avatars as floating heads—because their world centered on vocal communication (telephone meets avatar). The developers of ActiveWorlds, which had a similar polygonal predicament, chose whole avatar bodies (and they were quite clunky by today’s standards).

These kinds of choices determine where and how human expression will manifest.

Non-Human Avatars

Avatar body language does not have to be a direct prosthetic to our corporeal expression. It can extend beyond the human form; this theme has been a part of avatar lore since the very beginning. What are the implications for a non-human body language alphabet? It means that our body language alphabet can (and I would claim already has begun to) include semantic units, attributes, descriptors, and grammars that encompass a superset of human form and behavior.

The illustration below shows pentadactyl morphology of various vertebrate limbs. This has implications for a common underlying code used to express meta-human forms and motions.

(Illustration: pentadactyl morphology of various vertebrate limbs)

A set of parameters (bone lengths, angle offsets, motor control attributes, etc.) can be specified, along with gross morphological settings (e.g., four limbs and a tail; six limbs and no head; no limbs and no head). Once these parameters are provided in the system to accommodate the various locomotion behaviors and forms of body language, they can be manipulated as genes, using interactive evolution interfaces or genetic algorithms, or simply tweaked directly in a customization interface.
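
As a rough sketch of how such a gene-like parameter set might be organized (the field names are hypothetical, not taken from any real avatar system):

```python
# A flat "genome" of morphology parameters that can be tweaked directly
# or mutated by a genetic algorithm. Field names are invented.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MorphGenome:
    limb_count: int = 4              # gross morphological setting
    has_tail: bool = True
    tail_segments: int = 8
    bone_length_scale: float = 1.0   # uniform scale on limb bones
    wag_motor_gain: float = 1.0      # how energetically the tail moves

def mutate(g, rate=0.1):
    """Jitter the continuous genes; occasionally flip a discrete one."""
    child = replace(
        g,
        bone_length_scale=g.bone_length_scale * random.uniform(1 - rate, 1 + rate),
        wag_motor_gain=g.wag_motor_gain * random.uniform(1 - rate, 1 + rate),
    )
    if random.random() < rate:
        child = replace(child, limb_count=random.choice([0, 2, 4, 6]))
    return child

# One generation of variations, ready for interactive selection:
population = [mutate(MorphGenome()) for _ in range(10)]
```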

But this morphological space of variation doesn’t need to stop at the vertebrates. We’ve seen avatars in the form of fishes, floating eyeballs, cartoon characters, abstract designs—you name it. Artists, in the spirit of performance artist Stelarc, are exploring the expressive possibilities of avatars and remote embodiment. Micha Cardenas, Max Moswitzer, Jeremy Owen Turner and others articulate the very aesthetics of embodiment and avatarhood—the entire possible expression-space. What are the possibilities of having a visual (or even aural) manifestation of yourself in an alternate reality? As I mentioned before, my Second Life avatar is a non-humanoid creature, consisting of a cube with tentacles.

(Illustration: my cube-and-tentacle avatar, with JoannaTrail Blazer’s avatar alongside)

On each side of the cube are fractals based on an image of my face. This cube floats in space where my head would normally be. Attached to the bottom of the cube are several green tentacles that hang like those of a jellyfish. This avatar was built by JoannaTrail Blazer, based on my design. In the illustration, Joanna’s avatar is shown next to mine.

I chose a non-human form for a few reasons: One reason is that I prefer to avoid the uncanny valley, and will go to extremes in avoiding droll realism, employing instead the visual tools of abstraction and symbolism. By not trying to replicate the image of a real human, I can sidestep the problem of my avatar “not looking like me” or “not looking right”. My non-human avatar also allows me to explore the realm of non-human body language. On the one hand, I can express yes and no by triggering animations that cause the cube to oscillate like a normal head. But I could also express happiness or excitement by triggering an animation that causes the tentacles to flair out and oscillate, or to roll sensually like an inverted bouquet of cat tails. Negative emotions could be represented by causing the tentacles to droop limply (reinforced by having the cube-head slump downward). While the cube-head mimics normal human head motions, the tentacles do not correspond to any body language that I am physically capable of generating. They could tap other sources, like animal movement, and basic concepts like energy and gravity, expanding my capacity for expression.

Virtual Dogs

I want to come back to the topic of canine body language now, but this time, I’d like to discuss some of the ways that sheer dogginess in the gestalt can be expressed in games and virtual worlds. The canine species has made many appearances throughout the history of animation research, games, and virtual worlds. Rob Fulop, of the ‘90s game company PF Magic, created a series of games based on characters made of spheres, including the game “Dogz”. Bruce Blumberg of the MIT Media Lab developed a virtual interactive dog AI, and is also an active dog trainer. Dogs are a great subject for doing AI—they are easier to simulate than humans, and they are highly emotional, interactive, and engaging.

While prototyping There.com, Will and I developed a dog that would chase a virtual Frisbee tossed by your avatar (involving a simultaneous mouse motion to throw the Frisbee and a tap of the space key to release it at the right time). Since I was interested in the “essence of dog energy”, I decided not to focus on the graphical representation of the dog so much as the motion, sound, and overall behavior of the dog. So, for this prototype, the dog had no legs: just a sphere for a body and a sphere for a head (each rendered as toon-shaded circles—flat coloring with an outline). These spheres could move slightly relative to each other, so that the head had a little bounce to it when the dog jumped.

The eyes were rendered as two black cartoon dots, and they would disappear when the dog blinked. Two ears, a tail, and a tongue were programmed, because they are expressive components. The ears were animated using forward dynamics, so they flopped around when the dog moved, and hung downward slightly from the force of gravity. I programmed some simple logic to make an oscillating, “panting” force in the tongue which would increase in frequency and force when the dog was especially active, tired, or nervous. I also programmed “ear perkiness” which would cause the ears to aim upwards (still with a little bit of droop) whenever a nearby avatar produced a chat that included the dog’s name. I programmed the dog to change its mood when a nearby avatar produced the chats “good dog” or “bad dog”. And this was just the beginning. Later, I added more AI allowing the dog to learn to recognize chat words, and to bond to certain avatars.
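
Something in the spirit of the following sketch captures the panting and ear-perking logic (this is not the original There.com code; the constants and names are invented for illustration):

```python
# A sketch of two behaviors described above: an oscillating "pant"
# whose frequency and amplitude rise with activity, and ear-perking
# when a nearby chat line contains the dog's name.
import math

class ToonDog:
    def __init__(self, name):
        self.name = name.lower()
        self.activity = 0.3     # 0 = resting, 1 = frantic
        self.ear_perk = 0.0     # 0 = floppy, 1 = fully perked
        self.time = 0.0

    def hear_chat(self, text):
        """Perk the ears when the dog's name shows up in chat."""
        if self.name in text.lower():
            self.ear_perk = 1.0

    def update(self, dt):
        """Advance the simulation; return the tongue's pant offset."""
        self.time += dt
        self.ear_perk = max(0.0, self.ear_perk - 0.5 * dt)   # ears slowly relax
        pant_hz = 1.0 + 4.0 * self.activity
        return self.activity * math.sin(2 * math.pi * pant_hz * self.time)

dog = ToonDog("Rex")
dog.hear_chat("good dog, Rex!")
tongue_offset = dog.update(1.0 / 30.0)
```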

Despite the lack of legs and its utterly crude rendering, this dog elicited remarkable puppy-like responses in the people who watched it or played with it. The implication from this is that what makes a dog a dog is not merely the way it looks, but the way it acts—its essence as a distinctly canine space-time energy event. Recall the exaggerated proportions of the human communication homunculus. For this dog, ears, tail, and tongue constituted the vast majority of computational processing. That was on purpose; this dog was intended as a communicator above all else. Consider the visual language of tail wagging that I brought up earlier, a visual language which humans (especially dog-owners) have incorporated into their unconscious vocabularies. What lessons might we take from the subject of wag semiotics, as applied to the art and science of wagging on the internet?

Tail Wagging on the Internet

In the There.com dog, the tail was used as a critical indicator of its mood at any given moment, rising when the dog was alert or aroused, drooping when sad or afraid, and wagging when happy. Take wagging as an example. The body language information packet for tail wagging consisted of a Boolean value sent from the dog’s Brain to the dog’s Animated Body, representing simply “start wagging” or “stop wagging”. It is important to note that the dog’s Animated Body is constituted on clients (computers sitting in front of users), whereas the dog’s Brain resides on servers (arrays of big honkin’ computers in a data center that manage the “shared reality” of all users). The reason for this mind/body separation is to make sure that body language messages (as well as the dog’s overall emotional state, physical location, orientation, and other aspects of its dynamic state) are truthfully conveyed to all users.

Clients are subject to animation frame rates lagging, internet messages dropping out, and other issues. All users whose avatars are hanging out in a part of the virtual world where the dog is hanging out need to see the same behavior; they are all “seeing the same dog”—virtually-speaking.

Screen Shot 2014-12-02 at 11.50.24 PM

It would be unnecessary (and expensive in terms of internet traffic) for the server to send individual wags several times a second to all the clients. Each client’s Animated Body code is perfectly capable of performing this repetitive animation. And, because of different rendering speeds among various clients, lag times, etc., the wagging tail on your client might be moving left-right-left-right, while the wagging tail on my client is moving right-left-right-left. In other words, they might be out of phase or wagging at slightly different speeds. These slight differences have almost no effect on the reading of the wag. “I am wagging my tail” is the point. That’s a Boolean message: one bit. The reason I am laboring over this point harkens back to the chapter on a Body Language Alphabet: a data-compressed message, easy to zip through the internet, is efficient for helping virtual worlds run smoothly. It is also inspired by biosemiotics: Mother Nature’s efficient message-passing protocol.
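
A minimal sketch of that one-bit protocol might look like this (the message format is hypothetical, not There.com’s actual wire format): the server serializes a Boolean only when the state changes, and each client advances its own wag cycle at its own frame rate.

```python
# Server sends a state change once; each client animates locally.
import json
import math

def server_wag_message(dog_id, wagging):
    """Serialize a state change: one Boolean, not per-frame tail poses."""
    return json.dumps({"dog": dog_id, "wagging": wagging})

class ClientTail:
    def __init__(self):
        self.wagging = False
        self.phase = 0.0   # clients may drift out of phase; the reading is the same

    def on_message(self, payload):
        self.wagging = json.loads(payload)["wagging"]

    def tail_angle(self, dt, wag_hz=3.0):
        """Advance the local animation; return the tail angle in radians."""
        if not self.wagging:
            return 0.0
        self.phase = (self.phase + 2 * math.pi * wag_hz * dt) % (2 * math.pi)
        return 0.5 * math.sin(self.phase)

tail = ClientTail()
tail.on_message(server_wag_message("higgs", True))
angle = tail.tail_angle(1.0 / 60.0)
```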

On Cuttlefish and Dolphins

The homunculus of Homo Sapiens might evolve into a more plastic form—maybe not on a genetic/species level, but at least during the lifetimes of individual brains, assisted by the scaffolding of culture and virtual world technology. This plasticity could approach strange proportions, even as our physical bodies remain roughly the same. As we spend more of our time socializing and interacting on the internet as part of the program to travel less to reduce greenhouse gases, our embodiment will naturally take on the forms appropriate to the virtual spaces we occupy. And these spaces will not necessarily mimic the spaces of the real world, nor will our embodiments always look and feel like our own bodies. And with these new virtual embodiments will come new layers of body language. Jaron Lanier uses the example of cephalopods as species capable of animated texture mapping and physical morphing used for communication and camouflage—feats that are outside the range of physical human expression.

In reference to avatars, Lanier says, “The problem is that in order to morph, humans must design avatars in laborious detail in advance. Our software tools are not yet flexible enough to enable us, in virtual reality, to think ourselves into different forms. Why would we want to? Consider the existing benefits of our ability to create sounds with our mouths. We can make new noises and mimic existing ones, spontaneously and instantaneously. But when it comes to visual communication, we are hamstrung…We can learn to draw and paint, or use computer-graphics design software. But we cannot generate images at the speed with which we can imagine them” (Lanier 2006).

Once we have developed the various non-humanoid puppeteering interfaces that would enable Lanier’s vision, we will begin to invent new visual languages for real-time communication. Lanier believes that in the future, humans will be truly “multihomuncular”.

Researchers from Aberdeen University and the Polytechnic University of Catalonia found that dolphins use discrete units of body language as they swim together near the surface of water. They observed efficiency in these signals, similar to what occurs in frequently-used words in human verbal language (Ferrer i Cancho and Lusseau 2009). As human natural language goes online, and as our body language gets processed, data-compressed, and alphabetized for efficient traversal over the internet, we may start to see more patterns of our embodied language that resemble those created by dolphins, and many other social species besides. The background communicative buzz of the biosphere may start to make more sense in the process of whittling our own communicative energy down to its essential features, and being able to analyze it digitally. With a universal body language alphabet, we might someday be able to animate our skin like cephalopods, or speak “dolphin”, using our tails, as we lope across the virtual waves.
