Thoughts on Biological Chemistry and Emergence

My dog was licking my face this morning – as he often does in the morning. Many people refuse to let dogs lick their faces. Understandable. I am one of the apparently few people who allow it. There are a few exceptions when I don’t like it, such as right after my dog has eaten stinky dog food. Otherwise, he is a very healthy, tidy and gentle (and smallish) dog. His breath is barely noticeable.

Dogs lick people’s faces for a number of possible reasons; these are nicely explained in several articles, such as:

https://pets.thenest.com/dogs-lick-humans-faces-5892.html

https://shopus.furbo.com/blogs/knowledge/why-does-dog-lick-my-face

But the proposed reason that most intrigues me is that it is a form of chemical communication. Dogs have such a sophisticated sense of smell that they can actually gather information (dog-like information) about people they are licking. Licking can also have a calming effect on licker and lickee (if you are not a fan of dogs licking your face you may disagree, so just pretend that you’re a dog for a moment).

According to this article:

“Scientists believe that the major source of people’s positive reactions to pets comes from oxytocin, a hormone whose many functions include stimulating social bonding, relaxation and trust, and easing stress. Research has shown that when humans interact with dogs, oxytocin levels increase in both species.”

Even more fascinating is a study that indicates that interacting with dogs can have health benefits for humans:

Beneficial Dog Bacteria Up-Regulate Oxytocin and Lower Risk of Obesity

So, having a dog can reduce obesity? That is certainly new to me!

Chemical Ecology

While my dog was licking my face and kicking up his oxytocin, and consequently making me release the same chemical into my bloodstream, I was thinking about how social animals regulate chemistry within their pack. (Similarly, in the visible/audible dimension: when my dog sends growling signals, I will sometimes get up and check out the window for intruders. He is modulating my behavior). So, I began to see more clearly how chemical exchange might be important for the cohesion of a group of social animals. I suspect there are many more chemicals involved in regulating the behaviors of pack animals – including humans.

And I realized that the orchestration of chemicals – not only in a single animal body – but among a group of animals – is largely invisible to us. But of course: chemicals are too small to see. They are molecules made of atoms. We experience their signaling effects as behaviors and notions. And we humans may have evolved such complex societal structures that we can hardly even recognize the chemical foundations of so much of our social behavior. This is the nature of emergence.

When a new level of emergence takes shape (for instance, when chemistry becomes complex enough to enable replication and variation and therefore genetic-based biology), new, larger structures take on their own agency and begin to regulate their sub-components in turn. Ancient chemistry didn’t just allow an apparatus to emerge that conveys information for replication (genetics); it also allowed a complex network of signaling between organelles, cells, organs, organisms, ecosystems, and societies. Each level gives rise (and gives way) to larger structures.

Emergence and Top-Down Effects

Emergence is a fascinating subject – not only because of the beauty of imagining simple components coming together to make a whole that is larger than the sum of its parts – but because that whole can attain autonomy; it can actually reach down and regulate those components that allowed it to come into existence in the first place. It’s possible that this top-down influence is an innate and necessary property of emergence.

If you are a fan of emergence, like me, you enjoy spinning narratives about how various levels of reality came into existence:

physics
chemistry
biology
intelligence
technology
super intelligence

The name of this blog is “Nature->Brain->Technology” – which is a nod to three of the levels in that list.

Dawkins’ book The Selfish Gene triggered new insights on genetics – and some lively debates. Dawkins coined the term “meme”. And I suspect he may have had a sense that the title of the book itself could turn into a meme. It brought forth ideas about how genes are powerful agents that cause an upward cascade of effects, making us do what we do: from the perspective of the selfish gene, we humans are “lumbering robots” whose purpose is simply to ensure its replication. Everything else is an illusion of human purpose. But it may be more subtle than this. Are genes the only things that are “selfish”? Could there be a lower level of selfishness going on?

My new insight from building oxytocin with my dog is that there is another layer of emergence involved, which is more fundamental than genes, and which gave rise to genes. My insight was echoed by an article called “Forget the selfish gene — the evolution of life is driven by the selfish ribosome”, which states:

“The selfish ribosome model closes a big theoretical gap between, on the one hand, the simple biological molecules that can form on mud flats, oceanic thermal vents or via lightning, and on the other hand LUCA, or the Last Universal Common Ancestor, a single-celled organism.”

Anything that smells of Eve is suspect. It’s more likely that there was a sort of distributed “Eve Soup” with a lot of pseudo-replication happening over a very long period of time. It is possible that the origin of life cannot be pinpointed to a single time and space…specifically because it is emergent.

Besides face-licking, there are probably many more phenomena that we have low-dimensional explanations for. They may someday be revealed as the effects of various selfish agents operating on various levels. Emergence is a scientific tool – a conceptual framework – that helps reveal otherwise invisible forces in nature.

For instance: why do we yawn?

The physiological purpose of a yawn remains a mystery: “The real answer so far is we don’t really know why we yawn.”

It may be more productive to stop looking for “the purpose”, and to look at it through the wide lens of emergence.


Thoughts on the Evolution of Communication

My dog and I engage in a lot of signaling. But it is not always deliberate, and it is not always conscious, and it is not always a two-way process.

In the morning, Otto licks my bald head. He can probably smell what I have been dreaming. I hold him and we have a nice cuddle. Just one of our many routines. He looks at me and I look at him. He is always checking me out. In the process of getting to know each other over several years we have come to read each other’s signals – our body language, interactions, responses, vocalizations…and smells.


Semiosis emerges in the process. If there is a coupling of signals – a mutually-reinforcing signaling loop – two-way communication emerges. It is not always conscious – for either of us. Sometimes, a mutually-reinforcing signaling process which I was previously unaware of becomes apparent to me. When this happens, I become an active agent in that semiosis.

Otto is so intensely attentive to me – my routines (and deviations from them). He probably tunes in to many more of my signals than I do to his. But then again, I am a human: I generate a lot of signal. Does he see this as “communication”? It is not clear: his brain is a dog brain, and mine is a human brain. We don’t share the same word for this experience (he only knows a few English words, and “communication” isn’t one of them).

I can be sure of one thing: we share a lot of signaling. And, as members of two highly-social species, we both like that.

I would conclude from this that communication among organisms in general (the biosemiosis that has emerged on Earth over the last few billion years) came about pretty much the same way that Otto and I established our own little world of emergent semiosis. As life evolved, trillions of coupled signaling channels reinforced each other over time and became more elaborate. Eventually, this signaling became conscious and intentional.

And so here we are: human communication has reached a level of sophistication such that I can type these words – and you can read them. And we can share the experience – across time and space.

The Body Language of a Happy Lizard

I love watching my dog greet us when we come home after being out of the house for several hours. His body language displays a mix of running in circles, panting, bobbing his head up and down, wagging his tail vigorously, wagging his body vigorously, yapping, yipping, barking, doing the down-dog, shaking off, and finally, jumping into our laps. All of this activity is followed by a lot of licking.

There was a time not long ago when people routinely asked, “do animals have intelligence?” and “do animals have emotions?” People who are still asking whether animals have intelligence and emotions seriously need to go to a doctor to get their mirror neurons polished. We realize now that these are useless, pointless questions.

Deconstructing Intelligence

The change of heart about animal intelligence is not just because of results from animal research: it’s also due to a softening of the definition of intelligence. People now discuss artificial intelligence at the dinner table. We often hear ourselves saying things like “your computer wants you to change the filename”, or “self-driving cars in the future will have to be very intelligent”.

The concept of intelligence is working its way into so many non-human realms, both technological and animal. We talk about the “intelligence of nature”, the “wisdom of crowds”, and other attributions of intelligence that reside in places other than individual human skulls.


Can a Lizard Actually Be “Happy”? 

I want to say a few things about emotions.

The problem with asking questions like “can a lizard be happy?” lies in the dependency on words like “happy”, “sad”, and “jealous”. It is futile to try to fit a complex dynamic of brain chemistry, neural firing, and semiosis between interacting animals into a box with a label on it. Researchers doing work on animal and human emotion should avoid using words for emotions. Just the idea of trying to capture something as visceral, somatic, and, um…wordless as an emotion in a single word is counterproductive. Can you even claim that you are feeling one emotion at a time? No: emotions ebb and flow, they overlap, they are fluid – ephemeral. Like memory itself, as soon as you start to study your own emotions, they change.

And besides, words for emotions differ among languages. While English may be the de facto language of science, that does not mean its words for emotions are more accurate.

Alas…since I’m using words to write this article (!) I have to eat my words. So I guess I would have to give the following answer to the question, “can a lizard be happy?”

Yes. Kind of.

The thing is: it’s not as easy to detect a happy lizard as it is to detect a happy dog. Let’s compare these animals:

HUMAN        DOG         COW           BIRD         LIZARD         WORM

This list is roughly ordered by how similar the animal is to humans in terms of intelligent body language. Dogs share a great deal of the body language that we associate with emotions. Dogs are especially good at expressing shame. (Do cats feel less shame than dogs? They don’t appear to show it as much as dogs, but we shouldn’t immediately jump to conclusions because we can’t see it in terms of familiar body language signals).

On the surface, a cow may appear placid and relaxed…in that characteristic bovine way. But an experienced veterinarian or rancher can easily detect a stressed-out cow. As we move farther away from humans in this list of animals, the body language cues become harder and harder to detect. In the simpler animals, do we even know if these emotions exist at all? Again…that may be the wrong question to ask.


It would be wrong of me to assume that there are no emotional signals being generated by an insect, just because I can’t see them.

ants communicating via touch

Ant body language is just not something I am familiar with. The more foreign the animal, the more difficult it is for us humans to attribute “intelligence” or “emotion” to it.

Zoosemiotics may help to disambiguate these problematic definitions, and place the gaze where it may be more productive.

I would conclude that we need to continue to remove those anthropocentric biases that have gotten in the way of science throughout our history.

When we have adequately removed those biases regarding intelligence and emotion, we may more easily see the rich signaling that goes on between all animals on this planet. We will begin to see more clearly a kind of super-intelligence that permeates the biosphere. Our paltry words will step aside to reveal a bigger vista.

I have never taken LSD or ayahuasca, but I’ve heard from those who have that they’ve seen this super-intelligence. Perhaps these chemicals are one way of removing that bias, and taking a peek at that which binds us with all of nature.

But short of using chemicals….I guess some good unbiased science, an open mind, and a lot of compassion for our non-human friends can help us see farther – to see beyond our own body language.

The Tail Wagging the Brain

 


(This is chapter 8 of Virtual Body Language. Extra images have been added for this post. An earlier variation of this blog post was published here.)

The brain can rewire its neurons, growing new connections, well into old age. Marked recovery is often possible even in stroke victims, porn addicts, and obsessive-compulsive sufferers, several years after the onset of their conditions. Neural plasticity is now known to exist in adults, not just in babies and young children going through the critical learning stages of life. This subject has been popularized with books like The Brain that Changes Itself (Doidge 2007).

Like language grammar, music theory, and dance, visual language can be developed throughout life. As visual stimuli are taken in repeatedly, our brains reinforce neural networks for recognizing and interpreting them.

Some aspects of visual language are easily learned, and some may even be instinctual. Biophilia is the human aesthetic appreciation of biological form and all living systems (Kellert and Wilson 1993). The flip side of aesthetic biophilia is a readiness to learn disgust (of feces and certain smells) or fear (of large insects or snakes, for instance). Easy recognition of the shapes of branching trees, puffy clouds, flowing water, plump fruit, and subtle differences in the shades of green and red has laid a foundation for human aesthetic response in all things, natural or manufactured. Our biophilia is the foundation of much of our visual design, even in areas as urban as abstract painting, web page layout, and office furniture design.

The Face

We all have visual language skills that make us especially sensitive to eyebrow motion, eye contact, head orientation, and mouth shape. Our sensitivity to facial expression may even influence other, more abstract forms of visual language, making us responsive to some visual signals more than others—because those face-specific sensitivities gave us an evolutionary advantage as a species so dependent on social signaling. This may have been reinforced by sexual selection.

Even visual features as small as the pupil of an eye contribute to the emotional reading of a face—usually unconsciously. Perhaps the evolved sensitivity to small black dots as information-givers contributed to our subsequent invention of small-yet-critical elements in typographical alphabets.


We see faces in clouds, trees, and spaghetti. Donald Norman wrote a book called Turn Signals are the Facial Expressions of Automobiles (1992). Recent studies using fMRI scans show that car aficionados use the same brain modules to recognize cars that people in general use to recognize faces (Gauthier et al. 2003).

Something about the face appears to be baked into the brain.


Dog Smiling

We smile with our faces. Dogs smile with their tails. The next time you see a Doberman Pinscher with his ears clipped and his tail removed (docked), imagine yourself with your smile muscles frozen dead and your eyebrows shaved off.

Like humans perceiving human smiles, it is likely that the canine brain has neural wiring highly tuned to recognize and process a certain kind of visual energy: an oscillating motion of a linear shape occurring near the backside of another dog. The photoreceptors go to work doing edge detection and motion detection so they can send efficient signals to the brain for rapid space-time pattern recognition.


Tail wagging is apparently a learned behavior: puppies don’t wag until they are one or two months old. Recent findings in dog tail wagging show that a wagger will favor the right side as a result of feeling fundamentally positive about something, and will favor the left side when there are negative overtones in feeling (Quaranta et al. 2007). Until recently, this was not common human knowledge. Could it be that left-right wagging asymmetry has always been a subtle part of canine-to-canine body language?

I often watch intently as my terrier mix, Higgs, encounters a new dog in the park whom he has never met. The two dogs approach each other—often moving very slowly and cautiously. If the other dog has its tail down between its legs and its ears and head held down, and is frequently glancing to the side to avert eye contact, this generally means it is afraid, shy, or intimidated. If its tail is sticking straight up (and not wagging) and if its ears are perked up and the hair on the back is standing on end, it could mean trouble. But if the new stranger is wagging its tail, this is a pretty good sign that things are going to be just fine. Then a new phase of dog body language takes over. If Higgs is in a playful mood, he’ll start a series of quick motions, squatting his chest down to assume the “play bow”, jumping and stopping suddenly, and watching the other dog closely (probably to keep an eye on the tail, among other things). If Higgs is successful, the other dog will accept his invitation, and they will start running around the park, chasing each other, and having a grand old time. It is such a joy to watch dogs at play.


So here is a question: assuming dogs have an instinctive—or easily-learned—ability to process the body language of other dogs, like wagging tails, can the same occur in humans for processing canine body language? Did I have to learn to read canine body language from scratch, or was I born with this ability? I am, after all, a member of a species that has been co-evolving with canines for more than ten thousand years, sometimes in deeply symbiotic relationships. So, perhaps I, Homo Sapiens, already have a biophilic predisposition to canine body language.

Recent studies using dogs as therapy for helping children with autism have shown remarkable results. These children become more socially and emotionally connected. According to autistic author Temple Grandin, this is because animals, like people with autism, do not experience ambivalent emotion; their emotions are pure and direct (Grandin, Johnson 2005). Perhaps canines helped to modulate, buffer, and filter human emotions throughout our symbiotic evolution. They may have helped to offset a tendency towards neurosis and cognitive chaos.

Vestigial Response

Having a dog in the family provides a continual reminder of my affiliation with canines, not just as companions, but as Earthly relatives. On a few rare occasions while sitting quietly at home working on something, I remember hearing an unfamiliar noise somewhere in the house. But before I even knew consciously that I was hearing anything, I felt a vague tug at my ears; it was not an entirely comfortable feeling. This may have happened at other times, but I probably didn’t notice.

I later learned that this is a vestigial response, inherited from our distant ancestors.

The ability for some people to wiggle their ears is due to muscles that are basically useless in humans, but were once used by our ancestors to aim their ears toward a sound. Remembering the feeling of that tug on my ears gives me a feeling of connection to my ancestors’ physical experiences.

In a moment we will consider how these primal vestiges might be coming back into modern currency. But I’m still in a meandering, storytelling, pipe-smoking kind of mood, so hang with me just a bit longer and then we’ll get back to the subject of avatars.

This vestigial response is called ear perking. It is shared by many of our living mammalian relatives, including cats. I remember once hanging out and playing mind-games with a cat. I was sitting on a couch, and the cat was sitting in the middle of the floor, looking away, pretending to ignore me (but of course—it’s a cat).


I was trying to get the cat to look at me or to acknowledge me in some way. I called its name, I hissed, clicked my tongue, and clapped my hands. Nothing. I scratched the upholstery on the couch. Nothing. Then I tore a small piece of paper from my notebook, crumpled it into a tiny ball, and discreetly tossed it onto the floor, behind the cat, outside of its field of view. The tiny sound of the crumpled ball of paper falling to the floor caused one of the cat’s ears to swivel around and aim towards the sound. I jumped up and yelled, “Hah—got you!” The cat’s higher brain quickly tugged at its ear, pulling it back into place, and the cat continued to serenely look off into the distance, as if nothing had ever happened.


Cat body language is harder to read than dog body language. Perhaps that’s by design (I mean…evolution). Dogs don’t seem to have the same talents of reserve and constraint. Dog expressions just flop out in front of you. And their vocabulary is quite rich. Most world languages have several dog-related words and phrases. We easily learn to recognize the body language of dogs, and even less familiar social animals. Certain forms of body language are processed by the brain more easily than others. A baby learns to respond to its mother’s smile very soon after birth. Learning to read a dog’s tail motions may not be so instinctive, but the plastic brain of Homo Sapiens is ready to adapt. Learning to read the signs that your pet turtle is hungry would be much harder (I assume). At the extreme: learning to read the signals from an enemy character in a complicated computer game may be a skill only for a select few elite players.

This I’m sure of: if we had tails, we’d be wagging them for our dogs when they are being good (and for that matter, we’d also be wagging them for each other :D ). It would not be easy to add a tail to the base of a human spine and wire up the nerves and muscles. But if it could be done, our brains would easily and happily adapt, employing some appropriate system of neurons for the purpose of wagging the tail—perhaps re-adapting a system of neurons normally dedicated to doing The Twist. While it may not be easy to adapt our bodies to acquire such organs of expression, our brains can easily adapt. And that’s where avatars come in.

Furries

The Furry Species has proliferated in Second Life. Furry Fandom is a subculture featuring fictional anthropomorphic animal characters with human personalities and human-like attributes. Furry fandom already had a virtual head start—people were role-playing with their online “fursonas” before Second Life existed (witness FurryMUCK, a user-extendable online text-based role-playing game started in 1990). Furries have animal heads, tails, and other such features.

While many Second Life avatars have true animal forms (such as the “ferals” and the “tinies”), many are anthropomorphic: walking upright with human postures and gaits. This anthropomorphism has many creative expressions, ranging from quaint and cute to mythical, scary, and kinky.

Furry anthropomorphism in Second Life is appropriate in one sense: there is no way to directly change the Second Life avatar into a non-human form. The underlying avatar skeleton software is based solely on the upright-walking human form. I know this fact in an intimate way, because I spent several months digging into the code in an attempt to reconstitute the avatar skeleton to allow open-ended morphologies (quadrupeds, etc.). This proved to be difficult. And no surprise: the humanoid avatar morphology code, and all its associated animations—procedural and otherwise—had been in place for several years. It serves a similar purpose to a group of genes in our DNA called Hox genes.


They are baked deep into our genetic structure, and are critical to the formation of a body plan. Hox genes specify the overall structure of the body and are critical in embryonic development when the segmentation and placement of limbs and other body parts are first established. After struggling to override the effects of the Second Life “avatar Hox genes”, I concluded that I could not do this on my own. It was a strategic surgical process that many core Linden Lab engineers would have to perform. Evolution in virtual worlds happens fast. It’s hard to go back to an earlier stage and start over.

Despite the anthropomorphism of the Second Life avatar (or perhaps because of this constraint), the Linden scripting language (LSL) and other customizing abilities have provided a means for some remarkably creative workarounds, including packing the avatar geometry into a small compact form called a “meatball”, and then surrounding it with a custom 3D object, such as a robot or a dragon or some abstract form, complete with responsive animations and particle system effects.

Perhaps furry residents prefer having this constraint of anthropomorphism; it fits with their nature as hybrids. Some furry residents have customized tail animations and use them to express mood and intent. I wouldn’t be surprised if those who have been furries for many years have dreams in which they are living and expressing in their Furry bodies—communicating with tails, ears, and all. Most would agree that customizing a Furry with a wagging-tail animation is a lot easier than undergoing surgery to attach a physical tail.

But as far as the brain is concerned, it may not make a difference.

Where Does my Virtual Body Live?

Virtual reality is not manifest in computer chips, computer screens, headsets, keyboards, mice, joysticks, or head-mounted displays. Nor does it live in running software. Virtual reality manifests in the brain, and in the collective brains of societies. The blurring of real and virtual experiences is a theme that Jeremy Bailenson and his team at Stanford’s Virtual Human Interaction Lab have been researching.


Virtual environments are now being used for research in social sciences, as well as for the treatment of many brain disorders. An amputee who has suffered from phantom pain for years can be cured of the pain through a disciplined and focused regimen of rewiring his or her body image to effectively “amputate” the phantom limb. Immersive virtual reality has been used recently to treat this problem (Murray et al. 2007). Previous techniques using physical mirrors have been replaced with sophisticated simulations that allow more controlled settings and adjustments. When the brain’s body image gets tweaked away from reality too far, psychological problems ensue, such as anorexia. Having so many super-thin sex-symbol avatars may not be helping the situation. On the other hand, virtual reality is being used in research to treat this and other body image-related disorders.

A creative kind of short-term body image shifting is nothing new to animators, actors, and puppeteers, who routinely tweak their own body images in order to express like ostriches or hummingbirds or dogs. When I was working for Brad deGraf, one of the early pioneers in real-time character animation, I would frequent the offices of Protozoa, a game company he founded in the mid-90s.

I was hired to develop a tool for modeling 3D trees which were to be used in a game called “Squeezils”, featuring animals that scurry among the branches. I remember the first time I went to Protozoa. I was waiting in the lobby to meet Brad, and I noticed a video monitor at the other end of the room. An animation was playing that had a couple of 3D characters milling about. One of them was a crab-like cartoon character with huge claws, and the other was a Big Bird-like character with a very long neck. After gazing at these characters for a while, it occurred to me that both of these characters were moving in coordination—as if they were both being puppeteered by the same person. My curiosity got the best of me and I started wandering around the offices, looking for the puppet master. I peeked around the corner. In the next room were a couple of motion capture experts, testing their hardware. On the stage was a guy wearing motion capture gear. When his arms moved, so did the huge claws of the crab—and so did the wimpy wings of the tall bird. When his head moved, the eyes of the crab looked around, and the bird’s head moved around on top of its long neck.

Brad deGraf and puppeteer/animator Emre Yilmaz call this “…performance animation…a new kind of jazz. Also known as digital puppetry…it brings characters to life, i.e. ‘animates’ them, through real-time control of the three-dimensional computer renderings, enabled by fast graphics computers, live motion sampling, and smart software” (deGraf and Yilmaz 1999). When applying human-sourced motion to exaggerated cartoon forms, the human imagination is stimulated: “motion capture” escapes its negative association with droll, unimaginative literal recording. It inspires the human controller to think more like a puppeteer than an actor. Puppeteering is more out-of-body than acting. Anything can become a puppet (a sock, a salt shaker, a rubber hose).

deGraf and Yilmaz make this out-of-body transference possible by “re-proportioning” the data stream from the human controller to fit non-human anatomies.


Research by Nick Yee found that people’s behaviors change as a result of having different virtual representations, a phenomenon he calls the Proteus Effect (Yee 2007). Artist Micha Cardenas spent 365 hours continuously as a dragon in Second Life. She employed a Vicon motion capture system to translate her motions into the virtual world. This project, called “Becoming Dragon”, explored the limits of body modification, “challenging” the one-year transition requirement that transgender people face before gender confirmation surgery. Cardenas told me, “The dragon as a figure of the shapeshifter was important to me to help consider the idea of permanent transition, rejecting any simple conception of identity as tied to a single body at a single moment, and instead reflecting on the process of learning a new way of moving, walking, talking and how that breaks down any possible concept of an original or natural state for the body to be in” (Cardenas 2010).

With this performance, Cardenas wanted to explore not only the issues surrounding gender transformation, but the larger question of how we experience our bodies, and what it means to inhabit the body of another being—real or simulated. During this 365-hour art-performance, the lines between real and virtual progressively blurred for Cardenas, and she found herself “thinking” in the body of a dragon. What interests me is this: how did Micha’s brain adapt to be able to “think” in a different body? What happened to her body image?

The Homunculus

Dr. Wilder Penfield was operating on a patient’s brain to relieve symptoms of epilepsy. In the process he made a remarkable discovery: each part of the body is associated with a specific region in the brain. He had discovered the homunculus: a map of the body in the brain. The homunculus is an invisible cartoon character. It can only be “seen” by carefully probing parts of it and asking the patient what he or she is feeling, or by watching different parts of the body twitch in response. Most interesting is the fact that some parts of the homunculus are much larger than others—totally out of proportion with normal human anatomy. There are two primary “homunculi” (sensory and motor) and their distorted proportions correspond to the relative differences in how much of the brain is dedicated to the different regions.

(illustration: the sensory homunculus)

For instance, eyes, lips, tongue, and hands are proportionately large, whereas the skull and thighs are proportionately small. I would not want to encounter a homunculus while walking down the street or hanging out at a party—homunculi are not pretty—in fact, I find them quite frightening. Indeed they have cropped up in haunting ways throughout the history of literature, science, and philosophy.

But happily, for purposes of my research, any homunculus I encounter would be very good at nonverbal communication, because eyes, mouth, and hands are major expressive organs of the body. As a general rule, the parts of our body that receive the most information are also the parts that give the most. What would a “concert pianist homunculus” look like? Huge fingers. How about a “soccer player homunculus”? Gargantuan legs and tiny arms.


The Homuncular Avatar

Avatar code etches embodiment into virtual space. A “communication homunculus” was sitting on my shoulder while I was working at There.com and arguing with some computer graphics guys about how to engineer the avatar. Chuck Clanton and I were both advocates for dedicating more polygons and procedural animation capability to the avatar’s hands and face. But polygons are graphics-intensive (at least they were back then), and procedural animation takes a lot of engineering work. Consider this: in order to properly animate two articulated avatar hands, you need at least twenty extra joints, on top of the approximately twenty joints used in a typical avatar animation skeleton. That roughly doubles the joint count.

The argument for full hand articulation was as follows: like the proportions of cortex dedicated to these communication-intensive areas of the body, socially-oriented avatars should have ample resources dedicated to face and hands.

When the developers of Traveler were faced with the problem of how to squeeze the most out of the few polygons they could render on the screen at any given time, they decided to make their avatars just floating heads—because their world centered on vocal communication (telephone meets avatar). The developers of ActiveWorlds, which had a similar polygonal predicament, chose whole avatar bodies (and they were quite clunky by today’s standards).

These kinds of choices determine where and how human expression will manifest.

Non-Human Avatars

Avatar body language does not have to be a direct prosthetic to our corporeal expression. It can extend beyond the human form; this theme has been a part of avatar lore since the very beginning. What are the implications for a non-human body language alphabet? It means that our body language alphabet can (and I would claim already has begun to) include semantic units, attributes, descriptors, and grammars that encompass a superset of human form and behavior.

The illustration below shows the pentadactyl morphology of various vertebrate limbs. This has implications for a common underlying code used to express meta-human forms and motions.

(illustration: pentadactyl morphology of various vertebrate limbs)

A set of parameters (bone lengths, angle offsets, motor control attributes, etc.) can be specified, along with gross morphological settings (i.e., four limbs and a tail; six limbs and no head; no limbs and no head, etc.) Once these parameters are provided in the system to accommodate the various locomotion behaviors and forms of body language, they can be manipulated as genes, using interactive evolution interfaces or genetic algorithms, or simply tweaked directly in a customization interface.
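To make this a bit more concrete, here is a minimal sketch in Python of how such a gene set might be represented and mutated by a genetic algorithm. All the names, parameters, and tuning constants here are my own invention for illustration, not from any real avatar system:

```python
import random
from dataclasses import dataclass, field, replace
from typing import List

@dataclass
class MorphGenome:
    """Hypothetical gene set for a meta-human avatar body plan."""
    limb_count: int = 4        # gross morphology: 4 limbs, 6 limbs, 0 limbs...
    has_tail: bool = True
    bone_lengths: List[float] = field(default_factory=lambda: [0.5, 0.4, 0.3])
    joint_offsets: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    motor_gain: float = 1.0    # motor-control attribute shaping locomotion style

def mutate(g: MorphGenome, rate: float = 0.1) -> MorphGenome:
    """Perturb each 'gene' with probability `rate`, genetic-algorithm style."""
    child = replace(g)  # copy the parent genome
    child.bone_lengths = [max(0.05, b + random.gauss(0.0, 0.05))
                          if random.random() < rate else b
                          for b in g.bone_lengths]
    child.joint_offsets = [o + random.gauss(0.0, 0.1)
                           if random.random() < rate else o
                           for o in g.joint_offsets]
    if random.random() < rate:
        child.limb_count = max(0, g.limb_count + random.choice([-2, 2]))  # limbs come in pairs
    if random.random() < rate:
        child.has_tail = not g.has_tail
    if random.random() < rate:
        child.motor_gain = max(0.1, g.motor_gain + random.gauss(0.0, 0.1))
    return child
```

An interactive-evolution interface would then simply render a handful of mutated children and let the user pick the next parent, generation after generation.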

But this morphological space of variation doesn’t need to stop at the vertebrates. We’ve seen avatars in the form of fishes, floating eyeballs, cartoon characters, abstract designs—you name it. Artists, in the spirit of performance artist Stelarc, are exploring the expressive possibilities of avatars and remote embodiment. Micha Cardenas, Max Moswitzer, Jeremy Owen Turner and others articulate the very aesthetics of embodiment and avatarhood—the entire possible expression-space. What are the possibilities of having a visual (or even aural) manifestation of yourself in an alternate reality? As I mentioned before, my Second Life avatar is a non-humanoid creature, consisting of a cube with tentacles.

(illustration: my cube-and-tentacles avatar, shown next to Joanna’s avatar)

On each side of the cube are fractals based on an image of my face. This cube floats in space where my head would normally be. Attached to the bottom of the cube are several green tentacles that hang like those of a jellyfish. This avatar was built by JoannaTrail Blazer, based on my design. In the illustration, Joanna’s avatar is shown next to mine.

I chose a non-human form for a few reasons: One reason is that I prefer to avoid the uncanny valley, and will go to extremes in avoiding droll realism, employing instead the visual tools of abstraction and symbolism. By not trying to replicate the image of a real human, I can sidestep the problem of my avatar “not looking like me” or “not looking right”. My non-human avatar also allows me to explore the realm of non-human body language. On the one hand, I can express yes and no by triggering animations that cause the cube to oscillate like a normal head. But I could also express happiness or excitement by triggering an animation that causes the tentacles to flare out and oscillate, or to roll sensually like an inverted bouquet of cat tails. Negative emotions could be represented by causing the tentacles to droop limply (reinforced by having the cube-head slump downward). While the cube-head mimics normal human head motions, the tentacles do not correspond to any body language that I am physically capable of generating. They could tap other sources, like animal movement, and basic concepts like energy and gravity, expanding my capacity for expression.

Virtual Dogs

I want to come back to the topic of canine body language now, but this time, I’d like to discuss some of the ways that sheer dogginess in the gestalt can be expressed in games and virtual worlds. The canine species has made many appearances throughout the history of animation research, games, and virtual worlds. Rob Fulop, of the ‘90s game company PF Magic, created a series of games based on characters made of spheres, including the game “Dogz”. Bruce Blumberg of the MIT Media Lab developed a virtual interactive dog AI, and is also an active dog trainer. Dogs are a great subject for doing AI—they are easier to simulate than humans, and they are highly emotional, interactive, and engaging.

While prototyping There.com, Will and I developed a dog that would chase a virtual Frisbee tossed by your avatar (involving simultaneous mouse motion to throw the Frisbee and hitting the space key to release the Frisbee at the right time). Since I was interested in the “essence of dog energy”, I decided not to focus on the graphical representation of the dog so much as the motion, sound, and overall behavior of the dog. So, for this prototype, the dog had no legs: just a sphere for a body and a sphere for a head (each rendered as toon-shaded circles—flat coloring with an outline). These spheres could move slightly relative to each other, so that the head had a little bounce to it when the dog jumped.

The eyes were rendered as two black cartoon dots, and they would disappear when the dog blinked. Two ears, a tail, and a tongue were programmed, because they are expressive components. The ears were animated using forward dynamics, so they flopped around when the dog moved, and hung downward slightly from the force of gravity. I programmed some simple logic to make an oscillating, “panting” force in the tongue which would increase in frequency and force when the dog was especially active, tired, or nervous. I also programmed “ear perkiness” which would cause the ears to aim upwards (still with a little bit of droop) whenever a nearby avatar produced a chat that included the dog’s name. I programmed the dog to change its mood when a nearby avatar produced the chats “good dog” or “bad dog”. And this was just the beginning. Later, I added more AI allowing the dog to learn to recognize chat words, and to bond to certain avatars.
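The actual prototype code is long gone, but as a rough sketch of the ideas just described (in Python rather than the original implementation language, with invented names and tuning constants), the tongue and ear logic worked something like this:

```python
import math

class PrototypeDog:
    """Sketch of the expressive logic of the legless There.com prototype dog."""

    def __init__(self, name: str):
        self.name = name
        self.activity = 0.0      # 0 = resting, 1 = running hard
        self.mood = 0.0          # -1 = just scolded, +1 = just praised
        self.ear_perk = 0.0      # 0 = drooping, 1 = fully perked
        self.tongue_angle = 0.0  # driven by the panting oscillator

    def update(self, t: float, dt: float):
        """Per-frame update: t is elapsed seconds, dt the frame step."""
        # Panting: an oscillating force on the tongue whose frequency and
        # amplitude grow with how active (or nervous) the dog is.
        freq = 2.0 + 6.0 * self.activity            # Hz (invented tuning)
        amp = 0.2 + 0.8 * self.activity
        self.tongue_angle = amp * math.sin(2.0 * math.pi * freq * t)
        # Perked ears relax back toward their droop over a couple of seconds.
        self.ear_perk = max(0.0, self.ear_perk - 0.5 * dt)

    def hear_chat(self, text: str):
        """React to a nearby avatar's chat line."""
        low = text.lower()
        if self.name.lower() in low:
            self.ear_perk = 1.0                     # ears aim up at its own name
        if "good dog" in low:
            self.mood = min(1.0, self.mood + 0.5)
        elif "bad dog" in low:
            self.mood = max(-1.0, self.mood - 0.5)
```

Calling `dog.hear_chat("good dog, Rex!")` on a dog named Rex would perk its ears and lift its mood; the per-frame `update` then expresses that state through motion.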

Despite the lack of legs and its utterly crude rendering, this dog elicited remarkable puppy-like responses in the people who watched it or played with it. The implication from this is that what makes a dog a dog is not merely the way it looks, but the way it acts—its essence as a distinctly canine space-time energy event. Recall the exaggerated proportions of the human communication homunculus. For this dog, ears, tail, and tongue constituted the vast majority of computational processing. That was on purpose; this dog was intended as a communicator above all else. Consider the visual language of tail wagging that I brought up earlier, a visual language which humans (especially dog-owners) have incorporated into their unconscious vocabularies. What lessons might we take from the subject of wag semiotics, as applied to the art and science of wagging on the internet?

Tail Wagging on the Internet

In the There.com dog, the tail was used as a critical indicator of its mood at any given moment, raising when the dog was alert or aroused, drooping when sad or afraid, and wagging when happy. Take wagging as an example. The body language information packet for tail wagging consisted of a Boolean value sent from the dog’s Brain to the dog’s Animated Body, representing simply “start wagging” or “stop wagging”. It is important to note that the dog’s Animated Body is constituted on clients (computers sitting in front of users), whereas the dog’s Brain resides on servers (arrays of big honkin’ computers in a data center that manage the “shared reality” of all users). The reason for this mind/body separation is to make sure that body language messaging (as well as overall emotional state, physical location and orientation of the dog, and other aspects of its dynamic state) are truthfully conveyed to all the users.

Clients are subject to animation frame rates lagging, internet messages dropping out, and other issues. All users whose avatars are hanging out in a part of the virtual world where the dog is hanging out need to see the same behavior; they are all “seeing the same dog”—virtually-speaking.


It would be unnecessary (and expensive in terms of internet traffic) for the server to send individual wags several times a second to all the clients. Each client’s Animated Body code is perfectly capable of performing this repetitive animation. And, because of different rendering speeds among various clients, lag times, etc., the wagging tail on your client might be moving left-right-left-right, while the wagging tail on my client is moving right-left-right-left. In other words, they might be out of phase or wagging at slightly different speeds. These slight differences have almost no effect on the reading of the wag. “I am wagging my tail” is the point. That’s a Boolean message: one bit. The reason I am laboring over this point harkens back to the chapter on a Body Language Alphabet: a data-compressed message, easy to zip through the internet, is efficient for helping virtual worlds run smoothly. It is also inspired by biosemiotics: Mother Nature’s efficient message-passing protocol.
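Here is a minimal sketch of that division of labor. This is hypothetical Python, not There.com’s actual networking code; the `clients` list is assumed to hold socket-like objects with a `send()` method:

```python
import math
import time

# Server side (the dog's Brain): decide state changes, broadcast one bit.
def broadcast_wag(dog_id: int, wagging: bool, clients: list):
    packet = {"dog": dog_id, "wag": wagging}   # the entire body-language message
    for client in clients:
        client.send(packet)                    # sent once per state change, not per frame

# Client side (the dog's Animated Body): animate the repetitive motion locally.
class TailAnimator:
    def __init__(self):
        self.wagging = False

    def on_packet(self, packet: dict):
        self.wagging = packet["wag"]           # flip the one-bit state

    def tail_angle(self) -> float:
        # The wag phase is client-local: your wag and my wag may be out of
        # phase or at slightly different speeds, but "I am wagging" reads
        # the same on every screen.
        if self.wagging:
            return 0.5 * math.sin(time.time() * 12.0)   # oscillate at roughly 2 Hz
        return -0.4                                     # drooped/neutral tail
```

The one bit crosses the network; the wag itself is synthesized on each client.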

On Cuttlefish and Dolphins

The homunculus of Homo Sapiens might evolve into a more plastic form—maybe not on a genetic/species level, but at least during the lifetimes of individual brains, assisted by the scaffolding of culture and virtual world technology. This plasticity could approach strange proportions, even as our physical bodies remain roughly the same. As we spend more of our time socializing and interacting on the internet, partly in an effort to travel less and reduce greenhouse gas emissions, our embodiment will naturally take on the forms appropriate to the virtual spaces we occupy. And these spaces will not necessarily mimic the spaces of the real world, nor will our embodiments always look and feel like our own bodies. And with these new virtual embodiments will come new layers of body language. Jaron Lanier uses the example of cephalopods as species capable of animated texture-mapping and physical morphing used for communication and camouflage—feats that are outside the range of physical human expression.

In reference to avatars, Lanier says, “The problem is that in order to morph, humans must design avatars in laborious detail in advance. Our software tools are not yet flexible enough to enable us, in virtual reality, to think ourselves into different forms. Why would we want to? Consider the existing benefits of our ability to create sounds with our mouths. We can make new noises and mimic existing ones, spontaneously and instantaneously. But when it comes to visual communication, we are hamstrung…We can learn to draw and paint, or use computer-graphics design software. But we cannot generate images at the speed with which we can imagine them” (Lanier 2006).


Once we have developed the various non-humanoid puppeteering interfaces that would allow Lanier’s vision, we will begin to invent new visual languages for realtime communication. Lanier believes that in the future, humans will be truly “multihomuncular”.

Researchers from Aberdeen University and the Polytechnic University of Catalonia found that dolphins use discrete units of body language as they swim together near the surface of water. They observed efficiency in these signals, similar to what occurs in frequently-used words in human verbal language (Ferrer i Cancho and Lusseau 2009). As human natural language goes online, and as our body language gets processed, data-compressed, and alphabetized for efficient traversal over the internet, we may start to see more patterns of our embodied language that resemble those created by dolphins, and many other social species besides. The background communicative buzz of the biosphere may start to make more sense in the process of whittling our own communicative energy down to its essential features, and being able to analyze it digitally. With a universal body language alphabet, we might someday be able to animate our skin like cephalopods, or speak “dolphin”, using our tails, as we lope across the virtual waves.
