Do tech companies really need more vaginas, dark skin, and gray beards?

We need to get to the bottom of the issue about diversity in the software industry.

It’s not that software companies simply need to hire more people who possess vaginas, dark skin, or gray beards…to reach some kind of quota, or to make their About page look hip. It’s that software companies need to embrace diversity in ways of thinking, life experience, socio-economic backgrounds, ways of building things, and ways of setting priorities. This might result in some outwardly-visible diversity as a by-product. But in my opinion, that’s not the point.

Software increasingly runs our lives – EVERYONE’S LIVES – including people who possess vaginas, dark skin, and gray beards. One should not assume that all those young overpaid white males who would sooner send you a Slack message than look you in the eye are going to know how to build the tools that are running more and more of our lives….

…in a country that is becoming more diverse, not less – a fact that the United States Bollocks in Chief is clearly not happy about.

Building Diversity Where it Matters

Sure, there tends to be more diversity in design, business, and marketing departments, but these aspects of a tech company generally get established after the DNA of the company has been forged. The DNA of a software company is typically established when wealthy white male venture capitalists invest in wealthy white male programmers who (sometimes) become even wealthier, and who then use that wealth to start new companies. They hire their wealthy white male programmer friends (who can afford to work without salary in exchange for shares – thereby becoming likely to acquire still more wealth).

Follow the money.

A slightly more diverse company is then built around this core of wealthy white males. Then a slick, mobile-friendly web page is erected, featuring high-res photos of gleeful African Americans and Chinese women. Maybe an Indian. And (occasionally) the token graybeard.

Dynastic Privilege

It’s the same phenomenon that drives wealth inequality in our country. Unchecked capitalism is fueling an oligarchy that is inhibiting the American Dream for those who find themselves on the losing end of financial opportunity.

Did I just change the subject from tech company diversity to wealth inequality in the United States? No: it’s the same subject.

“I believe dynastic privilege is one of the major contributors to the lack of diversity in tech” – Adam Pisoni

So, instead of talking about skin color, gender, and age, we should be talking about the deeper underlying cultural and economic forces that make it so hard for tech companies to change their DNA.

Please reply with your comments. Agree or disagree. Either way, I’d love to hear your thoughts!

Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
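As a minimal sketch of that idea (all names here are hypothetical, and a stand-in function takes the place of millions of real human observers): an ordinary evolutionary loop whose fitness signal comes from an external rating source rather than a hand-designed criterion.

```python
import random

def evolve(population, rate_fn, generations=50, mutation_rate=0.1):
    """Generic evolutionary loop. The fitness function is external --
    in this proposal it would aggregate ratings from human observers."""
    for _ in range(generations):
        # Score every candidate using the external fitness signal.
        scored = sorted(population, key=rate_fn, reverse=True)
        survivors = scored[: len(scored) // 2]           # selection
        offspring = [mutate(s, mutation_rate) for s in survivors]
        population = survivors + offspring               # next generation
    return max(population, key=rate_fn)

def mutate(genome, rate):
    # Each gene has a small chance of drifting by Gaussian noise.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

# Stand-in for the human judges: here "interesting" simply means
# close to a hidden target vector.
TARGET = [3.0, -1.0, 2.0]
def fake_human_rating(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
best = evolve(pop, fake_human_rating)
```

Because the top half of each generation always survives, the best candidate is never lost; the human signal only ever has to answer "which of these is better", not define fitness in advance.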

Clarifications

Since there is so much discussion (and confusion) about AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every good idea, insight, and judgement just by digesting the constant data spew from humanity. But without a selection process (such as back-propagation and the other feedback mechanisms used in training AI), the AI cannot truly learn. Learning requires an ecosystem of continual feedback.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that substitute overall impressions for detail. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to its mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.
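One way to turn those one-on-one human choices into a usable fitness score (a sketch only; `fake_judge` is a hypothetical stand-in for a real human observer) is to run every pair of candidate simulations past a judge and count wins:

```python
import itertools

def fitness_from_judgments(blobs, prefers):
    """Convert pairwise human choices ("this blob is more interesting
    than that blob") into numeric fitness: one point per comparison won."""
    wins = {i: 0 for i in range(len(blobs))}
    for i, j in itertools.combinations(range(len(blobs)), 2):
        winner = i if prefers(blobs[i], blobs[j]) else j
        wins[winner] += 1
    return wins

# Stand-in judge: prefers blobs with more internal variation.
def fake_judge(a, b):
    spread = lambda blob: max(blob) - min(blob)
    return spread(a) > spread(b)

blobs = [[0.5, 0.5, 0.5], [0.1, 0.9, 0.4], [0.45, 0.55, 0.5]]
scores = fitness_from_judgments(blobs, fake_judge)
# blob 1 has the widest spread, so it wins both of its comparisons
```

The judges never need to articulate *why* one blob is more interesting than another; the preference itself is the selection pressure.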

As the continual stream of raw data accumulates, the AI will start to build impressions and common themes – much as Deep Dream does when it collects images, finds recurring themes, and starts riffing on them.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans and eventually on its own. It also maintains a memory of bad decisions and failed experiments – which are all part of growing up.

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many, many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and replays parts of those simulations constantly as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start out pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of most complex mammals naturally expect this. Our AI brain children are naturally derived from a mammalian brain. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

Science writers who say machines have feelings…lack intelligence.

I saw an article by Peter Dockrill with the headline, “Artificial intelligence should be protected by human rights, says Oxford mathematician”.

The subtitle is: “Machines Have Feelings Too”.

Regarding the potential dangers of robots and computers, Peter asks: “But do robots need protection from us too?” Peter is apparently a “science and humor writer”. I think he should stick with just one genre.

Just more click-bait.

There are too many articles on the internet with headlines like this. They are usually covered with obnoxious, eye-jabbing ads, flitting in front of my face like giant colorful moths. It’s a carnival – through and through.

I could easily include any number of articles about the “terrifying” future of AI, “emotional machines”, “robot ethics”, and other cartoon-like dilutions of otherwise thoughtful well-crafted science fiction.

Good science fiction is better than bad science journalism.


Now, back to this silly subject of machines having feelings:

Some of my previous articles express my thoughts on the future of AI, such as:

No Rafi. The Brain is not a Computer

The Singularity is Just One in a Series

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Intelligence is NOT One-Dimensional

I think we should be working to fix our own emotional mess, instead of trying to make vague, naive predictions about machines having feelings. Machines will – eventually – have something analogous to animal motivation and human states of mind, but by then the human world will look so different that the current conversation will be laughable.

Right now, I am in favor of keeping the “feelings” on the human side of the equation.

We’re still too emotionally messed up to be worrying about how to tend to our machines’ feelings. Let’s fix our own feelings first before giving them to our machines. We still have that choice.

And now, more stupidity from Meghan Neal:

“Computers are already faster than us, more efficient, and can do our jobs better.”

Wow Meghan, you sure do like computers, don’t you?

I personally have more hope, respect, and optimism for our species.

In this article, Meghan makes sweeping statements about machines with feelings, including how “feeling” computers are being used to improve education.

The “feeling” robots she is referring to are machines with a gimmick – they are brain-dead automatons with faces attached to them. Many savvy futurists suggest that true AI will not result from humans trying to make machines act like humans. That’s anthropomorphism. Programming pre-defined body language into an unthinking robot makes for interesting and insightful experimentation in human-machine interaction. But please! Don’t tell me that these machines have “feelings”.

This article says: “When Nao is sad, he hunches his shoulders forward and looks down. When he’s happy, he raises his arms, angling for a hug. When frightened, Nao cowers, and he stays like that until he is soothed with some gentle strokes on his head.”

Pardon me while I projectile vomit.

Any time you are tempted to compare human intelligence with computers, consider what Marvin Minsky once said.

The Information EVOLUTION

I remember several decades ago learning that we were at the beginning of an information revolution. The idea, as I understood it, was that many things are moving towards a digital economy; even wars will become information-based.

The information revolution takes over where the industrial revolution left off.

I am seeing an even bigger picture emerging – it is consistent with the evolution of the universe and Earth’s biosphere.


At the moment, I can hear a bird of prey (I think it’s a falcon) that comes around this neighborhood every year about this time and makes its call from the tree tops. When I think about the amount of effort that birds put into producing mating calls and other kinds of communication, I am reminded of how large a role information plays in the biological world. The variety and vigor of bird song is amazing. From an evolutionary point of view, one has to assume that there is great selective pressure to invest such energy in organized sound.

This is just a speck of dust in comparison to the evolution of communication in our own species, for whom information is a major driver in our activities. Our faces have evolved to give and receive a very high bandwidth of information between each other (compare the faces of primates to those of less complex animals, and notice the degree to which the face is optimized for giving and receiving information).

Our brains have grown to massive proportions (relatively-speaking) to account for the role that information plays in the way our species survives on the planet.

Now: onto the future of information…

Beaming New Parts to the Space Station


Guess which is more expensive:

  1. Sending a rocket to the space station with a new part to repair an old one.
  2. Beaming up the instructions to build the part on an on-board 3D printer.

You guessed it.

And this is where some people see society going in general. 3D printing will revolutionize society in a big way: less moving of atoms, more moving of bits.

To what degree will the manipulation of bits become more important than the manipulation of atoms?

Not Just a Revolution: Evolution

My sense is that the information revolution is not merely one in a series of human eras: it is the overall trend of life on Earth. We humans are the agents of the latest push in this overall trend.

Some futurists predict that nanotechnology will make it possible to infuse information processing into materials, giving rise to programmable matter. Ray Kurzweil predicts that the deep nano-mingling of matter and information will be the basis for a super-intelligence that can spread throughout the universe.

Okay, whatever.

For now, let’s ride this information wave and try to use the weightlessness of bits to make life better for all people (and all life-forms) on Earth – not just a powerful few.

No Rafi. The brain is not a computer.

Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.


The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”

Who the fuck said computers can simulate the entire universe?

That is a huge assumption. It’s also wrong.

We need to always look closely at the assumptions that people use to build theories. If it could be proven that computers can simulate the entire universe, then this theory would be slightly easier to swallow.

By the way, a computer cannot simulate the entire universe because it would have to simulate itself simulating itself simulating itself.

The human brain is capable of computation, and that’s why humans are able to invent computers.

The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.

The Singularity is Just One in a Series

I’m reading Kurzweil’s The Singularity is Near.

It occurs to me that the transition that the human race is about to experience is similar to other major transitions that are often described as epochs – paradigm shifts – in which a new structure emerges over a previous structure. There are six key epochs that Kurzweil describes. (The first four are not unlike epochal stages described by Terrence Deacon and others.)

  1. Physics and Chemistry
  2. Biology and DNA
  3. Brains
  4. Technology
  5. Human Intelligence Merges with Human Technology
  6. Cosmic Intelligence

When a new epoch comes into being, the agents of that new epoch don’t necessarily eradicate, overcome, usurp, reduce, or impede the agents of the previous epoch. Every epoch stands on the shoulders of the last epoch. This is one reason not to fear the Singularity…as if it is going to destroy us or render us un-human. In fact, epoch number 5 may allow us to become more human (a characterization that we could only truly make after the fact – not from our current vantage point).

I like to think of “human” as a verb: as a shift from animal to post-human, because it characterizes our nature of always striving for something more.

animal to posthuman

There are debates raging on whether the Singularity is good or bad for humanity. One way to avoid endless debate is to do the existential act: to make an attempt at determining the fate of humanity, rather than sit passively and make predictions.  As Alan Kay famously said, “the best way to predict the future is to invent it”. We should try to guide the direction of the next epoch as much as we can while we are still the ones in charge.

In a previous article, which criticizes some predictions by Nick Bostrom, I compare our upcoming epochal shift to a shift that happened in the past, when multi-cellular beings evolved. Consider:

Maybe Our AI Will Evolve to Protect Us And the Planet

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

I am not a full-fledged Singularitarian. I prefer to stay agnostic as long as I can. It’s not just a human story. Our Singularity is just the one that is happening to us at the moment.

Similarly, the emergence of previous epochs may have been experienced as Singularities to those that came before.