Will Super Smart Robots take over the World?

Will super smart robots take over the world? I know this is probably something keeping you up at night. I did some back-of-the-napkin calculations to gain some insight into this burning question.

Power Consumption vs. Clock Speed vs. Cores

In order to displace us, super smart robots are going to need to optimize their efficiency. They'll need to store energy, move around, and think with an efficiency at least as great as ours. We are well optimized for our environment with quite a practical balance between computational power sufficient for survival, energy storage, and an ability to modify the environment to our needs.

Starting with computational efficiency: the relationship between clock speed and power consumption varies from nearly linear to cubic in the more advanced designs. To put it another way, power consumption goes up or down with roughly the cube of the processor speed - a modern CPU running at half clock can consume as little as one eighth the power.

Power consumption per CPU core is of course linear; if you have two processors running the same program, they will consume twice the power of one of those processors running the program. So, if you can structure your code to utilize available cores effectively, you should be able to achieve greater performance for less power. For example, a dual-core Pentium might be able to perform the same processing at half the clock and one quarter the power of a single Pentium running at full clock.
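Here is a minimal back-of-the-napkin sketch in Python of those two examples, assuming the idealized cubic clock/power model above (real chips also pay for static leakage and per-core overhead):

    # Idealized dynamic power model: P is proportional to cores * f^3.
    # A napkin model only - it ignores leakage and shared-cache overhead.
    def relative_power(clock_ratio, cores=1):
        """Power relative to a single core at full clock."""
        return cores * clock_ratio ** 3

    print(relative_power(0.5))           # one core at half clock  -> 0.125 (1/8)
    print(relative_power(0.5, cores=2))  # dual core at half clock -> 0.25  (1/4)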

If you follow this logic to its extreme conclusion, the ideal solution is clearly to have as many processors as physically possible - even billions - running as slowly as possible yet generating results quickly enough to still be useful. This will maximize your computation per watt, which is clearly what you want to do if you want your robot to have a reasonable operational capacity.
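To put numbers on that extreme: if total throughput is held fixed by running N cores at 1/N of full clock, the cubic model says total power falls as 1/N squared. A quick sketch:

    # Hold throughput fixed: N cores, each at 1/N of full clock.
    # Under the cubic model, total power = N * (1/N)^3 = 1/N^2.
    for n in (1, 2, 4, 1_000, 1_000_000_000):
        print(f"{n:>13} cores -> {n * (1.0 / n) ** 3:.1e} of single-core power")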

How Fast is Fast Enough?

From the point of view of the graphics community, quickly enough might be defined as 24 Hz for film, or 30 or 60 Hz for games; these numbers correspond to the human flicker fusion response rate. The significance of the flicker fusion rate from a survival perspective is clear, given the round-trip reaction time between seeing something and initiating a motor response. A startle eye-blink occurs around 100 to 200 ms after a stimulus, with the minimum measurable reaction time being about 10 ms. The flicker fusion rate is just faster than the speed at which we process visual data at a survival-sufficient rate. Completing computations at around 100 Hz therefore seems like a sufficient rate for human-like performance, so we can argue that performing at that speed would be good for our super robot too; faster will consume more power; slower, and the robot won't be able to keep up with us and won't be super.

http://hypertextbook.com/facts/2006/reactiontime.shtml
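For a feel of the timing budget, here is a small sketch converting those rates into time per cycle; at 100 Hz each "thought cycle" gets 10 ms, right at the minimum measurable reaction time and well inside the 100 to 200 ms startle window:

    # Time budget per computation cycle at candidate "thought rates".
    for hz in (24, 30, 60, 100):
        print(f"{hz:>3} Hz -> {1000.0 / hz:.1f} ms per cycle")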

What is Good Energy Density?

The next question is: what is a reasonable energy density? A super robot should be able to carry around as much energy per unit weight as we do. If they have a lower energy density, they won't be able to run as long as we do without refueling. If their energy density is better, they'd be in good shape. The best batteries hold between 0.5 and 1 MJ/kg.

http://en.wikipedia.org/wiki/Lithium_ion_battery

Fat, which is where we store our energy, holds roughly 38 MJ/kg, somewhere between 40 and 75 times more.

http://hypertextbook.com/facts/2004/PingZhang.shtml

http://en.wikipedia.org/wiki/Energy_density
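Running the ratio from the figures above (a quick sketch):

    # Energy density of body fat vs. the best batteries (figures cited above).
    FAT_MJ_PER_KG = 38.0
    for battery_mj_per_kg in (0.5, 1.0):
        ratio = FAT_MJ_PER_KG / battery_mj_per_kg
        print(f"fat vs {battery_mj_per_kg} MJ/kg battery -> {ratio:.0f}x")
    # -> 76x and 38x: roughly 40 to 75 times the energy per kilogram.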

Gasoline and similar fuels have an energy density similar to fat's, but they have obvious issues of production and supply that limit availability over the long term. So let's go with fat as a current, plausible, ideal energy source for converting energy into computation - one the individual robot can easily manufacture for itself. Fat is made through a very simple process, and makes efficient use of some of the most common elements found in the galaxy: hydrogen, oxygen, and carbon.

http://earthguide.ucsd.edu/virtualmuseum/ita/03_2.shtml

How Many Cores?

For reference, note that the neuron is the basic computational node in the brain, and that the brain has on the order of 100 billion neurons. The typical power consumption of the human brain is roughly 20 W, about 20% of the total energy consumption of the human body. Operations per second is not a reasonable thing to work out for the human brain, since a neuronal computation is not comparable to what a CPU does, but I will point out that the computation is super-parallel, and that as we cram more, slower CPU cores into the super robot brain (more is better, until no more can be packed in), the nature of the computation performed will change, and will probably come to approximate what the human brain does.

http://hypertextbook.com/facts/2001/JacquelineLing.shtml
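A quick sketch of the per-node power budget implied by those figures - the target a brain-like array of slow cores would have to hit:

    # Per-neuron power budget implied by the figures above.
    BRAIN_WATTS = 20.0
    NEURONS = 100e9  # on the order of 100 billion
    print(f"{BRAIN_WATTS / NEURONS * 1e9:.1f} nW per neuron")  # -> 0.2 nW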

Nature, having run a long-term optimization project on us, has probably settled on the configuration of the human as the best-to-date tradeoff between computational power, energy consumption, waste heat, and power delivery and storage. We can probably use that as a benchmark for an ideal robot.

The Cylons are Us

Super robots would face all the survival issues that we do - energy creation, storage, and consumption. They would face the same infrastructure issues we do in terms of occupying space for physical existence and for mechanisms of energy production such as farms. They would probably have to be just like us, since we've already evolved to optimally fill our niche on the Earth. If they were smarter than us, they would need to move out of that optimal state; they'd have to trade off computation, mobility, or numbers, any of which would leave us our niche.

The only variation that would get a just-like-us robot an edge would be to overclock for short periods of time, at the expense of markedly increased energy consumption. So in conclusion, I think we don't really have to worry about it, at least until batteries can store more energy than fat.
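To see why overclocking only works in short bursts, here is one last napkin sketch. The 20 W brain budget comes from the figures above; the 1 kg of fat-equivalent fuel is a hypothetical round number:

    # Hypothetical robot brain: 20 W baseline, 1 kg fat-equivalent fuel (38 MJ).
    # Overclocking by k multiplies power by k^3 under the cubic model.
    FUEL_J = 38e6
    BASE_WATTS = 20.0
    for k in (1, 2, 4):
        hours = FUEL_J / (BASE_WATTS * k ** 3) / 3600.0
        print(f"{k}x clock -> {hours:7.1f} hours of runtime")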

Useful Conclusions

  1. More parallelism and less clock is excellent for optimizing how we use generated power. This doesn't immediately lead to burning less coal, but in the long run it most likely does. It also means that computing can become even more pervasive in an efficient way.
  2. Super smart robots that can exceed human capabilities can exist, but not in sufficient numbers to displace us. They can either approach human capabilities and power consumption and thus match our numbers, or they can exceed our capabilities and see their numbers fall in cubic proportion, and thus not pose a threat to our dominance.

Author's note: This article is really about why multi-core processing is an excellent and natural thing, and what the limits of multi-core processing are. It's presented in a humorous-to-me context.


Content by Nick Porcino (c) 1990-2011