The practice of evolutionary algorithms involves a mundane yet inescapable phase, namely, finding parameters that work well. How big should the population be? How many generations should the algorithm run? What tournament size should tournament selection use? What probabilities should one assign to crossover and mutation? All these nagging questions need good answers if one is to attain success. Through an extensive series of experiments over multiple evolutionary algorithm implementations and problems, we show that parameter space tends to be rife with viable parameters. We aver that this renders the practitioner's life that much easier, and we cap off our study with an advisory digest for the weary.
Wanna learn more? The full paper is here.
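The flavor of this parameter-robustness claim is easy to reproduce in miniature. Below is a toy sketch of my own (not the paper's experimental setup): a bare-bones generational genetic algorithm on the classic OneMax problem, where fitness is simply the number of 1-bits in the genome. All function names, defaults, and the three parameter settings are illustrative assumptions. The point is that widely different choices of population size, tournament size, and crossover/mutation rates all do well:

```python
import random

def one_max(bits):
    """Toy fitness: the number of 1-bits (the classic OneMax problem)."""
    return sum(bits)

def evolve(pop_size, generations, tournament_size, p_crossover, p_mutation,
           genome_len=50, seed=0):
    """A bare-bones generational GA; every default here is illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: best of a random sample, twice.
            p1 = max(rng.sample(pop, tournament_size), key=one_max)[:]
            p2 = max(rng.sample(pop, tournament_size), key=one_max)[:]
            if rng.random() < p_crossover:         # one-point crossover
                cut = rng.randrange(1, genome_len)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(genome_len):
                    if rng.random() < p_mutation:  # bit-flip mutation
                        child[i] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(one_max(ind) for ind in pop)

# Three quite different parameter settings, all of which work:
for params in [(40, 60, 2, 0.9, 0.02),
               (100, 40, 4, 0.7, 0.01),
               (60, 80, 3, 0.5, 0.05)]:
    print(params, "->", evolve(*params))
```

Each run prints a best fitness at or near the maximum of 50, despite the parameters differing by factors of two or more.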
Excerpt from my book Machine Nature
“He wanted to dream a man: he wanted to dream him with minute integrity and insert him into reality.” This was the goal of the silent man who came from the South, in Jorge Luis Borges’s short story, “The Circular Ruins.” From Pygmalion, Frankenstein, and the Golem to Star Trek’s Lieutenant Commander Data, the dream of administering the breath of life has fascinated humankind since antiquity.
We’ve seen how human-made systems can be made to evolve, to learn, to adapt, and to develop, as well as to exhibit a host of other characteristics that are usually not associated with machines, but rather with living beings. Can our creations one day take on a life of their own? This question moved from the realm of science fiction to that of science with the advent of the field known as artificial life. The term was coined by Christopher G. Langton, organizer of the first artificial life conference, which took place in Los Alamos in 1987.
“Artificial Life,” wrote Langton (in the proceedings of the second conference), “is a field of study devoted to understanding life by attempting to abstract the fundamental dynamical principles underlying biological phenomena, and recreating these dynamics in other physical media — such as computers — making them accessible to new kinds of experimental manipulation and testing.” While biological research is essentially analytic, trying to break down complex phenomena into their basic components, artificial life is synthetic, attempting to construct phenomena from their elemental units, as such adding powerful new tools to the scientific toolkit. This is, however, only part of the field’s mission. As put forward by Langton “In addition to providing new ways to study the biological phenomena associated with life here on Earth, life-as-we-know-it, Artificial Life allows us to extend our studies to the larger domain of the ‘bio-logic’ of possible life, life-as-it-could-be, whatever it might be made of and wherever it might be found in the universe.”
Before talking about artificial life, shouldn’t we try to define what life is? Well … No. I’ll steer clear of this issue since it remains quite controversial in science; as things stand today, there is no agreed-upon scientific definition of Life. For now, we’ll just have to accept its being one of those “you-know-it-when-you-see-it” qualities: Your dog is obviously alive, while your washing machine is obviously not. The question is, then, can we create something that is “obviously alive”?
This question seems clear enough. Or is it? You’d think that it’s the “life” part of “artificial life” that eludes our definition. Well, there’s a further subtlety: What exactly does the “artificial” part mean? If you look up “artificial” in the dictionary (Merriam-Webster online), you’ll find a number of definitions. So let’s see which one sits well with “artificial life.” Artificial might mean “lacking in natural or spontaneous quality <an artificial smile> <an artificial excitement>.” This can’t be it. An extraterrestrial might be unnatural and unspontaneous, and yet obviously alive; artificial life cannot be about life that lacks in natural or spontaneous quality. What about “imitation, sham <artificial flavor>”? This is no good: By definition our putative “artificially alive” creature is going to be an imitation in some sense; the point is, in what sense, and how good an imitation (“That’s not a real dog? I never would’ve guessed in a million years!”). Saying that artificial life is synonymous with imitation life doesn’t get us very far. We’re obviously trying to imitate life, in the proverbial “imitation is the sincerest flattery” sense.
I’m not merely engaging here in armchair philosophy, but rather trying to arrive at the essence of what “artificial life” means. What seems to be missing in the two definitions of the previous paragraph is the creation aspect. So let’s try what is actually the first definition appearing under “artificial”: “man-made <an artificial limb> <artificial diamonds>.” Ah! Now we’re cooking. This seems to be the right one. It accords perfectly with the definition given by Langton in the proceedings of the first artificial-life conference: “The study of man-made systems that exhibit behaviors characteristic of natural living systems.”
Artificial life is thus life created by humans rather than by Nature. Simple. Well ... I hate to be so fussy, but “human-made” can mean at least three different things. One way to create life is through the union of a male, known as “daddy,” and a female, known as “mommy,” thus giving life to a male or a female known as “baby.” You might be frowning now, thinking to yourself that it’s rather silly of me to even mention this since this is quite obviously not artificial life. This is the natural way of creating life, whatever that may mean. But then again, what about artificial insemination? This involves the introduction of semen into the uterus or oviduct by other than natural means, yet no one would claim that this produces artificial babies. Nonetheless, there is a definite intervention by humans, thus rendering this process somewhat less than 100 percent natural.
We often invoke the term “human-made” when speaking of objects such as cars. This image of artificial life might involve some kind of production line, where heads, arms, feet, and torsos are assembled into complete beings, after which the proverbial switch is pulled, thus breathing life into them. (The assembly line need not, of course, be limited to humanoid life; it could in fact produce a range of beings, from artificial bacteria to artificial whales.) This is the most common image where fiction is concerned (Victor Frankenstein creating a humanoid monster, for one).
There is yet a third way by which life may be created by humans: through the process of evolution, and most likely open-ended at that (as we discussed in Chapter 12). This raises an interesting question: While we may sow the seeds of life, setting off such an open-ended process, whatever emerges — numerous generations later — might be far removed from our original design; just how “human-made,” then, is this form of life?
My intention in the somewhat philosophical discussion above has been to show you just how intricate this seemingly simple term — artificial life — really is. The concept of “artificial” is quite elusive where life is concerned, and even if we agree on emphasizing the “creation” aspect, there are a number of fundamentally different modes of creation.
Artificial life might in fact be an oxymoron. After all, how can life be artificial? If something is truly alive — assuming we can somehow agree on this fact — then what’s artificial about it? Even if we take what could arguably be considered the most artificial route of creation, that of the assembly line, once we’re done, the creature is no longer artificially alive; it’s alive — period. This takes us right back to Langton’s definition of artificial life, life-as-it-could-be, “whatever it might be made of and wherever it might be found in the universe.” Whether flesh-and-blood, man-woman-made, or nuts-and-bolts, factory-made, life is life. Perhaps rather than speak of artificial life, which is somewhat problematic, we should talk about life created by humans. In fact, even “created” might be too strong a word (think of the evolutionary scenario for one). Let’s settle for human-induced life. This emphasizes the relevant difference between Nature and humans, namely, the manner by which life arrives on the scene; the end result though is — in both cases — bona fide life.
Life may be many things: perhaps “a tale told by an idiot, full of sound and fury, signifying nothing” (Shakespeare), or maybe “colour and warmth and light, and a striving evermore for these” (Julian Grenfell), or indeed “a glorious cycle of song, a medley of extemporanea” (Dorothy Parker). At the heart of artificial life research lies the belief that whatever life is, it is not about carbon; life is not about the medium but about the mediated. It is a process that we do not yet understand in full, but which we may nonetheless be able to create, or perhaps we should say re-create: After all, Nature has beaten us to it.
Let’s get down to earth now and consider some of the issues involved in inducing life. As I’ve discussed above there are at least three ways of going about this. We might imagine some far-future extension of current medical practice (such as artificial insemination) that will result in a new form of life. Since this involves many technical biological and medical details, I think I’ll leave it at that for the present discussion.
The second way to induce life is to produce a full-blown living being. As I briefly mentioned in Chapter 11, there is much ongoing research on mimicking Nature’s gadgets, building such devices as eyes, ears, and hearts. While many of these are intended to serve as prostheses for humans, some are also used in robots. Perhaps at some point in the future we’ll be in possession of enough parts to construct an entire being. This might in fact come sooner rather than later: While speaking of “inducing life” usually tends to evoke in us images of humanoid life, let’s not be Homo sapiens chauvinists. As I’ve mentioned time and again, constructing the equivalent of even a single-celled organism would be a huge achievement (not to mention a beetle or a fly), and this might come about sooner than we expect. (John Wyndham’s short story Female of the Species provides an amusingly gruesome vision of this production-line scenario. When visited by two inspectors of the Society for the Suppression of the Maltreatment of Animals, Doctor Dixon — the Frankenstein-like protagonist — explains: “The crux of this is that I have not, as you are suspecting, either grafted, or readjusted, nor in any way distorted living forms. I have built them.”)
And then there’s the third way of inducing life, by creating the necessary conditions for open-ended evolution to take place, building on the four principles of evolution we noted in Chapter 1.
Tierra inventor Thomas Ray wrote, “I would consider a system to be living if it is self-replicating, and capable of open-ended evolution.” (The Tierran world was in fact set up to discover not how self-replication arrives on the scene, but what happens after it does, namely, how does a diverse ecosystem come to evolve.)
The study of self-replicating structures in human-made (or human-induced) systems began in the late 1940s, when John von Neumann — one of the twentieth century’s most eminent mathematicians and physicists — posed the question of whether a machine can self-replicate (produce copies of itself). He wrote that, “living organisms are very complicated aggregations of elementary parts, and by any reasonable theory of probability or thermodynamics highly improbable. That they should occur in the world at all is a miracle of the first magnitude; the only thing which removes, or mitigates, this miracle is that they reproduce themselves. Therefore, if by any peculiar accident there should ever be one of them, from there on the rules of probability do not apply, and there will be many of them, at least if the milieu is reasonable.”
Von Neumann was not interested in building an actual machine, but rather in studying the theoretical feasibility of self-replication from a mathematical point of view. He succeeded in proving (mathematically) that machines can self-replicate, laying down along the way a number of fundamental principles involved in this process. During the decade following his work (in the 1950s), when the basic genetic mechanisms had begun to unfold, it turned out that Nature had “adopted” von Neumann’s principles. (It is quite fascinating to see how his results predated the actual biological findings.)
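Von Neumann's central trick was to use the machine's description twice: once interpreted as building instructions, and once copied blindly as data (precisely the dual role that DNA turned out to play). A software "quine", a program that prints its own source, captures this duality in miniature. The two-line Python toy below is merely illustrative, not von Neumann's actual construction:

```python
# The string `genome` is used twice: interpreted as instructions (via the
# format operation) and copied verbatim as data (via %r). The result,
# `replica`, is an exact copy of this two-line program.
genome = 'genome = %r\nreplica = genome %% genome'
replica = genome % genome
print(replica)
```

Running `replica` as a program produces `replica` again: the copy can copy itself, which is the essence of self-replication.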
The study of self-replication has been taking place now for more than half a century. Much of this work is in fact quite separate from artificial life and is motivated by the desire to understand the fundamental principles involved in this process. This research might better our understanding of self-replication in Nature, as well as find many technological applications. There is much talk today of nanotechnology, where self-replication is of vital import. You’d like to be able to build one tiny machine, which would then sally forth and multiply. For example, you’d inject a small nanomachine into your body to fight off some mean virus, and this nanomachine would be able to self-replicate, thereby increasing the size of your internal army. One of my favorite application examples is the self-replicating lunar factory, which is not drawn from some science-fiction novel but was actually proposed by NASA researchers in 1980. Imagine planting a “seed” factory on the moon that would then self-replicate to populate a large surface, using local lunar material. This multitude of factories could manufacture necessary products for lunar settlers or for shipping back to Earth. And all you have to do is plant the first one.
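The appeal of schemes like the seed factory comes down to simple arithmetic: self-replication yields exponential growth. Assuming, purely hypothetically, that every factory builds one copy of itself per period, the count doubles each period:

```python
# Hypothetical doubling scenario: each factory builds one copy of itself
# per period, so the population doubles. How long until a million exist?
factories, periods = 1, 0
while factories < 1_000_000:
    factories *= 2
    periods += 1
print(periods, factories)  # -> 20 1048576
```

A mere twenty doublings take a single seed past the million mark, which is why you only have to plant the first one.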
On our way to inducing life, self-replication is of crucial import. We know a bit more about this issue today than we did 50 years ago, though there is still no lack of unanswered questions, which is music to researchers’ ears. The next item on our life-inducing agenda is trying to come up with an open-ended context for our self-replicating critters (just as Ray set out to do with his Tierran world); this issue has both genotypic and phenotypic aspects (Chapter 12).
The phenotypic aspect of open-endedness concerns the environment. The grand challenges posed by an open-ended environment vis-à-vis its inhabitants are to be able to move around and to sense the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. What seems to us to be really difficult — as in playing chess — may in fact be quite easy once this essence of being and reacting is available. Remember, elephants don’t play chess (Chapter 11).
Chess is in fact quite an instructive example. It has been one of the holy grails in the field of artificial intelligence since the 1950s. In those early days a number of researchers had managed to come up with programs that were able to play a decent game, winning against average human players (though they were easily beaten by chess experts). The ruling opinion at the time was that very soon there would be a chess machine able to beat any human player. The problem, though, turned out to be harder than believed, demonstrating what is known as the fallacy of the first step: It’s easier to go from ignorance to mediocrity than it is to go from mediocrity to excellence (think of the difference between playing tennis and playing tennis well). Mediocre chess-playing computers were available as far back as the 1960s, though only very recently has a computer (IBM’s Deep Blue) been able to beat a world champion (Garry Kasparov).
It took 40 years to come up with a good chess-playing computer, and frankly — chess is easy! I’m not saying it doesn’t require a form of genius to excel at the game, nor am I belittling the arduous task faced by the designers of a chess-playing machine; I’m referring to the facileness of the chess environment; it is yet another illustrative example of non-open-endedness. Chess is defined by a very small number of well-known, fixed rules, and there’s really no dynamically changing environment to speak of (in this sense it is similar to our basketball environment of Chapter 12).
How do we increase the open-endedness of the environment? Thomas Ray took what is perhaps the first shot at providing an answer to this question with his simulated Tierra world. Another possibility is to subject our critter to the most complex environment known to date: ours. This is the route taken by adaptive-robotics researchers. In Chapter 4 we saw how real robots are subjected to a real-world environment (as opposed to simulated robots in a simulated computer environment); for this reason the approach is also known as situated or embodied robotics.
An argument that is often raised against embodied robotics is that it is too costly and too slow. You’d be much better off running the evolutionary process in a simulated environment within the computer, plucking out but the end result — the best simulated robot that has evolved — and implementing it in the real world. The problem is that often this does not work: When you go from the simulated to the real, the robot no longer functions properly. Our environment is full of many hidden complexities that often escape our notice, rendering it very hard to implement them in a computer; it’s just plain easier to use the real world.
Nature’s open-endedness manifests itself not only at the phenotypic level but also at the genotypic level; it can tinker with the genome so as to produce entirely novel designs, which give rise to new phenotypes better able to rough it. This point has also been receiving increased attention of late: How can we set up an evolutionary scenario in which fundamental genomic changes can occur? For example, in Chapter 4 we evolved only the behavior of the robots, their small, neural-network brains. Their body, on the other hand, did not change at all, which is nothing like natural evolution. Nature possesses the ability to bring about not only behavioral changes but also morphological modifications in her creatures. A number of researchers have recently begun looking into the possibility of doing this for robots as well, evolving both behavior and morphology. While quite rudimentary at the moment, this is yet another step toward increased open-endedness.
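To make the behavior-plus-morphology idea concrete, here is a deliberately tiny sketch, entirely my own invention rather than any published system: a genome pairs a morphological trait (the number of sensors) with behavioral parameters (one weight per sensor), and a simple mutate-and-keep-if-better loop improves both at once. The fitness function is a hypothetical stand-in, with a small cost per sensor so that leaner bodies can win:

```python
import random

rng = random.Random(42)

def toy_fitness(n_sensors, weights):
    """Hypothetical stand-in for "how well the robot gets around":
    rewards weights near 0.5 and charges a small cost per sensor."""
    error = sum((w - 0.5) ** 2 for w in weights)
    return -error - 0.1 * n_sensors

def mutate(genome):
    n, w = genome
    w = [x + rng.gauss(0, 0.1) for x in w]          # behavioral mutation
    if rng.random() < 0.2:                          # morphological mutation
        if rng.random() < 0.5 and n > 1:
            n, w = n - 1, w[:-1]                    # drop a sensor
        else:
            n, w = n + 1, w + [rng.uniform(-1, 1)]  # grow a new sensor
    return n, w

genome = (5, [rng.uniform(-1, 1) for _ in range(5)])
start = best = toy_fitness(*genome)
for _ in range(500):               # (1+1) mutate-and-select "evolution"
    child = mutate(genome)
    f = toy_fitness(*child)
    if f >= best:                  # keep the child if it is no worse
        genome, best = child, f
print(f"{start:.2f} -> {best:.2f}, sensors: {genome[0]}")
```

Even this crude loop improves fitness by tuning weights and resizing the body together, a faint echo of evolving both brain and morphology.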
Next to the natural world a new universe has sprung up in the past few years, which is both complex and open-ended: the Internet. It is evolving at a breathtaking speed, already exhibiting enough complexity to merit the attention of scigineers. This is not surprising. The Internet’s evolution is mediated by self-proclaimed intelligent beings known as Homo sapiens. This process is more akin to Lamarckian evolution, where a beneficial survival trick can be immediately incorporated within the evolving population.
Will we someday see the rise of network life? Even as you read these lines, there are thousands and thousands of small programs — known as agents — roaming the network, seeking to find information that will appease their human masters. Currently they are quite limited, lacking in both intelligence and autonomy. Little by little, though, they might develop into more autonomous critters. This might come about by employing some of the techniques we discussed in this book, giving rise to what I call Egents, for Evolving Agents, and double AAgents, for Adaptive Agents. These agents will be denizens of the network universe, whereas we will not; it is they who will be in their element. We may have built the house, but we are not the ones living in it. I just hope those double AAgents will work for you, their master, and not for some unknown party behind the cyber curtain.
The borders between the living and the nonliving, between the Nature-made and the human-made appear to be constantly blurring.
As in dreams.
The silent man who came from the South eventually succeeded in dreaming a man and inserting him into reality. And what became of the dreamer? “With relief, with humiliation, with terror, he understood that he too was a mere appearance, dreamt by another.”
Excerpt from my book Machine Nature
Science and engineering have traditionally proceeded along separate tracks. The scientist is a detective who’s up against the mysteries of Nature: He analyzes natural processes, wishing to explain their workings, ultimately seeking to predict their future behavior. Scientists ask questions such as: What goes on inside the Sun? And how long will it keep on burning? How does the weather system work? And how can we predict whether it will rain tomorrow or not? What are the fundamental physical laws that underpin the workings of the known universe?
The engineer, on the other hand, is a builder: Faced with social and economic needs, she tries to create useful artifacts. Engineers ask questions such as: How can we build a car with a cruising speed of 150 kilometers per hour, a fuel consumption of 20 kilometers per liter, and a price tag of no more than $8000? How do we design a computer chip that is twice as fast as the fastest extant chip? How can we build an autonomous lawn mower? “To put it briefly,” wrote Lewis Wolpert in The Unnatural Nature of Science, “science produces ideas whereas technology results in the production of usable objects.” And if I may add my own little epigram, science is about making sense whereas engineering is about making cents ...
In a chapter entitled “Technology is not Science,” Wolpert discussed the differences between the two, noting that technology is very much older than science, and that science did almost nothing to aid technology until the nineteenth century. “Technology may well have used a series of ad hoc hypotheses and conjectures, but these were entirely directed to practical ends and not to understanding,” he wrote. Humans have been able to construct artifacts — such as tools and arms — and improve their existence via agriculture and animal domestication thousands of years before the arrival of modern science (in the sixteenth and seventeenth centuries). Though engineers have only recently begun to put science to use, scientists had always relied on the existing technology. To quote Wolpert: “Science by contrast has always been heavily dependent on the available technology, both for ideas and for apparatus. Technology has had a profound influence on science, whereas the converse has seldom been the case until quite recently.”
The emergence of technology long before science is not at all surprising. “The goals of the ordinary person in those times,” wrote Wolpert, “were practical ends such as sowing and hunting, and that practical orientation does not serve pure knowledge. Our brains have been selected to help us survive in a complex environment; the generation of scientific ideas plays no role in this process.” Thomas S. Kuhn considered science and technology in one of the most influential works in the philosophy of science, The Structure of Scientific Revolutions, writing: “Just how special that community must be if science is to survive and grow may be indicated by the very tenuousness of humanity’s hold on the scientific enterprise. Every civilization of which we have records has possessed a technology, an art, a religion, a political system, laws, and so on. In many cases those facets of civilization have been as developed as our own. But only the civilizations that descend from Hellenic Greece have possessed more than the most rudimentary science. The bulk of scientific knowledge is a product of Europe in the last four centuries. No other place and time has supported the very special communities from which scientific productivity comes.” So perhaps we should count ourselves lucky to have science at all!
During the twentieth century the use of scientific knowledge in advancing the state of the art of our technology has picked up quite dramatically. Today all but the simplest artifacts rest on strong scientific foundations, everything from computer chips to automobile tires, through T-shirts, sugarless bubble gum, and space shuttles.
Science and engineering go hand in hand nowadays, both drinking from and helping to fill the other’s fountain. We’ve seen how engineers not only apply our current scientific understanding of Nature in order to build better artifacts, but are indeed coming full circle, trying to make these objects more Naturelike. Biology serves as a source of inspiration, with processes such as evolution, learning, and ontogeny implemented in artificial media. Nature can even be directly co-opted for engineering purposes, as with the use of DNA molecules to solve problems in computing.
The betrothal of science and engineering, and the ensuing period of blissful courtship, have finally led, in my opinion, to marriage. I believe that the recent years have seen the rise of a new kind of professional (and profession): the scigineer, a combination of both scientist and engineer, holding a test tube in one hand and a proverbial slide rule in the other.
What is a scigineer? Let me go about explaining this by way of example. In Chapter 2, we saw how computer programs in the form of trees can be evolved, noting that evolution tends to produce “spaghetti” programs: huge trees with lots of weird branches and offshoots. If the program works to your satisfaction, you can of course simply go ahead and use it; if you want to understand what makes it tick, though, then you’re in a position that’s rather like that of a biologist trying to decode our own program (the human genome). We even noted that when you delve into these evolved programs, you frequently find loads of “junk”: computer code that is of no use at all, a situation which is similar to Nature. Our genomes also contain junk code: unused portions of our DNA program.
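The junk-code phenomenon is easy to demonstrate on paper. In the toy sketch below (an illustration of the idea, not code from the book), an evolved arithmetic tree carries neutral subtrees such as (x + 0) and (z - z) that contribute nothing to the output; a simple pattern-based simplifier trims the tree from eleven nodes down to three:

```python
# Expressions are nested tuples: ('+', 'x', 0) means x + 0.

def simplify(e):
    """Remove a few common neutral patterns; real GP simplifiers do more."""
    if not isinstance(e, tuple):
        return e
    op, a, b = e[0], simplify(e[1]), simplify(e[2])
    if op == '+' and b == 0: return a
    if op == '+' and a == 0: return b
    if op == '*' and b == 1: return a
    if op == '*' and a == 1: return b
    if op == '*' and 0 in (a, b): return 0
    if op == '-' and b == 0: return a
    if op == '-' and a == b: return 0
    return (op, a, b)

def size(e):
    """Count the nodes in an expression tree."""
    return 1 if not isinstance(e, tuple) else 1 + size(e[1]) + size(e[2])

# A "bloated" evolved expression that is really just x * y in disguise:
bloated = ('+', ('*', ('+', 'x', 0), ('*', 'y', 1)), ('-', 'z', 'z'))
trimmed = simplify(bloated)
print(size(bloated), "->", size(trimmed), trimmed)  # -> 11 -> 3 ('*', 'x', 'y')
```

Evolution has no pressure to produce tidy code, only working code; peeling away the junk to see what a program actually computes is exactly the kind of analysis job that calls for the scientist's hat.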
The scigineer has two hats — that of a scientist and that of an engineer — which she constantly alternates. First, she puts on the engineer’s hat, picks up her slide rule, and sets the stage, say, for the evolution of computer programs; then, she puts on the scientist’s hat and the white coat, setting out to analyze the creatures (programs) that have emerged in her artificial universe.
The robots of Chapter 4 also constitute a case in point. They are an artifact created by the scigineer, who subjects them to an environment in which they evolve and learn. We saw how they can come to avoid obstacles, but exactly how do they accomplish this? Though we’re talking about an artifact — an object created by humans — it has evolved into something that we do not fully comprehend. Even though as stage designers we seem to have a privileged position, the actors have taken their own routes so as to better themselves. The scigineer must now take out his scientific toolbox in order to analyze this little robotic creature, just as a scientist analyzes a cockroach. Though such a current-day robot is still a far cry from a cockroach, it’s already complex enough to require the donning of a white coat.
Let me give you another well-known example, that of the Tierra world. Tierra is a virtual universe — embedded within a computer — that was set up in an attempt to explore the idea of open-ended evolution. It comprises computer programs that can evolve; unlike those of Chapter 2, however, where an explicit goal (and hence fitness criterion) is imposed by the user (for example, compute taxes), the Tierran creatures receive no such guidance. Rather, they compete for the natural resources of their computerized environment: time and space. You may remember from Chapter 6 that a standard computer consists of two major elements: the processor, which actually runs the programs, and the memory, the storehouse that acts as a repository for programs. These two components represent Tierra’s natural resources, and — just as in Nature — they are limited: The processor can only run one program at a given moment, and the memory can contain no more than a certain number of programs. This gives rise to a fierce battle for survival, the Tierran creatures having to vie for the processor’s precious time and for a place in the jungle known as memory. Failure means death: A program that is unsuccessful in procuring these resources disappears from the evolutionary stage.
Tierra was invented not by a computing scientist but by an ecologist, Thomas Ray, who had worked for years in the Costa Rican rain forest before turning from natural evolution to digital evolution. Ray inoculated his Tierran world with a single organism — a self-replicating program called the “ancestor,” which was able to co-opt the processor to produce copies of itself elsewhere in memory. This organism, a program written by Ray himself, was the only engineered (human-made) creature in Tierra. The replication process is not perfect: Errors, or mutations, may occur, thus driving the evolutionary process. Ray then set his system loose and witnessed the emergence of an ecosystem-in-a-bottle, right there inside his computer, including organisms of various sizes, and such beasties as parasites and hyperparasites. Ray wrote that “much of the evolution in the system consists of the creatures discovering ways to exploit one another. The creatures invent their own fitness functions through adaptation to their biotic environment.”
Large programs such as the ancestor have several instructions that form part of their “body”; these program instructions are used to copy the organism from one memory location to another, thus effecting replication. The evolved parasites are small creatures (programs) that use the replication instructions of such larger organisms to self-replicate. In this manner they proliferate rapidly in the memory jungle without the need for the excess replication code. As in Nature, the evolved ecology exhibits a delicate balance: If all large creatures were to disappear, then the parasites would die, having no replication code to appropriate. Tierra had even managed to outdo its creator, who wrote: “Comparison to the creatures that have evolved shows that the one I designed is not a particularly clever one.”
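The logic of this memory-jungle economy can be caricatured in a few lines of code. The sketch below is a drastically simplified, hypothetical model inspired by Tierra, not Ray's actual system: organisms receive CPU time at random, replicate once they have accumulated time equal to their genome size, and a "reaper" removes the oldest inhabitant when memory is full. Because small genomes replicate faster per unit of CPU, parasite-sized creatures come to dominate the soup:

```python
import random

rng = random.Random(7)

CAPACITY = 100                       # the memory "jungle" is finite
soup = ([{"size": 20, "cpu": 0} for _ in range(10)] +   # small creatures
        [{"size": 80, "cpu": 0} for _ in range(10)])    # large creatures

for step in range(5000):
    org = rng.choice(soup)           # dole out one slice of CPU time
    org["cpu"] += 1
    if org["cpu"] >= org["size"]:    # enough time accumulated: replicate
        org["cpu"] = 0
        soup.append({"size": org["size"], "cpu": 0})
        if len(soup) > CAPACITY:     # the "reaper": oldest organism dies
            soup.pop(0)

small = sum(o["size"] == 20 for o in soup)
print(f"{small} of {len(soup)} surviving organisms are small")
```

This toy omits everything interesting about Tierra (there is no code, no mutation, and no way for a parasite to borrow another's instructions), but it shows how scarcity of time and space alone already creates selection pressure.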
Ray first engineered this world, which he then proceeded to analyze as a scientist: “Trained biologists will tend to view synthetic life in the same terms that they have come to know organic life. Having been trained as an ecologist and evolutionist, I have seen in my synthetic communities, many of the ecological and evolutionary properties that are well known from natural communities.” (If you’re interested in learning how the humble Tierran beginnings ultimately led to the rise of the “TechnoCore” artificial intelligences [AIs], I recommend The Rise of Endymion — the final volume of Dan Simmons’s Hyperion tetralogy.)
In April 1998, while leafing through the weekly issue of Science, I was surprised to find two out-of-the-ordinary articles. Science, one of the top two scientific journals (the other being Nature), publishes almost exclusively hard-core scientific papers in physics, chemistry, biology, and the like. If your paper is good enough to grace Science’s pages, then it’s probably about the natural world — the object of scientific study. Yet in browsing this particular issue, I suddenly came across a couple of articles that dealt with an artificial world, created entirely by humans: the World Wide Web. One article looked into the efficiency of search tools, while the other studied patterns of behavior as information foragers move from one hyperlinked document to the next. It’s almost as if we were talking about a tropical jungle.
This is a cogent example of scigineering that is totally unrelated to biology or biological inspiration. Here is a universe created entirely by humans, which has become so complex — much more so than a car or an elevator — and so interesting in and of itself, that it merits the attention of scientists — and the consecration of Science. We’ve engineered the World Wide Web, and then we turn to study this brave new world. The era of scigineering is upon us.
The rival journal, Nature, waited until August 1999 to finally “give in.” In an article entitled “Genome Complexity, Robustness and Genetic Interactions in Digital Organisms,” Richard Lenski, Charles Ofria, Travis Collier, and Christoph Adami explored the effects of genetic mutations in both simple and complex digital organisms, which inhabited the artificial, Tierra-like world called “Avida.” Commenting on their work, Inman Harvey from the Centre for the Study of Evolution at the University of Sussex cautioned that “considerable debate can be expected before a consensus is reached on just what is necessary for results from a synthesized world to be seen as relevant to the natural world.” The scigineer might study her world and glean much about it, but she must be cautious in applying her conclusions to the world at large.
The scigineer has one up on the scientist in that he can render his world easier to analyze, whereas a scientist must make do with what Nature affords him. Evolutionists would love to have the entire Tree of Life at their disposal, including all the lost species, yet this is but wishful thinking; geological reality, alas, is harsher on them, revealing but bits and pieces of the whole story. With artificial worlds, though, wishes are granted: You can easily save the entire evolutionary history of your artificial creatures to later analyze it at your leisure.
Remember from the previous chapter how the protagonist of Permutation City describes the result of the Autoverse experiment — the result of billions of years of evolution in this artificial world? He says: “All demonstrably [my emphasis] descended from a single organism which lived three billion years ago ...” In this artificial planet, one can demonstrate that all the organisms have descended from a single origin since the entire evolutionary trace is available. The scigineer might not possess perfect knowledge of his engineered world, but he at least has the power — unlike scientists — to render his analysis job easier.
In reminiscing about his illustrious career, Isaac Newton remarked: “I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.”
We’re no longer content to walk the shores of Nature’s oceans of truth, finding whatever pebbles may have been laid for us. We’re now creating new oceans, and with them we beget new shores to walk.
Excerpt from my book Machine Nature
Tyger! Tyger! burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?
Who indeed framed the tyger? Two hundred years after William Blake wrote the beautiful opening stanza of The Tyger, we have a better idea of how tigers come about: through the process of evolution. There is neither a Master Plan, nor a "hand of god," nor any ultimate goal; the driving force is the short-term objective of survival, with the process consisting of the slow accumulation over millennia of numerous small — yet profitable — variations. In the forests of evolution burn many a creature, with nature's immortal evolutionary hand slowly framing the fearful symmetry of the tiger.
Natural evolution is an open-ended process, and is thus distinguished from artificial evolution which is guided, admitting a "hand of god": the (human) user who defines the problem to be solved. When we apply evolution with a well-defined goal in mind — such as designing a bridge, constructing a robotic brain, or developing a computer program — what we are doing is akin to animal husbandry. Farmers have been using the power of evolution for hundreds of years, in effect doing evolutionary computation on domestic animals. In order to "design," say, a faster horse they mate swift stallions with speedy mares, seeking to see even faster offspring emerge from this coupling. This is quite similar to the use of evolutionary techniques discussed in this book: the farmer defines the fitness criterion (say, speed) and performs the selection process by hand (by choosing the fastest individuals in the equine population); he then lets nature work out the genetic details involved in the coupling act. It's rather interesting to note that farmers and breeders had started using this method long before either evolution or genetics came under the scrutiny of science.
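The farmer's procedure described above — define a fitness criterion (speed), select the fastest individuals by hand, and let nature handle the genetic details of the coupling — can be sketched as a tiny evolutionary loop. This is a minimal illustrative sketch, not code from the book: the numbers, the blending model of inheritance, and the `breed` function are all my assumptions.

```python
import random

def breed(population, generations=50, keep=10, mutation=0.5):
    """Evolve a list of horse 'speeds': select the fastest, then breed them.

    Selection is the farmer's job (keep the top `keep` individuals);
    inheritance is modeled, purely for illustration, as blending two
    parents plus a small Gaussian variation standing in for mutation.
    """
    for _ in range(generations):
        # Selection: the farmer keeps only the fastest individuals.
        parents = sorted(population, reverse=True)[:keep]
        # Reproduction: each offspring blends two random parents,
        # with nature adding a small random variation.
        population = [
            (random.choice(parents) + random.choice(parents)) / 2
            + random.gauss(0, mutation)
            for _ in range(len(population))
        ]
    return population

random.seed(0)
herd = [random.uniform(40, 50) for _ in range(100)]  # initial speeds, km/h
evolved = breed(herd)
print(round(max(herd), 1), round(max(evolved), 1))
```

After a few dozen generations the fastest evolved horse outruns the fastest founder — but, as the chapter goes on to argue, no amount of running this loop will ever produce a tiger: the fitness criterion fixes in advance what kind of creature can emerge.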
Farmers can start out with slow horses and evolve fast ones — but can they evolve tigers? Engineers can start out with bad bridges and evolve good ones — but can they evolve a town like Cambridge? Robotics researchers can evolve robots that manage to amble decently — but can they evolve a robotic housemaid? Programmers can evolve computer programs that solve various well-defined problems — but can they evolve truly intelligent software? Natural evolution has done it all: complex organisms, sophisticated structures, intelligent beings; and it did so by being open-minded, ready to accommodate any improvement that came along.
The Merriam-Webster online dictionary (www.m-w.com) defines "open-ended" as something that is "not rigorously fixed: as a : adaptable to the developing needs of a situation b: permitting or designed to permit spontaneous and unguided responses." Open-endedness is thus the flip side of guidedness, and it is a crucial aspect of natural evolution. Since nature has no specific goal in mind she can easily change course so as to face the winds of change, and in so doing she explores numerous designs out of what is essentially an infinitude of possibilities. Man, on the other hand, even when using evolutionary techniques does have an ultimate goal in mind — be it a retractable bridge or a program that computes taxes.
When we apply evolutionary techniques the ingredients are all there: a (possibly huge) population of individuals, survival of the fittest, and the equivalent of genetic operators. Yet the hand of god is ever-present in the background: at every step of the way an individual's fate is decided in accordance with its ability to perform in the arena set up by the puppet master; and the master wants his puppets to do some very specific tricks. This places a fundamental a priori limit on what evolution can achieve: if we set about to find fast horses, then we might succeed in doing so — but we'll not suddenly see the emergence of tigers.
Nature's open-endedness runs deeper, though, than the mere absence of a goal and a god — of a teleology and a master. It has to do with her ability not only to play the game but indeed to change the rules altogether. Let me drive this point home by way of a sportive example. The game of basketball is played on a court 94 feet long by 50 feet wide between two opposing teams of five players, who score by tossing an inflated ball through a raised goal. The rules are well known and rigid, with changes being rare, minor (for example, adding the three-point shot), and human-mediated (say, the NBA committee). This scenario is analogous to that of guided evolution: a human designer sets the stage (or in this case court) that gives rise to a (fiercely competitive) evolutionary process, from which but one kind of creature may emerge: basketball players. The process is not open-ended since there is a precisely defined goal (scoring more points), with virtually immutable rules. Though superb basketball players can (and do) evolve, this arena does not give rise to first-rate opera singers.
Playing nature's "basketball" game is quite different. For one thing, there is no clear objective; at best, one can speak of a very basic goal, that of coming out of the match alive. What's more — and this is where open-endedness comes into play — nature keeps changing the rules of the game, both in time and in space. Being seven foot tall might be good at a certain place and time, whereas elsewhere or else-when it might be downright deleterious. And sometimes the rules are such that having a superb tenor voice is a match winner. Nature's game of basketball is more of a meta-game, where you want to score more points — but have to figure out how points are scored.
In Chapter 1 we discussed an important distinction in nature, that between genotype and phenotype. An organism's genotype is its genetic constitution, the DNA chain that contains the instructions necessary for the making of the individual. The phenotype is the mature organism that emerges through execution of the instructions written in the genotype. It is the phenotype that engages in the battle for survival, whereas it is the genotype — safely cached in each cell of the organism — that accrues the evolutionary benefits.
Setting a specific goal — as with artificial, guided evolution — means that there is a highly restricted environment; the basketball-player phenotype faces an environment in which it is demanded to perform a very specific task: playing basketball. Natural environments are not only much more complex but also highly dynamic — the phenotypes must face ever-changing circumstances.
Nature's open-endedness manifests itself not only at the phenotypic level but also at the genotypic level: not only can the rules of the playground change, but so too can the rules for making players. The genome of a red ant is quite different from that of an orangutan (though as both are branches of the Tree of Life, they also bear many similarities). As we've seen, artificial-evolution scenarios to date are limited, being goal-oriented, with but very little maneuverability in changing the genetic makeup. A bridge genome will always produce a bridge — perhaps a superb one at that — but never a skyscraper. Nature, though, can tinker with the genome, thus changing the underlying construction plan, so as to produce entirely different beings, including skyscrapers (giraffes) and towns (ant colonies). This is a crucial aspect of her open-endedness.
We've seen how artificial evolution is used to design complex objects, which stretch — or overstretch — our classical engineering techniques. The results are often quite impressive and at times those who use them are even reputed to cry out: "Wow, I'd have never come up with such a solution." But this is still at the level of evolving super bridges or superb basketball players; moreover, it might even be limited at that since an entirely novel bridge design or a new form of basketball player might necessitate genomic tinkering that is beyond the system's reach. Can something truly astounding — something entirely new — emerge out of an artificially set stage? In my mind this is one of our grandest challenges, and it may still be many years in the coming. I like to think of this challenge as that of building a system that Knocks Your Socks OFF. Following the time-honored tradition in computing science of coining acronyms, this might be dubbed the KYS OFF challenge — which leads me to wonder whether such a system would kiss us off ...
As our artifacts become more and more complex, so does their design become more arduous. One way out is to employ the powerful process of open-ended evolution. But wait a minute — by definition, that would mean ... removing the designer from the equation! Then who controls the design process — who's the boss? It seems that you can't have your cake and eat it too — something has to give. With guided evolution the guide — or designer — maintains a great deal of control over his system, and though he'll often be overwhelmed by the results obtained, his socks will remain firmly in place. Open-ended evolution might indeed knock your socks off, but at the price of giving up some of that precious control that we've grown used to.
Strangely enough then, it is less design, meaning more open-endedness, that increases our design power. Uhm ... did I just say less design? Actually, you have to set up the stage so as to be more open-ended, that is, you have to design the system to exhibit ... less design! That's the essence of the KYS OFF challenge, which only nature has met so far — but then again, she's been at it for the past three and a half billion years. (While I've been concentrating my discussion of the open-ended versus the guided on evolution this is by no means the only process of interest. Learning, for one, augments a system's open-endedness.)
Open-ended goes hand in hand with less control — though with the potential of more spectacular results. Parents usually want their children to grow up to be independent and able to think for themselves. But in many ways child-raising is open-ended — with no guarantees: What if the child decides to be a rock star? (Result: horrified parents.) Or a doctor? (Result: delighted parents.) In Chapter 4 we discussed the application of biological processes, such as evolution and learning, in the field of adaptive robotics. We saw that one of the central goals is that of attaining more autonomous robots; I doubt, however, that we're ready to see them declare autonomy ...
With an open-ended process not only do you not control the precise shape that the final outcome will take, you're not even sure what this outcome will be. When we look at nature's magnificent products with awe and with envy, we should always keep in mind the billions of years that their production necessitated. If you set off such a process and then patiently wait for a couple of billion years, you might find — a posteriori — lots of wonderful devices, such as eyes, toes, flowers, brains, and wings. You might be quite happy with this plethora of gadgets that will bring you fame and fortune. But this process is open-ended, which means you don't know in advance what the final products will be. In fact, it might not even get off the ground: it took nature almost three billion years before things really started to pick up and the Tree of Life began to grow. What if it never gets off the ground? Or if you simply get tired of waiting?
The possibility of creating an artificial scenario in which open-ended evolution takes place is at the heart of Greg Egan's excellent science fiction novel Permutation City. Explaining to the researcher her mission, the protagonist says: "I want you to construct a seed for a biosphere ... I want you to design a pre-biotic environment — a planetary surface, if you'd like to think of it that way — and one simple organism which you believe would be capable, in time, of evolving into a multitude of species and filling all the potential ecological niches." Having succeeded in creating such a biospheric seed, evolution is then set loose in this artificial universe known as the Autoverse, to work its magic over the eons: "We've given the Autoverse a lot of resources; seven thousand years, for most of us, has been about three billion for Planet Lambert." And the outcome? "There are six hundred and ninety million species currently living on Planet Lambert. All obeying the laws of the Autoverse. All demonstrably descended from a single organism which lived three billion years ago — and whose characteristics I expect you know by heart. Do you honestly believe that anyone could have designed all that?" The answer is no; be it in an artificial or a natural world, open-ended evolution will knock your socks off ...
If we give up our control, can't things get out of hand, leading to a system run amok? This is a tough question, which needs to be addressed on a case-per-case basis. We should come up with fail-safes (à la Asimov's three laws of robotics). We might well wish to place checks and bounds (for example, limit the robots' autonomy). We'd like to maintain the possibility of pulling the plug if things get downright ugly. But this issue is by no means an open-and-shut case: how does one juggle between control and autonomy — between guided and open-ended? This issue will probably gain more prominence as our technology advances, enabling us to build systems that are somewhat less controlled — and less controllable.
Which brings me back to William Blake and the closing lines of "The Tyger," where the "Could" of the first stanza has been conspicuously replaced by "Dare":
Tyger! Tyger! burning bright
In the forests of the night,
What immortal hand or eye
Dare frame thy fearful symmetry?
Dare we frame a tyger?
by Moshe Sipper
We can plainly see why nature is prodigal in variety, though niggard in innovation.
This beautiful statement was written by Charles Darwin in Origin of Species. As a corollary, I might add that, given nature’s prodigal resources, she needn’t be too smart, and — to paraphrase a famed Darwinist — can be seen to trudge along quite blindly.
Yet, in the field of Evolutionary Computation we practice the opposite of this Darwinian tenet, demonstrating at every conference, journal, and whatnot how cleverly prodigal we are in innovation (be it theoretical, algorithmic, or applicative), what an inventive evolutionary system we have designed, using — more often than not — quite niggardly means.
Might this be not the practice of evolutionary computation, but something else? A thing that tastes like evolution, feels like it, maybe even has that familiar smell of evolution — but isn’t?
In light of the argument pleaded before us, perchance the fruitful endeavor deserves a new name, similar yet distinct? Should the jury vote yea, I propose Volutionary Computation, deriving from “volution” (“a rolling or revolving motion”). After all, the metaphorical ball rolls in the search space, and if the system has been set up smartly — it shall end up being on a roll.
(Moreover, volutionary rolls off the tongue, now, doesn’t it?)
Copyright © 2016 by Moshe Sipper
Excerpt from my book Machine Nature
An eagle flaps its wings; a Boeing 747 doesn’t. A dolphin wiggles its body and jiggles its fins — a submarine just has a motor in the back. A dog walks on legs; a Mercedes-Benz rolls on wheels. A rose runs on water and light; a flashlight runs on batteries. A tiger develops in a womb from a single cell to a magnificent multicellular beast — a toy tiger is constructed full blown in a factory. A piano player goes through years of intensive training, learning to hone her talent; a piano learns nothing. Homo sapiens have evolved by means of natural selection; watches are designed by watchmakers.
Engineers and Nature have usually taken distinct routes in their creation of complex objects, differing both in the final artifacts produced as well as in the design process itself. And the recent movement that seeks inspiration in Nature has come up not only with novel objects but also with entirely new ways of designing objects. Thus, current-day robots may possess legs, fins, or wings; electronic circuits may develop in a manner akin to that of multicellular living beings; watches can heal themselves; computers can learn to play a mean game of backgammon; and bridges can be evolved.
Having visited several lands in the Terra Nova of computing, and having acquired along the way many new colorful approaches, we shall now use these colors in the remainder of the book to paint the big picture. In this chapter I’d like to take a closer look at the main differences between humans’ work and that of Nature, specifically focusing on how these relate to our current engineering efforts. When does it pay to be biological, and when is it better to use the traditional, by-ole-logic way? As a concrete example I’ll consider two different kinds of flying machines: birds and airplanes. When engineers set about to design an airplane, they proceed in what is known as a top-down approach: They start with the general issues and questions (the top) and go all the way down to the nitty-gritty. At the top there is the decision — usually made by senior management — to build a new airplane. Next comes the requirements analysis phase, which basically answers the question: What should this new machine be able to do? It might be required, among other things, to carry up to 600 passengers, to take off and land on short runways, and to handle severe weather. Having defined the problem, it is time for the engineers to enter in force, their job being to find a solution; now that we know what we want, it’s time to see how we go about building it.
The design process continues in a top-down fashion, breaking the big problem into smaller and smaller subproblems; one doesn’t jump immediately to the nuts-and-bolts level. This breaking-down process might be done by identifying key parts — such as the cockpit, the fuselage, the engines, and the wings — and assigning their design to different teams (which obviously must cooperate among themselves. After all, there is but a single final object being built: the airplane). Each such key design problem is further divided into smaller subproblems; the wings team, for example, will be considering flaps, spoilers, ailerons, and other such beasties. The design process is by no means simply a forward march; often one must go back to the drawing board since the part in question doesn’t function as it should. This back-and-forth process ends up with a design specification — a complete plan of the airplane (such a complex object might require years of design work). Now it’s time to fabricate the machine, a task which in itself may be quite elaborate for such an artifact. It might, in fact, require a separate design process since in all likelihood new fabrication techniques for the new airplane will have to be developed.
Engineering designers thus start out with a clear top-level goal in mind, then work their way downward toward the most minute details, ultimately coming up with a comprehensive solution. Nature works quite differently. For one thing, Nature has no explicit, a priori goal; Nature does not embark upon a lengthy R & D project whose final objective is the construction of a bird. Nature employs evolution, and evolution is shortsighted: The only goal, the only thing that matters, is immediate survival. Nature, if any designer at all, is a blind one at that. The ability to fly emerges over eons since it confers some advantage to the animals that possess it. Thus, when speaking of evolution’s goal, one can at best describe it as an implicit, short-term one: survival. (In The Blind Watchmaker Richard Dawkins proposed a way by which wings might have evolved. His scenario starts out with wingless animals that leap between tree boughs. Small flaps of skin that help extend the jump or break the fall — by acting as an airfoil — will bestow an immediate survival benefit upon their owner. Little by little, over the course of many generations, the accumulation of small, ever-better modifications to these flaps might end up in full-fledged wings.)
Evolution is further distinguished from engineering in that it is a bottom-up process: Its “products” emerge from the myriad of interactions that take place in the biosphere. There is no top-down process that starts out with a major, far-sighted goal that is then broken down successively into smaller and smaller subgoals, until they become doable. There are just numerous interactions, both among organisms, as well as between them and the elements, out of which emerge all the wonderful devices we see around us (and in us), such as wings, eyes, feet, nervous systems, and rock stars.
Nature’s open-ended, short-sighted, bottom-up style as opposed to engineering’s guided, far-sighted, top-down approach is the crux of the difference between the two. It entails several other distinctions between the engineering enterprise and Nature’s workings.
Engineers usually seek not only to create a widget that works, such as an airplane or a coffee machine, but indeed one that works well; often they evoke terms such as “efficient” and “optimal” to describe their desired product. Nature, on the other hand, cares nothing for these qualities; designs need neither be the best, nor the fastest, nor the most efficient; rather, Nature’s after “just-do-the-trick” solutions, namely, ones that can survive. If an organism has even the slightest advantage over its confreres, then that’s all it takes — it’ll be the winner in the survival race and its genes will pass on to the next generation.
“But how then,” you might be asking yourself, “has Nature come up with all those marvelous designs we see out there — such things as seeing gadgets, delicate manipulators, and thinking machines, which are still way beyond our current engineering capabilities?” First off, let’s not forget that Nature has had a bit of a head start — 3.5 billion years to be precise. This figure should not be brushed aside lightly: It is a huge amount of time, practically impossible for us to grasp. As noted by Charles Darwin in the Origin of Species: “The mind cannot possibly grasp the full meaning of the term of a hundred million years; it cannot add up and perceive the full effects of many slight variations, accumulated during an almost infinite number of generations.” Our inability to grasp such a vast period of time is not so surprising if you think about the environment in which our minds have evolved to function. During most of our evolutionary history, there was no survival value in being able to comprehend the expanse of a million years (nor, for that matter, of a millionth of a second). It is only very recently (no more than a few thousand years) that we have begun dealing with such huge numbers, our minds coming to appreciate time out of mind. While for engineers time is of the essence, for Nature the essence is time.
In coming up with her flying machine, Nature thus spent a little more than the few years engineers spend in designing a Boeing 747. The chirping critters we see today outside the window are superb beasts, yet their beginnings — the ancestral forms that flew the Earth millions and millions of years ago — were probably much less impressive. It’s hard to match our current engineering achievements with those of Nature, but then again, it might also be somewhat unfair. We should probably compare our current-day devices not with modern flora and fauna, but rather with Nature’s first attempts, those that had been in existence so many millions of years ago (and which are now — for the most part — extinct).
Nature not only takes her time but also makes use of a huge amount of resources. Charles Darwin remarked that the evolutionary process goes on “for millions on millions of years; and during each year on millions of individuals of many kinds ...” While an engineer usually tries to cut costs wherever possible, Nature is lavishly wasteful. She works by trial and error, indeed lots of trials and lots of errors. Charles Darwin quoted Milne Edwards as quipping that “nature is prodigal in variety, but niggard in innovation.” There are many more extinct species than surviving ones, or, as Richard Dawkins said: “however many ways there may be of being alive, it is certain that there are vastly more ways of being dead …”
Evolution is basically a forward process: Any new entity must be immediately functional, or else it dies out. As we’ve seen above, engineers can (and often do) go back to the drawing board in order to fix a flawed design. Nature, on the other hand, cannot move backward; there is no drawing board to go back to, no possibility of deciding, “Well, this new wing design isn’t so good, so let’s go back to the old one and try to improve it in another way.” In Nature, no good means no life (as in dead).
Another difference between engineered devices and natural ones has to do with “leftovers.” In human-made systems essentially every single part is accounted for and serves some purpose; if not, then it is removed without further ado. Nature, on the other hand, tends to accumulate junk, her motto being: “If it’s not harmful then it’s none of my business.” Why waste effort on removing innocuous parts? Modern creatures thus carry vestiges of past epochs that might have served some purpose at one time, but which are totally useless today (our tail bones, for example).
Let’s take stock of what we’ve gleaned so far about the biological versus the by-ole-logic. When engineers design a product, they have a clear goal in mind; they proceed in a top-down manner, seeking to create an artifact that is — as much as possible — the best solution to the problem at hand. Nature, on the other hand, has but a single, short-term goal in mind, survival; she relies on the process of evolution to “design” her products, slowly proceeding in a bottom-up manner, sparing no expense and taking no heed of her extravagant wastefulness. With respect to expenditure one might say that engineers are like Ebenezer Scrooge whereas Nature is like Santa Claus. In a nutshell, Nature designs by evolution while engineers design, well … by design.
Nature has come up not only with ingenious solutions to specific problems — for example, structural designs such as eyes or wings — but indeed has found (and founded) entirely new processes to aid in the emergence of complex organisms. Two of the most important ones are ontogeny (the development of a multicellular organism from a single mother cell) and learning.
Engineers and computing scientists have been turning of late more and more toward Nature, wishing to learn from her ways and means. In building novel artifacts they seek inspiration in a wide range of phenomena, from general processes such as evolution, ontogeny, and learning to more specific natural inventions, such as immune systems, eyes, and ears.
Why are we so enthralled by the biological? After all, the by-ole-logic way is methodical and precise while the biological is so much “mushier.” Think of (or in my case imagine) that sleek, black Porsche 911, comfortably reposing in your garage — a triumph of modern engineering. Since every step of its design and construction involved traditional engineering techniques, we know exactly what it is capable of, and of what it is incapable: how fast it can go, its fuel efficiency, its ability to withstand shocks, its maneuverability along curves, its braking distance, and so on. Contrast this with Nature’s creations, where we are often at a loss to answer such questions as: Does it work; if so, why? If not, why not? Does it work well? Does it work well all the time? How far can we push the system? What are its limits? We know how to answer such questions when it comes to a Porsche, whereas a dung beetle presents us with a far more difficult case.
You could argue that a dung beetle is a problem for biologists, whereas we’re interested in a “hard” engineering problem, building Porsches. The problem is that once we move from the by-ole-logic to the biological, using techniques such as those described in this book, we find ourselves on murkier grounds. Consider the robots discussed in Chapter 4, whose brains consist of artificial neural networks that emerge by means of evolution. We find ourselves faced with an engineered machine — the robot — for which we are very hard put to answer all those questions of the previous paragraph (we’ll elaborate on this issue when we talk about scigineering).
It might seem that I come to bury the biological, not to praise it: Why use those mushy, biologically inspired techniques to build Porsches when we have such good, well-known classical methodologies? Well, despite appearances to the contrary, most of our engineering achievements to date are quite simple, at least in comparison to Nature’s. A Porsche is less complicated by far than a dung beetle; in fact, I’d probably be risking very little in claiming that a Porsche is simpler than any one cell of your body! Our engineering techniques have worked wonders in erecting modern civilization, but our appetites keep growing; technology feeds upon itself by creating new niches that bring about new needs and desires for more technology.
The more elaborate our artifacts become, the more difficult it is to find solutions by using only traditional computing and engineering techniques. That’s when we supplement the by-ole-logic with the biological. Notice my use of the term supplement: We’re not rushing to chuck the ole techniques; rather, we want to eat the cake and have it too, combining the by-ole-logic and the biological. There’s no point in being a traditionalist or a Young Turk just for its own sake; the goal is to build better artifacts, whatever the means.
And just what good is the biological to engineers? We’ve been answering this question throughout most of this book; let’s try to summarize some of the benefits we’ve encountered. As I’ve just remarked, technology keeps getting more and more complex, which means that our traditional methodologies run up against a wall much sooner than before; more and more often they are overstretched to their limit — and then some. That’s when we start considering the biological, which often permits us to make do with but a partial design — to be completed through evolution, learning, and other biologically inspired techniques. (Incidentally, even automobile companies have recently started employing techniques such as evolutionary computation and artificial neural networks to design certain parts of their cars.)
When the by-ole-logic is stretched to the limit, it’s worth trying the biological, though one must remember that it is not a panacea. I hope I’ve managed to convey the intricacy of applying these techniques in the preceding chapters. It’s not easy to get a good bridge to evolve or to have a robot learn to walk.
Another salient difference between Nature’s devices and those of humans has to do with their robustness. This term means different things in different domains, but it basically boils down to the ability to cope with a wide range of circumstances. Place a cockroach in virtually any imaginable terrain, and it’ll have no problem in walking the Earth; a robot, on the other hand, has a much harder time breaking new ground. (As we saw in Chapter 4, the robotic soccer teams played much better at their home institutes than at the match site, having grown accustomed to the home terrain.) You can suffer a severe blow and still keep on ticking; the same cannot be said of your Porsche. Plants have an uncanny ability to grow toward the light, wherever it may be. A computer recognition system has a much harder time than a human in identifying a previously bearded man who suddenly shows up clean-shaven. From bacteria to brains, there are endless examples of just how robust natural creatures are, a quality that we’d like to instill in our artifacts.
Nature places its creatures in a continual lifetime struggle for survival. Moreover, every living creature today comes from a long line of distinguished ancestors that had one thing in common: They were survivors (at least long enough to engender a dynasty). Small wonder they’ve evolved to be so versatile. After all, robustness is decidedly a boon to survival.
To emphasize just what it means to pass through the evolutionary sieve, let me recount a short tale. The 11 o’clock news announces the founding of a new airline company whose rates are three times cheaper than the cheapest of airlines. How do they manage? Simple: no humans! At Robo Airways every job — onboard personnel, reservation clerks, ground crews — is handled by computers and robots. Would you fly the robotic skies? I’d bet the company would go bankrupt very quickly for one major reason: No one would want to fly without a human pilot aboard. Why is that? After all, any modern-day aircraft has an automatic, onboard pilot that performs much of the drudgery of piloting, and you don’t have to stretch your imagination too far to envisage a fully automated flight system. What’s so special about a human pilot? Well, it’s not so much the piloting abilities as the pilot’s humanness. Obviously, there is a psychological angle that comes into play: a human pilot is much more similar to us than a machine is. Let’s dig a little deeper, though.
According to robotics researcher Rodney A. Brooks, an examination of the evolution of life on Earth reveals that most of the time was spent developing basic intelligence. He wrote: “This suggests that problem solving behavior, language, expert knowledge and application, and reason, are all rather simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time — it is much harder.” Playing chess, reading newspapers, and piloting airplanes are very recent skills that piggyback on our versatile brains, which have evolved over millions and millions of years. The title of Brooks’s paper — “Elephants Don’t Play Chess” — nicely captures this idea: While not able to play chess, elephants are nonetheless robust and intelligent, and able to survive and reproduce in a complex, dynamic environment.
When Nature comes up with a new product line, it is immediately subjected to the most grueling series of tests ever invented: evolution. That’s why we can trust the human pilot much more than we can the automatic one: Piloting skills are but a mere add-on to a powerful system whose design has been millions of years in the making. Or, consider another example: Any human can tell the difference between a baby and a doll, our visual system having evolved to be able to keenly distinguish our kin. Yet with Dean, the housemaid robot of Chapter 4, this is far from obvious. How can we be sure it won’t confuse one with the other (with the consequences being anything from comic to disastrous)?
The biological approach to engineering is a powerful sword to be wielded when the old tools fail, or when they yield unsatisfactory solutions. Applying processes such as evolution and learning does have its price, though, since we’ve seen how lavish the biological tends to be. We do have, however, the benefit of very fast artifacts, such as computers; thus, the biological, when applied to engineering, need not take millions of years (as with natural evolution) or years (as with human learning). Moreover, the biological approach has the potential of yielding more robust solutions, ones that do not fold at the slightest breeze. And let’s not forget that another possible biological approach to engineering is to seek inspiration not in Nature’s grand processes but rather to mimic some of her solutions, examples of which are artificial retinas and artificial cochleae.
As I’ve remarked above we need not replace the by-ole-logic with the biological but rather combine the two, thus enjoying the best of all possible worlds. And when opting for the biological, we don’t necessarily have to remain 100 percent faithful to Nature; we can even at times take a bio-illogic path. Let me give just one example, that of Darwinian versus Lamarckian evolution.
The Chevalier de Lamarck was an eighteenth-century intellectual who argued in favor of evolution many years before Darwin. In this he was right. What he got wrong was the mechanism, now known as Lamarckism, or Lamarckian evolution, which is based on two principles: the principle of use and disuse and the inheritance of acquired characteristics. The first principle asserts that those parts of an organism’s body that are used grow larger, and those that are not used tend to wither away. The second principle states that such acquired characteristics are then inherited by future generations. Thus, a bodybuilder bequeaths his developed muscular physique to his children. Or, consider the following story about giraffes: The early ones had rather short necks and so they strained desperately to reach high leaves on trees. These mighty efforts resulted in longer neck muscles and bones, which they passed on to their offspring; each generation of giraffes thus began with its parents’ head start and stretched its neck a bit further than the last.
Lamarckian evolution seems reasonable. In fact, it seems rather enticing: Wouldn’t it be great to have — from day one — all those acquired characteristics of your ancestors? Alas, that’s not how things work, and so the Darwinian theory of evolution has supplanted the Lamarckian theory. The giraffe does not directly pass its long neck — acquired during its lifetime — to its offspring. Darwinism is more roundabout: Some giraffes are genetically predisposed to develop into mature animals with long necks. These will then have an advantage (however slight) over others since they will be able to reach higher leaves. Thus, they will stand a better chance of surviving and leaving offspring, which will in turn inherit the genetic predisposition (which might then be further enhanced through favorable mutations).
While the biological theory of evolution has shifted from Lamarckism to Darwinism, this does not preclude the use of Lamarckian evolution in artificial settings. It can greatly accelerate evolution since a good acquired trait can be immediately incorporated into the genome. There is still a debate as to the use and usefulness of artificial Lamarckian evolution, though my intention here has simply been to show that we need not remain 100 percent faithful to Nature.
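The Lamarckian-versus-Darwinian distinction translates directly into a design choice for evolutionary algorithms. Here is a minimal sketch in Python, under assumptions of my own (the bit-matching task, the population sizes, and all function names are illustrative, not taken from the text): each individual undergoes a bit of lifetime “learning” (simple hill-climbing). In the Lamarckian variant the learned genome itself is inherited; in the Darwinian variant only the original genome is inherited, with learning influencing selection alone (the so-called Baldwin effect).

```python
import random

random.seed(0)

TARGET = [1] * 20  # illustrative target bit-string; fitness = number of matching bits

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def learn(genome, steps=5):
    """Lifetime learning: a few hill-climbing steps (the 'acquired traits')."""
    best = list(genome)
    for _ in range(steps):
        trial = list(best)
        i = random.randrange(len(trial))
        trial[i] = 1 - trial[i]          # try flipping one bit
        if fitness(trial) >= fitness(best):
            best = trial                 # keep only non-worsening changes
    return best

def evolve(lamarckian, generations=40, pop_size=30):
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for genome in pop:
            learned = learn(genome)
            # Both variants select on the fitness reached AFTER learning...
            score = fitness(learned)
            # ...but only Lamarckian evolution writes the acquired
            # traits back into the inherited genome.
            scored.append((score, learned if lamarckian else genome))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        parents = [g for _, g in scored[: pop_size // 2]]  # top half survives
        pop = []
        for _ in range(pop_size):
            child = list(random.choice(parents))
            if random.random() < 0.1:    # occasional one-bit mutation
                i = random.randrange(len(child))
                child[i] = 1 - child[i]
            pop.append(child)
    return max(fitness(g) for g in pop)
```

With the Lamarckian write-back, each generation’s hill-climbing gains compound in the genome, so the population converges noticeably faster, exactly the acceleration (and the departure from biological fidelity) described above.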
The biological blazes new trails that lead to fascinating lands. But the lesson to take home is that whether by-ole-logic, bio-logic, or bio-illogic, what matters is the end result: By hook or by crook, just get it to work.