Excerpt from my book Machine Nature
“He wanted to dream a man: he wanted to dream him with minute integrity and insert him into reality.” This was the goal of the silent man who came from the South, in Jorge Luis Borges’s short story, “The Circular Ruins.” From Pygmalion, Frankenstein, and the Golem to Star Trek’s Lieutenant Commander Data, the dream of administering the breath of life has fascinated humankind since antiquity. We’ve seen how human-made systems can be made to evolve, to learn, to adapt, and to develop, as well as to exhibit a host of other characteristics that are usually not associated with machines, but rather with living beings. Can our creations one day take on a life of their own? This question moved from the realm of science fiction to that of science with the advent of the field known as artificial life. The term was coined by Christopher G. Langton, organizer of the first artificial life conference, which took place in Los Alamos in 1987. “Artificial Life,” wrote Langton (in the proceedings of the second conference), “is a field of study devoted to understanding life by attempting to abstract the fundamental dynamical principles underlying biological phenomena, and recreating these dynamics in other physical media — such as computers — making them accessible to new kinds of experimental manipulation and testing.” While biological research is essentially analytic, trying to break down complex phenomena into their basic components, artificial life is synthetic, attempting to construct phenomena from their elemental units, thus adding powerful new tools to the scientific toolkit. This is, however, only part of the field’s mission.
As put forward by Langton: “In addition to providing new ways to study the biological phenomena associated with life here on Earth, life-as-we-know-it, Artificial Life allows us to extend our studies to the larger domain of the ‘bio-logic’ of possible life, life-as-it-could-be, whatever it might be made of and wherever it might be found in the universe.” Before talking about artificial life, shouldn’t we try to define what life is? Well … No. I’ll steer clear of this issue since it is in fact quite controversial in science; as things stand today, there is no agreed-upon scientific definition of Life. For now, we’ll just have to accept its being one of those “you-know-it-when-you-see-it” qualities: Your dog is obviously alive, while your washing machine is obviously not. The question is, then, can we create something that is “obviously alive”? This question seems reasonably clear, or is it? You’d think that it’s the “life” part of “artificial life” that eludes our definition. Well, there’s a further subtlety: What exactly does the “artificial” part mean? If you look up “artificial” in the dictionary (Merriam-Webster online), you’ll find a number of definitions. So let’s see which one sits well with “artificial life.” Artificial might mean “lacking in natural or spontaneous quality <an artificial smile> <an artificial excitement>.” This can’t be it. An extraterrestrial might be unnatural and unspontaneous, and yet obviously alive; artificial life cannot be about life that lacks in natural or spontaneous quality. What about “imitation, sham <artificial flavor>”? This is no good: By definition our putative “artificially alive” creature is going to be an imitation in some sense; the point is, in what sense, and how good an imitation (“That’s not a real dog? I never would’ve guessed in a million years!”). Saying that artificial life is synonymous with imitation life doesn’t get us very far.
We’re obviously trying to imitate life, in the proverbial “imitation is the sincerest flattery” sense. I’m not merely engaging here in armchair philosophy, but rather trying to arrive at the essence of what “artificial life” means. What seems to be missing in the two definitions of the previous paragraph is the creation aspect. So let’s try what is actually the first definition appearing under “artificial”: “man-made <an artificial limb> <artificial diamonds>.” Ah! Now we’re cooking. This seems to be the right one. It accords perfectly with the definition given by Langton in the proceedings of the first artificial-life conference: “The study of man-made systems that exhibit behaviors characteristic of natural living systems.” Artificial life is thus life created by humans rather than by Nature. Simple. Well ... I hate to be so fussy, but “human-made” can mean at least three different things. One way to create life is through the union of a male, known as “daddy,” and a female, known as “mommy,” thus giving life to a male or a female known as “baby.” You might be frowning now, thinking to yourself that it’s rather silly of me to even mention this since this is quite obviously not artificial life. This is the natural way of creating life, whatever that may mean. But then again, what about artificial insemination? This involves the introduction of semen into the uterus or oviduct by other than natural means, yet no one would claim that this produces artificial babies. Nonetheless, there is a definite intervention by humans, thus rendering this process somewhat less than 100 percent natural. We often invoke the term “human-made” when speaking of objects such as cars. This image of artificial life might involve some kind of production line, where heads, arms, feet, and torsos are assembled into complete beings, after which the proverbial switch is pulled, thus breathing life into them.
(The assembly line need not produce only humanoid life; it could in fact produce a range of beings, from artificial bacteria to artificial whales.) This is the most common image where fiction is concerned (Victor Frankenstein creating a humanoid monster, for one). There is yet a third way by which life may be created by humans: through the process of evolution, and most likely open-ended at that (as we discussed in Chapter 12). This raises an interesting question: While we may sow the seeds of life, setting off such an open-ended process, whatever emerges — numerous generations later — might be far removed from our original design; just how “human-made,” then, is this form of life? My intention in the somewhat philosophical discussion above has been to show you just how intricate this seemingly simple term — artificial life — really is. The concept of “artificial” is quite elusive where life is concerned, and even if we agree on emphasizing the “creation” aspect, there are a number of fundamentally different modes of creation. Artificial life might in fact be an oxymoron. After all, how can life be artificial? If something is truly alive — assuming we can somehow agree on this fact — then what’s artificial about it? Even if we take what could arguably be considered the most artificial route of creation, that of the assembly line, once we’re done, the creature is no longer artificially alive; it’s alive — period. This takes us right back to Langton’s definition of artificial life, life-as-it-could-be, “whatever it might be made of and wherever it might be found in the universe.” Whether flesh-and-blood, man-woman-made, or nuts-and-bolts, factory-made, life is life. Perhaps rather than speak of artificial life, which is somewhat problematic, we should talk about life created by humans. In fact, even “created” might be too strong a word (think of the evolutionary scenario, for one). Let’s settle for human-induced life.
This emphasizes the relevant difference between Nature and humans, namely, the manner by which life arrives on the scene; the end result, though, is — in both cases — bona fide life. Life may be many things: perhaps “a tale told by an idiot, full of sound and fury, signifying nothing” (Shakespeare), or maybe “colour and warmth and light, and a striving evermore for these” (Julian Grenfell), or indeed “a glorious cycle of song, a medley of extemporanea” (Dorothy Parker). At the heart of artificial life research lies the belief that whatever life is, it is not about carbon; life is not about the medium but about the mediated. It is a process that we do not yet understand in full, but which we may nonetheless be able to create, or perhaps we should say re-create: After all, Nature has beaten us to it. Let’s get down to earth now and consider some of the issues involved in inducing life. As I’ve discussed above, there are at least three ways of going about this. We might imagine some far-future extension of current medical practice (such as artificial insemination) that will result in a new form of life. Since this involves many technical biological and medical details, I think I’ll leave it at that for the present discussion. The second way to induce life is to produce a full-blown living being. As I briefly mentioned in Chapter 11, there is much ongoing research on mimicking Nature’s gadgets, building such devices as eyes, ears, and hearts. While many of these are intended to serve as prostheses for humans, some are also used in robots. Perhaps at some point in the future we’ll be in possession of enough parts to construct an entire being. This might in fact come sooner rather than later: While speaking of “inducing life” usually tends to evoke in us images of humanoid life, let’s not be Homo sapiens chauvinists.
As I’ve mentioned time and again, constructing the equivalent of even a single-celled organism would be a huge achievement (not to mention a beetle or a fly), and this might come about sooner than we expect. (John Wyndham’s short story “Female of the Species” provides an amusingly gruesome vision of this production-line scenario. When visited by two inspectors of the Society for the Suppression of the Maltreatment of Animals, Doctor Dixon — the Frankenstein-like protagonist — explains: “The crux of this is that I have not, as you are suspecting, either grafted, or readjusted, nor in any way distorted living forms. I have built them.”) And then there’s the third way of inducing life, by creating the necessary conditions for open-ended evolution to take place. In Chapter 1 we noted that evolution rests on four principles: reproduction, heredity, variation, and selection.
Tierra inventor Thomas Ray wrote, “I would consider a system to be living if it is self-replicating, and capable of open-ended evolution.” (The Tierran world was in fact set up to discover not how self-replication arrives on the scene, but what happens after it does, namely, how a diverse ecosystem comes to evolve.) The study of self-replicating structures in human-made (or human-induced) systems began in the late 1940s, when John von Neumann — one of the twentieth century’s most eminent mathematicians and physicists — posed the question of whether a machine can self-replicate (produce copies of itself). He wrote that “living organisms are very complicated aggregations of elementary parts, and by any reasonable theory of probability or thermodynamics highly improbable. That they should occur in the world at all is a miracle of the first magnitude; the only thing which removes, or mitigates, this miracle is that they reproduce themselves. Therefore, if by any peculiar accident there should ever be one of them, from there on the rules of probability do not apply, and there will be many of them, at least if the milieu is reasonable.” Von Neumann was not interested in building an actual machine, but rather in studying the theoretical feasibility of self-replication from a mathematical point of view. He succeeded in proving (mathematically) that machines can self-replicate, laying down along the way a number of fundamental principles involved in this process. During the decade following his work (in the 1950s), when the basic genetic mechanisms had begun to unfold, it turned out that Nature had “adopted” von Neumann’s principles. (It is quite fascinating to see how his results predated the actual biological findings.) The study of self-replication has been taking place now for more than half a century. Much of this work is in fact quite separate from artificial life and is motivated by the desire to understand the fundamental principles involved in this process.
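Von Neumann's central trick was that a self-replicator must use its self-description twice: once as instructions to be executed, and once as raw data to be copied. The software echo of this trick is the "quine," a program that prints an exact copy of its own source code. Here is a minimal sketch in Python (my own illustration of the idea, not an example from the book):

```python
# A quine: a program whose output is an exact copy of its own source.
# The string s plays the role of von Neumann's "blueprint": it is
# executed as a template (s % s) and copied as data (the repr of s).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim; feed that output back to the interpreter and it reproduces itself again, ad infinitum.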
This research might better our understanding of self-replication in Nature, as well as find many technological applications. There is much talk today of nanotechnology, where self-replication is of vital import. You’d like to be able to build one tiny machine, which would then sally forth and multiply. For example, you’d inject a small nanomachine into your body to fight off some mean virus, and this nanomachine would be able to self-replicate, thereby increasing the size of your internal army. One of my favorite application examples is the self-replicating lunar factory, which is not drawn from some science-fiction novel but was actually proposed by NASA researchers in 1980. Imagine planting a “seed” factory on the moon that would then self-replicate to populate a large surface, using local lunar material. This multitude of factories could manufacture necessary products for lunar settlers or for shipping back to Earth. And all you have to do is plant the first one. On our way to inducing life, self-replication is of crucial import. We know a bit more about this issue today than we did 50 years ago, though there is still no lack of unanswered questions, which is music to researchers’ ears. The next item on our life-inducing agenda is trying to come up with an open-ended context for our self-replicating critters (just as Ray set out to do with his Tierran world); this issue has both genotypic and phenotypic aspects (Chapter 12). The phenotypic aspect of open-endedness concerns the environment. The grand challenges posed by an open-ended environment vis-à-vis its inhabitants are to be able to move around and to sense the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. What seems to us to be really difficult — as in playing chess — may in fact be quite easy once this essence of being and reacting is available. Remember, elephants don’t play chess (Chapter 11). Chess is in fact quite an instructive example.
It has been one of the holy grails in the field of artificial intelligence since the 1950s. In those early days a number of researchers had managed to come up with programs that were able to play a decent game, winning against average human players (though they were easily beaten by chess experts). The ruling opinion at the time was that very soon there would be a chess machine able to beat any human player. The problem, though, turned out to be harder than believed, demonstrating what is known as the fallacy of the first step: It’s easier to go from ignorance to mediocrity than it is to go from mediocrity to excellence (think of the difference between playing tennis and playing tennis well). Mediocre chess-playing computers were available as far back as the 1960s, though only very recently has a computer (IBM’s Deep Blue) been able to beat a world champion (Garry Kasparov). It took 40 years to come up with a good chess-playing computer, and frankly — chess is easy! I’m not saying it doesn’t require a form of genius to excel at the game, nor am I belittling the arduous task faced by the designers of a chess-playing machine; I’m referring to the facileness of the chess environment; it is yet another illustrative example of non-open-endedness. Chess is defined by a very small number of well-known, fixed rules, and there’s really no dynamically changing environment to speak of (in this sense it is similar to our basketball environment of Chapter 12). How do we increase the open-endedness of the environment? Thomas Ray took what is perhaps the first shot at providing an answer to this question with his simulated Tierra world. Another possibility is to subject our critter to the most complex environment known to date: ours. This is the route taken by adaptive-robotics researchers. 
In Chapter 4 we saw how real robots are subjected to a real-world environment (as opposed to simulated robots in a simulated computer environment); for this reason the approach is also known as situated or embodied robotics. An argument that is often raised against embodied robotics is that it is too costly and too slow. You’d be much better off running the evolutionary process in a simulated environment within the computer, plucking out but the end result — the best simulated robot that has evolved — and implementing it in the real world. The problem is that often this does not work: When you go from the simulated to the real, the robot no longer functions properly. Our environment is full of many hidden complexities that often escape our notice, rendering it very hard to implement them in a computer; it’s just plain easier to use the real world. Nature’s open-endedness manifests itself not only at the phenotypic level but also at the genotypic level; it can tinker with the genome so as to produce entirely novel designs, which give rise to new phenotypes better able to rough it. This point has also been receiving increased attention of late: How can we set up an evolutionary scenario in which fundamental genomic changes can occur? For example, in Chapter 4 we evolved only the behavior of the robots, their small, neural-network brains. Their body, on the other hand, did not change at all, which is nothing like natural evolution. Nature possesses the ability to bring about not only behavioral changes but also morphological modifications in her creatures. A number of researchers have recently begun looking into the possibility of doing this for robots as well, evolving both behavior and morphology. While quite rudimentary at the moment, this is yet another step toward increased open-endedness. Next to the natural world a new universe has sprung up in the past few years, which is both complex and open-ended: the Internet. 
It is evolving at a breathtaking speed, already exhibiting enough complexity to merit the attention of scigineers. This is not surprising. The Internet’s evolution is mediated by self-proclaimed intelligent beings known as Homo sapiens. This process is more akin to Lamarckian evolution, where a beneficial survival trick can be immediately incorporated within the evolving population. Will we someday see the rise of network life? Even as you read these lines, there are thousands and thousands of small programs — known as agents — roaming the network, seeking to find information that will appease their human masters. Currently they are quite limited, lacking in both intelligence and autonomy. Little by little, though, they might develop into more autonomous critters. This might come about by employing some of the techniques we discussed in this book, giving rise to what I call Egents, for Evolving Agents, and double AAgents, for Adaptive Agents. These agents will be denizens of the network universe, whereas we will not; it is they who will be in their element. We may have built the house, but we are not the ones living in it. I just hope those double AAgents will work for you, their master, and not for some unknown party behind the cyber curtain. The borders between the living and the nonliving, between the Nature-made and the human-made appear to be constantly blurring. As in dreams. The silent man who came from the South eventually succeeded in dreaming a man and inserting him into reality. And what became of the dreamer? “With relief, with humiliation, with terror, he understood that he too was a mere appearance, dreamt by another.” Sweet dreams.
Science and engineering have traditionally proceeded along separate tracks. The scientist is a detective who’s up against the mysteries of Nature: He analyzes natural processes, wishing to explain their workings, ultimately seeking to predict their future behavior. Scientists ask questions such as: What goes on inside the Sun? And how long will it keep on burning? How does the weather system work? And how can we predict whether it will rain tomorrow or not? What are the fundamental physical laws that underpin the workings of the known universe? The engineer, on the other hand, is a builder: Faced with social and economic needs, she tries to create useful artifacts. Engineers ask questions such as: How can we build a car with a cruising speed of 150 kilometers per hour, a fuel consumption of 20 kilometers per liter, and a price tag of no more than $8000? How do we design a computer chip that is twice as fast as the fastest extant chip? How can we build an autonomous lawn mower? “To put it briefly,” wrote Lewis Wolpert in The Unnatural Nature of Science, “science produces ideas whereas technology results in the production of usable objects.” And if I may add my own little epigram, science is about making sense whereas engineering is about making cents ... In a chapter entitled “Technology is not Science,” Wolpert discussed the differences between the two, noting that technology is very much older than science, and that science did almost nothing to aid technology until the nineteenth century. “Technology may well have used a series of ad hoc hypotheses and conjectures, but these were entirely directed to practical ends and not to understanding,” he wrote. Humans have been able to construct artifacts — such as tools and arms — and improve their existence via agriculture and animal domestication thousands of years before the arrival of modern science (in the sixteenth and seventeenth centuries). 
Though engineers have only recently begun to put science to use, scientists had always relied on the existing technology. To quote Wolpert: “Science by contrast has always been heavily dependent on the available technology, both for ideas and for apparatus. Technology has had a profound influence on science, whereas the converse has seldom been the case until quite recently.” The emergence of technology long before science is not at all surprising. “The goals of the ordinary person in those times,” wrote Wolpert, “were practical ends such as sowing and hunting, and that practical orientation does not serve pure knowledge. Our brains have been selected to help us survive in a complex environment; the generation of scientific ideas plays no role in this process.” Thomas S. Kuhn considered science and technology in one of the most influential works in the philosophy of science, The Structure of Scientific Revolutions, writing: “Just how special that community must be if science is to survive and grow may be indicated by the very tenuousness of humanity’s hold on the scientific enterprise. Every civilization of which we have records has possessed a technology, an art, a religion, a political system, laws, and so on. In many cases those facets of civilization have been as developed as our own. But only the civilizations that descend from Hellenic Greece have possessed more than the most rudimentary science. The bulk of scientific knowledge is a product of Europe in the last four centuries. No other place and time has supported the very special communities from which scientific productivity comes.” So perhaps we should count ourselves lucky to have science at all! During the twentieth century the use of scientific knowledge in advancing the state of the art of our technology has picked up quite dramatically. 
Today all but the simplest artifacts rest on strong scientific foundations, everything from computer chips to automobile tires, through T-shirts, sugarless bubble gum, and space shuttles. Science and engineering go hand in hand nowadays, both drinking from and helping to fill the other’s fountain. We’ve seen how engineers not only apply our current scientific understanding of Nature in order to build better artifacts, but are indeed coming full circle, trying to make these objects more Naturelike. Biology serves as a source of inspiration, with processes such as evolution, learning, and ontogeny implemented in artificial media. Nature can even be directly co-opted for engineering purposes, as with the use of DNA molecules to solve problems in computing. The betrothal of science and engineering, and the ensuing period of blissful courtship, have finally led, in my opinion, to marriage. I believe that the recent years have seen the rise of a new kind of professional (and profession): the scigineer, a combination of both scientist and engineer, holding a test tube in one hand and a proverbial slide rule in the other. What is a scigineer? Let me go about explaining this by way of example. In Chapter 2, we saw how computer programs in the form of trees can be evolved, noting that evolution tends to produce “spaghetti” programs: huge trees with lots of weird branches and offshoots. If the program works to your satisfaction, you can of course simply go ahead and use it; if you want to understand what makes it tick, though, then you’re in a position that’s rather like that of a biologist trying to decode our own program (the human genome). We even noted that when you delve into these evolved programs, you frequently find loads of “junk”: computer code that is of no use at all, a situation which is similar to Nature. Our genomes also contain junk code: unused portions of our DNA program. 
The scigineer has two hats — that of a scientist and that of an engineer — which she constantly alternates. First, she puts on the engineer’s hat, picks up her slide rule, and sets the stage, say, for the evolution of computer programs; then, she puts on the scientist’s hat and the white coat, setting out to analyze the creatures (programs) that have emerged in her artificial universe. The robots of Chapter 4 also constitute a case in point. They are an artifact created by the scigineer, who subjects them to an environment in which they evolve and learn. We saw how they can come to avoid obstacles, but exactly how do they accomplish this? Though we’re talking about an artifact — an object created by humans — it has evolved into something that we do not fully comprehend. Even though as stage designers we seem to have a privileged position, the actors have taken their own routes so as to better themselves. The scigineer must now take out her scientific toolbox in order to analyze this little robotic creature, just as a scientist analyzes a cockroach. Though such a current-day robot is still a far cry from a cockroach, it’s already complex enough to require the donning of a white coat. Let me give you another well-known example, that of the Tierra world. Tierra is a virtual universe — embedded within a computer — that was set up in an attempt to explore the idea of open-ended evolution. It comprises computer programs that can evolve; unlike those of Chapter 2, however, where an explicit goal (and hence fitness criterion) is imposed by the user (for example, compute taxes), the Tierran creatures receive no such guidance. Rather, they compete for the natural resources of their computerized environment: time and space. You may remember from Chapter 6 that a standard computer consists of two major elements: the processor, which actually runs the programs, and the memory, the storehouse that acts as a repository for programs.
These two components represent Tierra’s natural resources, and — just as in Nature — they are limited: The processor can only run one program at a given moment, and the memory can contain no more than a certain number of programs. This gives rise to a fierce battle for survival, the Tierran creatures having to vie for the processor’s precious time and for a place in the jungle known as memory. Failure means death: A program that is unsuccessful in procuring these resources disappears from the evolutionary stage. Tierra was invented not by a computing scientist but by an ecologist, Thomas Ray, who had worked for years in the Costa Rican rain forest before turning from natural evolution to digital evolution. Ray inoculated his Tierran world with a single organism — a self-replicating program called the “ancestor,” which was able to co-opt the processor to produce copies of itself elsewhere in memory. This organism, a program written by Ray himself, was the only engineered (human-made) creature in Tierra. The replication process is not perfect: Errors, or mutations, may occur, thus driving the evolutionary process. Ray then set his system loose and witnessed the emergence of an ecosystem-in-a-bottle, right there inside his computer, including organisms of various sizes, and such beasties as parasites and hyperparasites. Ray wrote that “much of the evolution in the system consists of the creatures discovering ways to exploit one another. The creatures invent their own fitness functions through adaptation to their biotic environment.” Large programs such as the ancestor have several instructions that form part of their “body”; these program instructions are used to copy the organism from one memory location to another, thus effecting replication. The evolved parasites are small creatures (programs) that use the replication instructions of such larger organisms to self-replicate. 
In this manner they proliferate rapidly in the memory jungle without the need for the excess replication code. As in Nature, the evolved ecology exhibits a delicate balance: If all large creatures were to disappear, then the parasites would die, having no replication code to appropriate. Tierra even managed to outdo its creator, who wrote: “Comparison to the creatures that have evolved shows that the one I designed is not a particularly clever one.” Ray first engineered this world, which he then proceeded to analyze as a scientist: “Trained biologists will tend to view synthetic life in the same terms that they have come to know organic life. Having been trained as an ecologist and evolutionist, I have seen in my synthetic communities, many of the ecological and evolutionary properties that are well known from natural communities.” (If you’re interested in learning how the humble Tierran beginnings ultimately led to the rise of the “TechnoCore” artificial intelligences [AIs], I recommend The Rise of Endymion — the final volume of Dan Simmons’s Hyperion tetralogy.) In April 1998, while leafing through the weekly issue of Science, I was surprised to find two out-of-the-ordinary articles. Science, one of the top two scientific journals (the other being Nature), publishes almost exclusively hard-core scientific papers in physics, chemistry, biology, and the like. If your paper is good enough to grace Science’s pages, then it’s probably about the natural world — the object of scientific study. Yet in browsing this particular issue, I suddenly came across a couple of articles that dealt with an artificial world, created entirely by humans: the World Wide Web. One article looked into the efficiency of search tools, while the other studied patterns of behavior as information foragers move from one hyperlinked document to the next. It’s almost as if we were talking about a tropical jungle.
This is a cogent example of scigineering that is totally unrelated to biology or biological inspiration. Here is a universe created entirely by humans, which has become so complex — much more so than a car or an elevator — and so interesting in and of itself, that it merits the attention of scientists — and the consecration of Science. We’ve engineered the World Wide Web, and then we turn to study this brave new world. The era of scigineering is upon us. The rival journal, Nature, waited until August 1999 to finally “give in.” In an article entitled “Genome Complexity, Robustness and Genetic Interactions in Digital Organisms,” Richard Lenski, Charles Ofria, Travis Collier, and Christoph Adami explored the effects of genetic mutations in both simple and complex digital organisms, which inhabited the artificial, Tierra-like world called “Avida.” Commenting on their work, Inman Harvey from the Centre for the Study of Evolution at the University of Sussex cautioned that “considerable debate can be expected before a consensus is reached on just what is necessary for results from a synthesized world to be seen as relevant to the natural world.” The scigineer might study her world and glean much about it, but she must be cautious in applying her conclusions to the world at large. The scigineer has one up on the scientist in that he can render his world easier to analyze, whereas a scientist must make do with what Nature affords him. Evolutionists would love to have the entire Tree of Life at their disposal, including all the lost species, yet this is but wishful thinking; geological reality, alas, is harsher on them, revealing but bits and pieces of the whole story. With artificial worlds, though, wishes are granted: You can easily save the entire evolutionary history of your artificial creatures to later analyze it at your leisure. 
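Just how cheap that complete record is in a digital world can be shown in a few lines. The sketch below is my own Tierra-flavored caricature, not Ray's actual system: each creature is reduced to a single number (its genome length), shorter creatures copy themselves faster (the pressure that favored Tierra's parasites), memory is limited, and, crucially, every birth is logged, so the full "fossil record" survives.

```python
import random

random.seed(0)
MEMORY_SLOTS = 50      # the memory "jungle" holds at most 50 creatures
MUTATION_RATE = 0.1    # imperfect copying drives the evolution

soup = [80]            # a single 80-instruction "ancestor" seeds the world
fossil_record = []     # unlike Nature, we can keep every evolutionary event

for step in range(5000):
    parent = random.choice(soup)
    # Shorter genomes take less time to copy, so they replicate more often.
    if random.random() < 10.0 / parent:
        child = parent
        if random.random() < MUTATION_RATE:
            child = max(5, child + random.choice((-5, 5)))
        if len(soup) >= MEMORY_SLOTS:
            soup.pop(random.randrange(len(soup)))  # the "reaper" makes room
        soup.append(child)
        fossil_record.append((step, parent, child))

print(f"{len(fossil_record)} births recorded; "
      f"genome lengths now {min(soup)}-{max(soup)}")
```

No fitness function appears anywhere, yet genome lengths tend to drift downward: replication speed is the only currency. And because fossil_record holds every birth, the complete genealogy can be reconstructed afterward, at leisure, which is precisely the luxury the paleontologist lacks.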
Remember from the previous chapter how the protagonist of Permutation City describes the result of the Autoverse experiment — the result of billions of years of evolution in this artificial world? He says: “All demonstrably [my emphasis] descended from a single organism which lived three billion years ago ...” In this artificial planet, one can demonstrate that all the organisms have descended from a single origin since the entire evolutionary trace is available. The scigineer might not possess perfect knowledge of his engineered world, but he at least has the power — unlike scientists — to render his analysis job easier. In reminiscing about his illustrious career, Isaac Newton remarked: “I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” We’re no longer content to walk the shores of Nature’s oceans of truth, finding whatever pebbles may have been laid for us. We’re now creating new oceans, and with them we beget new shores to walk. 
Tyger! Tyger! burning bright
In the forests of the night,
What immortal hand or eye
Could frame thy fearful symmetry?
Who indeed framed the tyger? Two hundred years after William Blake wrote the beautiful opening stanza of The Tyger, we have a better idea of how tigers come about: through the process of evolution. There is neither a Master Plan, nor a "hand of god," nor any ultimate goal; the driving force is the short-term objective of survival, with the process consisting of the slow accumulation over millennia of numerous small — yet profitable — variations. In the forests of evolution burn many a creature, with nature's immortal evolutionary hand slowly framing the fearful symmetry of the tiger. Natural evolution is an open-ended process, and is thus distinguished from artificial evolution, which is guided, admitting a "hand of god": the (human) user who defines the problem to be solved. When we apply evolution with a well-defined goal in mind — such as designing a bridge, constructing a robotic brain, or developing a computer program — what we are doing is akin to animal husbandry. Farmers have been using the power of evolution for hundreds of years, in effect doing evolutionary computation on domestic animals. In order to "design," say, a faster horse, they mate swift stallions with speedy mares, seeking to see even faster offspring emerge from this coupling. This is quite similar to the use of evolutionary techniques discussed in this book: the farmer defines the fitness criterion (say, speed) and performs the selection process by hand (by choosing the fastest individuals in the equine population); he then lets nature work out the genetic details involved in the coupling act. It's rather interesting to note that farmers and breeders had started using this method long before either evolution or genetics came under the scrutiny of science. Farmers can start out with slow horses and evolve fast ones — but can they evolve tigers? 
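The farmer's recipe — define a fitness criterion, select the swiftest, and let the genetic details take care of themselves — is precisely the skeleton of a simple genetic algorithm. Here is a minimal sketch in Python; the "horses," their genes, and every parameter are invented purely for illustration:

```python
import random

# A toy "horse-breeding" genetic algorithm: each horse is a list of
# numbers (its genes), and its "speed" is simply their sum.
GENES, POP, GENERATIONS = 8, 20, 50

def speed(horse):
    return sum(horse)  # the breeder's fitness criterion

def breed(mare, stallion):
    cut = random.randrange(1, GENES)      # one-point crossover
    foal = mare[:cut] + stallion[cut:]
    i = random.randrange(GENES)           # a small random mutation
    foal[i] += random.uniform(-0.1, 0.1)
    return foal

population = [[random.random() for _ in range(GENES)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=speed, reverse=True)  # selection "by hand"
    parents = population[:POP // 2]           # keep the swiftest half
    population = parents + [breed(*random.sample(parents, 2))
                            for _ in range(POP - len(parents))]

print(f"fastest horse's speed: {speed(max(population, key=speed)):.2f}")
```

Note the hand of god at work: the fitness function is fixed once and for all, so no matter how long this runs, it can only yield ever-faster "horses" — never tigers.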
Engineers can start out with bad bridges and evolve good ones — but can they evolve a town like Cambridge? Robotics researchers can evolve robots that manage to amble decently — but can they evolve a robotic housemaid? Programmers can evolve computer programs that solve various well-defined problems — but can they evolve truly intelligent software? Natural evolution has done it all: complex organisms, sophisticated structures, intelligent beings; and it did so by being open-minded, ready to accommodate any improvement that came along. The Merriam-Webster online dictionary (www.m-w.com) defines "open-ended" as something that is "not rigorously fixed: as a: adaptable to the developing needs of a situation; b: permitting or designed to permit spontaneous and unguided responses." Open-endedness is thus the flip side of guidedness, and it is a crucial aspect of natural evolution. Since nature has no specific goal in mind, she can easily change course so as to face the winds of change, and in so doing she explores numerous designs out of what is essentially an infinitude of possibilities. Man, on the other hand, even when using evolutionary techniques, does have an ultimate goal in mind — be it a retractable bridge or a program that computes taxes. When we apply evolutionary techniques the ingredients are all there: a (possibly huge) population of individuals, survival of the fittest, and the equivalent of genetic operators. Yet the hand of god is ever-present in the background: at every step of the way an individual's fate is decided in accordance with its ability to perform in the arena set up by the puppet master; and the master wants his puppets to do some very specific tricks. This places a fundamental a priori limit on what evolution can achieve: if we set about to find fast horses, then we might succeed in doing so — but we'll not suddenly see the emergence of tigers. 
Nature's open-endedness runs deeper, though, than the mere absence of a goal and a god — of a teleology and a master. It has to do with her ability not only to play the game but indeed to change the rules altogether. Let me drive this point home by way of a sportive example. The game of basketball is played on a court 94 feet long by 50 feet wide between two opposing teams of five players, who score by tossing an inflated ball through a raised goal. The rules are well known and rigid, with changes being rare, minor (for example, adding the three-point shot), and human-mediated (say, by the NBA committee). This scenario is analogous to that of guided evolution: a human designer sets the stage (or in this case the court) that gives rise to a (fiercely competitive) evolutionary process, from which but one kind of creature may emerge: basketball players. The process is not open-ended since there is a precisely defined goal (scoring more points), with virtually immutable rules. Though superb basketball players can (and do) evolve, this arena does not give rise to first-rate opera singers. Playing nature's "basketball" game is quite different. For one thing, there is no clear objective; at best, one can speak of a very basic goal, that of coming out of the match alive. What's more — and this is where open-endedness comes into play — nature keeps changing the rules of the game, both in time and in space. Being seven feet tall might be good at a certain place and time, whereas elsewhere or else-when it might be downright deleterious. And sometimes the rules are such that having a superb tenor voice is a match winner. Nature's game of basketball is more of a meta-game, where you want to score more points — but have to figure out how points are scored. In Chapter 1 we discussed an important distinction in nature, that between genotype and phenotype. 
An organism's genotype is its genetic constitution, the DNA chain that contains the instructions necessary for the making of the individual. The phenotype is the mature organism that emerges through execution of the instructions written in the genotype. It is the phenotype that engages in the battle for survival, whereas it is the genotype — safely cached in each cell of the organism — that accrues the evolutionary benefits. Setting a specific goal — as with artificial, guided evolution — means that there is a highly restricted environment; the basketball-player phenotype faces an environment in which it is required to perform a very specific task: playing basketball. Natural environments are not only much more complex but also highly dynamic — the phenotypes must face ever-changing circumstances. Nature's open-endedness manifests itself not only at the phenotypic level but also at the genotypic level: not only can the rules of the playground change, but so too can the rules for making players. The genome of a red ant is quite different from that of an orangutan (though as both are branches of the Tree of Life, they also bear many similarities). As we've seen, artificial-evolution scenarios to date are limited, being goal-oriented, with but very little maneuverability in changing the genetic makeup. A bridge genome will always produce a bridge — perhaps a superb one at that — but never a skyscraper. Nature, though, can tinker with the genome, thus changing the underlying construction plan, so as to produce entirely different beings, including skyscrapers (giraffes) and towns (ant colonies). This is a crucial aspect of her open-endedness. We've seen how artificial evolution is used to design complex objects, which stretch — or overstretch — our classical engineering techniques. The results are often quite impressive, and at times those who use these techniques have been known to cry out: "Wow, I'd have never come up with such a solution." 
But this is still at the level of evolving super bridges or superb basketball players; moreover, it might even be limited at that, since an entirely novel bridge design or a new form of basketball player might necessitate genomic tinkering that is beyond the system's reach. Can something truly astounding — something entirely new — emerge out of an artificially set stage? In my mind this is one of our grandest challenges, and it may still be many years in the coming. I like to think of this challenge as that of building a system that Knocks Your Socks OFF. Following the time-honored tradition in computer science of coining acronyms, this might be dubbed the KYS OFF challenge — which leads me to wonder whether such a system would kiss us off ... As our artifacts become more and more complex, so does their design become more arduous. One way out is to employ the powerful process of open-ended evolution. But wait a minute — by definition, that would mean ... removing the designer from the equation! Then who controls the design process — who's the boss? It seems that you can't have your cake and eat it too — something has to give. With guided evolution the guide — or designer — maintains a great deal of control over his system, and though he'll often be overwhelmed by the results obtained, his socks will remain firmly in place. Open-ended evolution might indeed knock your socks off, but at the price of giving up some of that precious control that we've grown used to. Strangely enough, then, it is less design, meaning more open-endedness, that increases our design power. Uhm ... did I just say less design? Actually, you have to set up the stage so as to be more open-ended; that is, you have to design the system to exhibit ... less design! That's the essence of the KYS OFF challenge, which only nature has met so far — but then again, she's been at it for the past three and a half billion years. 
(While I've been concentrating my discussion of the open-ended versus the guided on evolution, this is by no means the only process of interest. Learning, for one, augments a system's open-endedness.) Open-endedness goes hand in hand with less control — though with the potential of more spectacular results. Parents usually want their children to grow up to be independent and able to think for themselves. But in many ways child-raising is open-ended — with no guarantees: What if the child decides to be a rock star? (Result: horrified parents.) Or a doctor? (Result: delighted parents.) In Chapter 4 we discussed the application of biological processes, such as evolution and learning, in the field of adaptive robotics. We saw that one of the central goals is that of attaining more autonomous robots; I doubt, however, that we're ready to see them declare autonomy ... With an open-ended process, not only do you not control the precise shape that the final outcome will take, you're not even sure what this outcome will be. When we look at nature's magnificent products with awe and with envy, we should always keep in mind the billions of years that their production necessitated. If you set off such a process and then patiently wait for a couple of billion years, you might find — a posteriori — lots of wonderful devices, such as eyes, toes, flowers, brains, and wings. You might be quite happy with this plethora of gadgets that will bring you fame and fortune. But this process is open-ended, which means you don't know in advance what the final products will be. In fact, it might not even get off the ground: it took nature almost three billion years before things really started to pick up and the Tree of Life began to grow. What if it never gets off the ground? Or if you simply get tired of waiting? The possibility of creating an artificial scenario in which open-ended evolution takes place is at the heart of Greg Egan's excellent science fiction novel Permutation City. 
Explaining to the researcher her mission, the protagonist says: "I want you to construct a seed for a biosphere ... I want you to design a pre-biotic environment — a planetary surface, if you'd like to think of it that way — and one simple organism which you believe would be capable, in time, of evolving into a multitude of species and filling all the potential ecological niches." Once such a biospheric seed has been created, evolution is set loose in this artificial universe known as the Autoverse, to work its magic over the eons: "We've given the Autoverse a lot of resources; seven thousand years, for most of us, has been about three billion for Planet Lambert." And the outcome? "There are six hundred and ninety million species currently living on Planet Lambert. All obeying the laws of the Autoverse. All demonstrably descended from a single organism which lived three billion years ago — and whose characteristics I expect you know by heart. Do you honestly believe that anyone could have designed all that?" The answer is no; be it in an artificial or a natural world, open-ended evolution will knock your socks off ... If we give up our control, can't things get out of hand, leading to a system run amok? This is a tough question, which needs to be addressed on a case-by-case basis. We should come up with fail-safes (à la Asimov's three laws of robotics). We might well wish to place checks and bounds (for example, limit the robots' autonomy). We'd like to maintain the possibility of pulling the plug if things get downright ugly. But this issue is by no means an open-and-shut case: how does one juggle control and autonomy — the guided and the open-ended? This issue will probably gain more prominence as our technology advances, enabling us to build systems that are somewhat less controlled — and less controllable. 
Which brings me back to William Blake and the closing lines of "The Tyger," where the "Could" of the first stanza has been conspicuously replaced by "Dare":
Tyger! Tyger! burning bright
In the forests of the night,
What immortal hand or eye
Dare frame thy fearful symmetry?
Dare we frame a tyger?