
Originally published in Journal of Social and Biological Structures 13(1): 33-40, 1990
Reprinted with permission. Copyright © 1990 JAI Press, Inc. All Rights Reserved.


A Note on Why Natural Selection Leads to Complexity

William L. Benzon and David G. Hays

Abstract: While science has accepted biological evolution through natural selection, there is no generally agreed explanation for why evolution leads to ever more complex organisms. Evolution yields organismic complexity because the universe is, in its very fabric, inherently complex, as suggested by Ilya Prigogine's work on dissipative structures. Because the universe is complex, increments in organismic complexity yield survival benefits: (1) more efficient extraction of energy and matter, (2) more flexible response to vicissitudes, (3) more effective search. J.J. Gibson's ecological psychology provides a clue to the advantages of sophisticated information processing, while the lore of computational theory suggests that a complex computer is needed to perform complex computations (i.e. sophisticated information processing) efficiently.

Gibson's Ecological Psychology
Computational Capacity
The Complex Universe
Stochastic Drift and the Cost of Complexity
Conclusion: Beyond Simplicity


One of the more remarkable facts about our thinking on biological evolution is that we know so much about how it has happened without achieving wide acceptance for any explanation of why it occurs. Peter A. Corning cites half a dozen authorities who have emphasized the significance of this question (1983: 64). In the small, natural selection tends to eliminate the least fit and to allow more progeny to the most fit. In the large, natural selection tends to deliver larger and more complex species:

When we look at the gamut of increasing complexity that has evolved over geological time and how it has been molded by natural selection, and how it can be put together so that it works mechanically from generation to generation, we cannot avoid a feeling of awe. But we should not let this awe overcome our reason. It is crucial to examine these problems that might have microexplanations. (Bonner 1988:190)

The issue of complexity is at present controversial, but, we believe, unavoidable:

I was raised as a mycologist, a student of fungi and slime molds, and it was the norm to refer to them as lower plants, while angiosperms and gymnosperms are higher plants. But one is flirting with sin if one says a worm is a lower animal and a vertebrate is a higher animal, even though their fossil origins will be found in lower and higher strata. Perhaps plants are forever free of problems of undesirable egocentricity dogma, while all animals are too close to man. I do not know the answer to this subtle question, but in these pages I will treat all animals as I have always treated all plants; firmly, and without favor. (Bonner 1988:6)

Bonner takes the division of labor as a conceptual definition of complexity, and uses size and number of (a) species or (b) cell types as indicators of complexity. Over geological time, the size of the largest plants and animals on earth has increased slowly (graph, 1988:27, from Bonner 1965). Using morphological criteria, Bonner allows from 1 to 120 cell types per organism, from mycoplasma to vertebrates. Comparing a graph of weight vs. number of cell types (1988:123) with the "hypothetical" or "possible" evolutionary trees on pages 8 and 10, it is clear that the maximum number of cell types has increased through evolution.

We take it as fact that evolution leads to complexity. The question, posed in various ways by Corning and the authorities he cites, is "Why?" It is merely an empirical observation, not a logical necessity, that variation and selection lead to increasing complexity. If the increase in complexity with evolution is not required by logic, then a reason for it must be found within biology. Corning himself offers an explanation of why evolution moves toward greater complexity. He writes that "It is the selective advantages that arise from various synergistic effects that constitute the underlying cause of the apparently orthogenetic (or directional) aspect of evolutionary history, that is, the progressive emergence of complex, hierarchically organized systems."

The problem is not in understanding that synergism occurs only in complex systems; the problem is in accounting for the selective advantages of synergy. And Corning presupposes the selective advantages; he seems unaware that they are problematic. After all, selection only assures that organisms will be adapted. One could imagine a universe in which successive adaptations do not become more complex. What we need is a clear statement of why our universe is not that imagined universe.

The answer that we propose requires no new physical principles or mechanisms. Our proposal is essentially an exercise in conceptual clarification. The necessary clues come from the work of a perceptual psychologist, J. J. Gibson, who considered his theory ecological, and from the theory of computation.

Gibson's Ecological Psychology

Gibson's ecological psychology is grounded in the assertion that we cannot understand how perception works without understanding the structure of the environment in which the perceptual system must operate. In the context of his analysis of visual perception (1979) Gibson addressed an issue formulated most poignantly by Rene Descartes in his Meditations on First Philosophy (1641): How do I distinguish between a valid perception and an illusory image, such as a dream? The difference, Gibson (1979:256-257) tells us, between mere image and reality is that, upon repeated examination, reality shows us something new, whereas images only reiterate what we have seen before.

A surface is seen with more or less definition as the accommodation of the lens changes; an image is not. A surface becomes clearer when fixated; an image does not. A surface can be scanned; an image cannot. When the eyes converge on an object in the world, the sensation of crossed diplopia disappears, and when the eyes diverge, the "double image" reappears; this does not happen for an image in the space of the mind. . . . No image can be scrutinized -- not an afterimage, not a so-called eidetic image, not the image in a dream, and not even a hallucination. An imaginary object can undergo an imaginary scrutiny, no doubt, but you are not going to discover a new and surprising feature of the object this way.

Gibson presupposes an organism which is actively examining its environment for useful information. It can lift, touch, turn, taste, tear and otherwise manipulate the object so that its parts and qualities are exposed to many sensory channels. Under such treatment reality continually shows new faces. Dream, on the other hand, simply collapses. Dream objects are not indefinitely rich. They may change bafflingly into other objects, but in themselves they are finite.

With this simple remark, Gibson answered not only Descartes's question but also the first question in epistemology. If we want to assure ourselves of the reliability of our knowledge of the universe, we must first assure ourselves that we are observing it. Descartes realized that percepts do not contain within themselves markers indicating where they came from. If the percept is all you have, where can you find such a marker? Gibson suggests that the search for a marker is beside the point, in the same way that Alexander's solution to the Gordian knot problem made it clear that untying it won't work. Gibson's answer is like Alexander's sword, in that it rejects the stipulated terms of the problem. Reality is in the eye of the beholder, or more precisely, in the eye-hand coordination of the active observer. Therefore the answer has to be found in the actions available to the observer.

Reality is not perceived, it is enacted -- in a universe of great, perhaps unbounded, complexity.

Now, it follows immediately that the more capable observer can obtain more knowledge of the universe. To repeat the same acts of manipulation and observation gains nothing; to do justice to the universe's unlimited capacity to surprise, the observer needs a large repertoire of manipulative techniques and perceptual channels. Indeed, nothing assures us that we, with elaborate technology for experiment and observation, have exhausted nature's store of surprises. After billions of years of biological evolution, thousands of years of cultural evolution, and centuries of scientific and technical cumulation, we can still hope to add to our capability and learn about new kinds of phenomena.

Computational Capacity

Manipulation, perception, and the analysis of percepts are now well known to require massive computational power. The fabrication of even simple robots is a difficult problem for contemporary theory and practice. It is clear that observation is a function that imposes severe requirements on brains, as it is clear that manipulation imposes them on skeletons and musculature. From the theory of computation, we need one primary result, that complex computations require more components if they are to be finished in reasonable time. This follows from Grosch's law (Knight & Cerveny, 1983), which says that the power of computers varies with the square of their cost.
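Grosch's law is easy to state quantitatively. The sketch below is purely our illustration (the function name and the constant k are ours): it encodes the quadratic relation and shows that doubling a machine's cost quadruples its predicted power.

```python
def grosch_power(cost, k=1.0):
    """Computing power predicted by Grosch's law: power = k * cost**2.

    The proportionality constant k is arbitrary here; only ratios matter.
    """
    return k * cost ** 2

# Doubling the cost quadruples the predicted power; tripling it
# yields nine times the power.
ratio2 = grosch_power(2.0) / grosch_power(1.0)   # → 4.0
ratio3 = grosch_power(3.0) / grosch_power(1.0)   # → 9.0
```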

To see why, consider the problem of multiplication. In a very small computer, there is no hardware multiplier. Multiplication is performed with software; the computer successively adds (and shifts) the multiplicand. If the multiplier is 4725, the multiplication takes 4 + 7 + 2 + 5 = 18 additions and 3 shifts (actually, the work is done in binary notation; but the principle is the same). With a hardware multiplier, the product is obtained in one step. A multiplier may double the cost, or size, of the hardware. But plainly it makes the machine far more powerful. The quadratic increase of power with size may be the best explication of Corning's "synergy".
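The shift-and-add procedure described above can be made concrete. This toy routine is our illustration (done in decimal, like the example in the text, rather than in binary); it counts the additions and shifts that a computer without a hardware multiplier would perform.

```python
def shift_add_multiply(multiplicand, multiplier):
    """Multiply by repeated addition and shifting, in decimal.

    Returns (product, additions, shifts). Each decimal digit d of the
    multiplier costs d additions of the (shifted) multiplicand; each
    move to the next digit costs one shift.
    """
    product, additions, shifts = 0, 0, 0
    shifted = multiplicand
    digits = str(multiplier)
    for i, ch in enumerate(reversed(digits)):    # low-order digit first
        for _ in range(int(ch)):
            product += shifted
            additions += 1
        if i < len(digits) - 1:                  # shift for the next digit
            shifted *= 10
            shifts += 1
    return product, additions, shifts

# The text's example: multiplier 4725 costs 4 + 7 + 2 + 5 = 18
# additions and 3 shifts.
print(shift_add_multiply(1, 4725))               # → (4725, 18, 3)
```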

The components can be as small as technology can make them; a very large integrated circuit puts millions of components on a thumbnail-sized chip. But the components are logically independent, however small. The designer must account for each one, and for its place in the configuration. In brains, the unit that counts as a component may be a neuron or something much smaller (or larger). But whatever the component size, the brain with more components has by far the greater computing power -- until the plan of organization changes.

We have now arrived at our answer. More powerful computers can direct more extensive manipulations and analyze more powerful and more diverse channels of perception. Since the universe continues to reward increments of observational power with new kinds of knowledge, at least up to our present level and perhaps beyond, the more powerful computer gives the organism a wider range and depth of knowledge on which to act. It only remains to assert that the universe provides material rewards commensurate with the knowledge available to the claimant, and we are done. With every evolutionary step, the organism enhances its capacity to know the world and to claim the benefits on which survival depends. The quadratic law makes the enhancement worthwhile.

We happen to be looking at these principles in the most complicated instantiations, but they certainly apply to all living things. They apply, in the limit, to one-celled organisms. And they also apply, with an appropriate shift of the time scale, to plants, which must orient themselves to sources of light, water, and minerals. The temporal and spatial scales on which these principles operate vary enormously over the full range of living systems -- plant, animal, and cultural -- but the principles themselves are the same.

The Complex Universe

It is easy enough to assert that the universe is essentially complex, but what does that assertion mean? Biology is certainly accustomed to complexity. Biomolecules consist of many atoms arranged in complex configurations; organisms consist of complex arrangements of cells and tissues; ecosystems have complex pathways of dependency between organisms. These things, and more, are the complexity with which biology must deal. And yet such general examples have the wrong "feel"; they don't focus one's attention on what is essential. To use a metaphor, the complexity we have in mind is a complexity in the very fabric of the universe. That garments of complex design can be made of that fabric is interesting, but one can also make complex garments from simple fabrics. It is complexity in the fabric which we find essential.

We take as our touchstone the work of Ilya Prigogine, who won the Nobel prize for demonstrating that order can arise by accident (Prigogine and Stengers 1984; Prigogine 1980; Nicolis and Prigogine 1977). He showed that when certain kinds of thermodynamic systems get far from equilibrium, order can arise spontaneously. These systems include, but are not limited to, living systems. In general, so-called dissipative systems are such that small fluctuations can be amplified to the point where they change the behavior of the system. These systems have very large numbers of parts, and the spontaneous order they exhibit arises on the macroscopic temporal and spatial scales of the whole system rather than on the microscopic temporal and spatial scales of its very many component parts. Further, since these processes are irreversible, it follows that time is not simply an empty vessel in which things just happen. The passage of time, rather, is intrinsic to physical process.

We live in a world in which "evolutionary processes leading to diversification and increasing complexity" are intrinsic to the inanimate as well as the animate world (Nicolis and Prigogine 1977: 1; see also Prigogine and Stengers 1984: 297-298). That this complexity is a complexity inherent in the fabric of the universe is indicated in a passage where Prigogine (1980: xv) asserts "that living systems are far-from-equilibrium objects separated by instabilities from the world of equilibrium and that living organisms are necessarily 'large,' macroscopic objects requiring a coherent state of matter in order to produce the complex biomolecules that make the perpetuation of life possible." Here Prigogine asserts that organisms are macroscopic objects, implicitly contrasting them with microscopic objects.

Prigogine has noted that the twentieth-century introduction of physical constants such as the speed of light and Planck's constant has given an absolute magnitude to physical events (Prigogine and Stengers 1984: 217-218). If the world were entirely Newtonian, then a velocity of 400,000 kilometers per second would be essentially the same as a velocity of 200,000 kilometers per second. That is not the universe in which we live: the first exceeds the speed of light and is therefore impossible. Similarly, a Newtonian atom would be a miniature solar system; but a real atom is quite different from a miniature solar system.

Physical scale makes a difference. The physical laws which apply at the atomic scale, and smaller, are not the same as those which apply to relatively large objects. That the pattern of physical law should change with scale is a complexity inherent in the fabric of the universe, a complexity which does not exist in a Newtonian universe. At the molecular level life is subject to the quantum mechanical laws of the micro-universe. But multi-celled organisms are large enough that, considered as homogeneous physical bodies, they exist in the macroscopic world of Newtonian mechanics. Life thus straddles a complexity which inheres in the very structure of the universe.

Stochastic Drift and the Cost of Complexity

It might be objected that the apparent trend toward complexity reflects nothing more than three billion years of stochastic drift. This objection admits that there are later species which are more complex than earlier species, but denies that any particular causal forces are at work. Obviously there is a deep and irreducible stochastic element in evolution. But, we will argue, this accounts only for the mechanism by which increases in organismic complexity are "proposed". It does not explain why some of the "proposals" become accepted, and then thrive.

For each increase in complexity there is a cost. The benefit conferred by the complexity must cover the cost if that complexity is to become stabilized. Stochastic drift tells us nothing about how these costs are covered.

Let us take a closer look at this line of reasoning. Assume that we have a species which has a certain level of complexity. Over a sufficiently long period of time that species may well evolve into a diverse collection of species. But, we suggest, it is likely that most of the later species are neither more nor less complex than the ancestral species; they have evolved from it, but differ only in ways which do not affect the overall complexity of the species.

While it is logically possible that some of the later species will be simpler than the originating species, considerations of internal consistency in development can be invoked (e.g. Ayala 1972) to argue that this is relatively unlikely. The gene pool may well propose less complex species but, the argument goes, any loss of structure or function is likely to be so deleterious to the organism that such a species will not survive. As Bonner puts it (1988:66), "It is easier to add than to subtract". This argument tells us that, once an increase in complexity is accepted, it probably will not be lost. Bonner quotes the phrase "phylogenetic ratcheting" from Katz (1986). But this argument does not tell us anything about how an increase in complexity can be accepted.

That leaves us with the most interesting case, that in which the gene pool "proposes" more complex species. We do not, of course, argue that such proposals are on any but a random basis. However, we do suggest that such proposals are likely to be extremely rare, for, in the systems which Prigogine studies, the fluctuations which do result in the emergence of greater order are extremely rare. Further, the process by which these fluctuations are amplified to produce emergent order is a costly one. The emergent system requires more energy than its less complex predecessor. The argument we are making is that only a complex universe will provide organisms with an environment which rewards a species' investment in high-energy complexity. Stochastic processes are perfectly adequate for proposing occasional increases in complexity. But that is not enough. If the complexity is to be stabilized, then it must be paid for.
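The "propose and pay for" logic of the preceding paragraph can be caricatured in a toy simulation. Everything here is our illustration, not a model drawn from Prigogine or from population genetics: random variation occasionally proposes a complexity increment, and the increment is stabilized only when the environment's payoff at the new level covers its cost.

```python
import random

def evolve(steps, proposal_rate, payoff, cost, seed=0):
    """Toy model: random 'proposals' of added complexity are kept
    only when the environmental payoff covers the energetic cost."""
    rng = random.Random(seed)
    complexity = 1
    for _ in range(steps):
        if rng.random() < proposal_rate:              # rare random proposal
            candidate = complexity + 1
            if payoff(candidate) >= cost(candidate):  # must be paid for
                complexity = candidate
    return complexity

# A "complex" environment, whose payoff keeps growing with complexity,
# lets increments accumulate; a "simple" one, with a flat payoff,
# soon caps them.
rich = evolve(10_000, 0.01, payoff=lambda c: c * c, cost=lambda c: 2 * c)
poor = evolve(10_000, 0.01, payoff=lambda c: 5.0, cost=lambda c: 2 * c)
```

With the same random sequence of proposals, the "rich" run accepts every one, while the "poor" run stalls at complexity 2, where the flat payoff can no longer cover the cost.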

Consider an example, the evolution of endothermy in vertebrates. Endotherms are certainly more complex than ectotherms. (Bonner (1988:132-133) remarks that "No doubt fish have fewer cell types than mammals, but establishing this fact firmly might take a lifetime of sterile investigation.") They must have neural and hormonal mechanisms for regulating their metabolism so that body temperature can be maintained within narrow limits over fairly wide daily and seasonal changes in external temperature. Even such simple thermoregulative devices as fur and feathers represent an epidermal complexity which is not present in ectotherms.

The energetic cost of this complexity is quite substantial. As Bennett and Ruben (1979) point out, a "mammal or bird requires five to ten times more energy for its maintenance than a reptile or other vertebrate ectotherm of similar size and body temperature." They go on to argue that this energetic cost is paid for by the increased level of activity which endotherms can sustain. Endotherms can actively hunt and forage while many ectotherms just sit around waiting for a meal to happen along. Endothermic stamina is of obvious advantage in fleeing from predators, in territorial defense, and in pursuing a mate. These advantages, the argument goes, pay for the increased energy required for endothermy. These advantages all have to do with enabling an endothermic species more effectively to exploit its environment. The endotherm can take advantage of environmental complexity which is inaccessible to an ectotherm; it inhabits a more complex niche.

Finally, note that managing this complexity requires increased computational capacity. Temperature must be monitored and metabolism must be regulated so as to keep the temperature within range. Increased levels of behavioral complexity also require more subtle computation to support increased perceptual sophistication and more complex modes of action. There is thus a correlation between the level of computational complexity which the organism can support and the complexity of the niche which it can exploit.
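The regulatory loop just described, monitoring temperature and adjusting metabolism, is at bottom a feedback computation. The sketch below is purely our illustration (the set point, gain, and loss constant are invented): a proportional controller that raises metabolic heat production as body temperature falls below the set point.

```python
def regulate(t_body, t_ambient, t_set=37.0, gain=0.5, k_loss=0.1):
    """One time step of a toy proportional thermostat.

    Metabolic heat production rises with the shortfall below the set
    point; passive heat loss is proportional to the body-ambient gap.
    All constants are invented for illustration.
    """
    heat_in = max(0.0, gain * (t_set - t_body))   # "metabolic" heating
    heat_out = k_loss * (t_body - t_ambient)      # passive loss to air
    return t_body + heat_in - heat_out

# In a 5-degree environment the loop settles near (though, as with any
# purely proportional controller, somewhat below) the set point,
# holding the body far warmer than its surroundings.
t = 30.0
for _ in range(200):
    t = regulate(t, t_ambient=5.0)
```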

Thus we are skeptical that mere stochastic drift is able adequately to account for evolutionary developments toward complexity such as the emergence of endothermy. While such developments presuppose a stochastic drift in the operations of the gene pool, a drift which will inevitably propose incremental increases in complexity, these developments also require a universe which is so structured that the cost of increased complexity can be covered. Only a complex universe contains potential niches of unbounded complexity, niches which can cover the costs of complexity.

Conclusion: Beyond Simplicity

Our argument, then, goes like this: The universe is indefinitely intricate. A more highly differentiated and integrated organism is a more powerful sensor and computer. Hence elaboration yields more information without limit. Greater information pays off with:

(1) more effective search,
(2) more efficient extraction of energy and matter,
(3) more flexible response to vicissitudes.

This argument offers no new evidence, and no new physical principles or mechanisms. Rather, in the spirit of a thought experiment, it is an exercise in conceptual clarification. We have simply asserted that the answer to the question "Why does natural selection lead to complexity?" must be found in the fact that natural selection operates in a complex universe.

In thinking about this question it is useful to recall that, when it comes time to formulate fundamental explanations, we tend to think of complexity as superficial and illusory. Our intellectual tradition tells us that the real stuff is always hidden and simple. The long philosophical tradition of the West contains a bias in favor of simplicity -- simplicity in theories above all. But simple theories are only good enough for simple universes, or for superficial aspects of complex ones. From the sixteenth century through the nineteenth, science obtained analytic solutions for a great many superficial problems, and complacently assumed that it was dealing with the whole of nature. In the twentieth century, that complacency has vanished. The dynamics of fluids, of energy, and of life and the psyche can only be characterized by mapping discontinuities; analytic solutions are presently out of range. Structure arises at the discontinuities; the origins of structure are not to be found within the simplistic tradition.

Thus, we are asserting that we cannot understand evolution without giving up the inherent bias of our intellectual tradition. The classic quest for simplicity must give way to attempts to understand complexity; life can exist only far from equilibrium and in systems with a vast number of components. The intricacy of the universe before life appears is only implicit. Living things bring intricacy to the surface; later evolutionary steps take place in the context of life, and respond to its overt intricacy.


Ayala, F. J., 1972. The Autonomy of Biology as a Natural Science. In (A. D. Breck and W. Yourgrau, Eds.) Biology, History, and Natural Philosophy. New York: Plenum Publishing Corp., pp. 1-16.

Bennett, A. F. and J. A. Ruben, 1979. Endothermy and Activity in Vertebrates. Science 206, 649-654.

Bonner, John Tyler, 1988. The Evolution of Complexity by Means of Natural Selection. Princeton, N.J.: Princeton University Press.

Corning, Peter A., 1983. The Synergism Hypothesis: A Theory of Progressive Evolution. New York: McGraw-Hill Book Company.

Gibson, J. J., 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

Katz, M. J., 1986. "Is Evolution Random?" In (R. A. Raff and E. C. Raff, Eds.) Development as an Evolutionary Process. New York: Alan R. Liss.

Knight, K. E., and R. P. Cerveny, 1983. "Grosch's Law". In (Anthony Ralston and Edwin D. Reilly, Jr., Eds.) Encyclopedia of Computer Science and Engineering, 2d ed. New York: Van Nostrand Reinhold Company, p. 668. "... computing power increases as the square of the cost ... While Grosch never published his law, it became part of the oral tradition ..."

Nicolis, G., and Ilya Prigogine, 1977. Self-organization in Nonequilibrium Systems. New York: John Wiley & Sons.

Prigogine, Ilya, 1980. From Being to Becoming: Time and Complexity in the Physical Sciences. San Francisco: W. H. Freeman and Company.

Prigogine, Ilya, and Isabelle Stengers, 1984. Order out of Chaos. Boulder, Colorado: Shambhala Publications Inc.
