Emergence

Definition of terms and explanation of concepts

Rob

Emergence

Post #1

Post by Rob »

Clayton wrote:Emergence, some say, is merely a philosophical concept, unfit for scientific consumption. Or, others predict, when subjected to empirical testing it will turn out to be nothing more than shorthand for a whole batch of discrete phenomena involving novelty, which is, if you will, nothing novel. Perhaps science can study emergences, the critics continue, but not emergence as such. (Clayton 2004: 577)

It's too soon to tell. But certainly there is a place for those, such as the scientist to whom this volume is dedicated, who attempt to look ahead, trying to gauge what are Nature's broadest patterns and hence where present scientific resources can best be invested. John Archibald Wheeler formulated an important motif of emergence in 1989:

Directly opposite the concept of universe as machine built on law is the vision of a world self-synthesized. On this view, the notes struck out on a piano by the observer-participants of all places and all times, bits though they are, in and by themselves constituted the great wide world of space and time and things. (Wheeler 1999: 314)

Wheeler summarized his idea--the observer-participant who is both the result of an evolutionary process and, in some sense, the cause of his own emergence--in two ways: ... in the maxim "It from bit." .... The maxim expresses the bold question that gives rise to the emergentist research program: Does nature, in its matter and its laws, manifest an inbuilt tendency to bring about increasing complexity? Is there an apparently inevitable process of complexification that runs from the periodic table of the elements through the explosive variations of evolutionary history to the unpredictable progress of human cultural history, and perhaps even beyond? (Clayton 2004: 577)

The emergence hypothesis requires that we proceed through at least four stages. The first stage involves rather straightforward physics--say, the emergence of classical phenomena from the quantum world (Zurek 1991, 2002) or the emergence of chemical properties through molecular structure (Earley 1981). In a second stage we move from the obvious cases of emergence in evolutionary history toward what may be the biology of the future: a new, law-based "general biology" (Kauffman 2000) that will uncover the laws of emergence underlying natural history. Stage three of the research program involves the study of "products of the brain" (perception, cognition, awareness), which the program attempts to understand not as unfathomable mysteries but as emergent phenomena that arise as natural products of the complex interactions of brain and central nervous system. Some add a fourth stage to the program, one that is more metaphysical in nature: the suggestion that the ultimate results, or the original causes, of natural emergence transcend or lie beyond Nature as a whole. Those who view stage-four theories with suspicion should note that the present chapter does not appeal to or rely on metaphysical speculations of this sort in making its case. (Clayton 2004: 578-579)

Defining terms and assumptions

The basic concept of emergence is not complicated, even if the empirical details of emergent processes are. We turn to Wheeler, again, for an opening formulation:

When you put enough elementary units together, you get something that is more than the sum of these units. A substance made of a great number of molecules, for instance, has properties such as pressure and temperature that no one molecule possesses. It may be a solid or a liquid or a gas, although no single molecule is solid or liquid or gas. (Wheeler 1998: 341)

Or, in the words of biochemist Arthur Peacocke, emergence takes place when "new forms of matter, and a hierarchy of organization of these forms ... appear in the course of time" and "these new forms have new properties, behaviors, and networks of relations" that must be used to describe them (Peacocke 1993: 62).
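Wheeler's pressure-and-temperature example is easy to make concrete. Here is a toy sketch of my own (arbitrary units, nothing from Clayton's text): a "temperature" is computed as a statistical average over a large population of simulated molecules, and the aggregate quantity simply has no analogue at the level of a single molecule.

```python
import random

# Toy illustration of an aggregate-only property (my own sketch,
# arbitrary units): "temperature" as mean kinetic energy over many
# molecules. One molecule has a speed; only the ensemble has a
# temperature.

def sample_energies(n, seed=0):
    """Draw n kinetic-energy-like values for individual molecules."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]

energies = sample_energies(100_000)
temperature_like = sum(energies) / len(energies)

print(f"aggregate 'temperature' ~ {temperature_like:.3f}")
print(f"single molecule energy  ~ {energies[0]:.3f} (no temperature of its own)")
```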

Clearly, no one-size-fits-all theory of emergence will be adequate to the wide variety of emergent phenomena in the world. Consider the complex empirical differences that are reflected in these diverse senses of emergence:

• temporal or spatial emergence
• emergence in the progression from simple to complex
• emergence in increasingly complex levels of information processing
• the emergence of new properties (e.g., physical, biological, psychological)
• the emergence of new causal entities (atoms, molecules, cells, central nervous system)
• the emergence of new organizing principles or degrees of inner organization (feedback loops, autocatalysis, "autopoiesis")
• emergence in the development of "subjectivity" (if one can draw a ladder from perception, through awareness, self-awareness, and self-consciousness, to rational intuition).

Despite the diversity, certain parameters do constrain the scientific study of emergence:

1. Emergence studies will be scientific only if emergence can be explicated in terms that the relevant sciences can study, check, and incorporate into actual theories.

2. Explanations concerning such phenomena must thus be given in terms of the structures and functions of stuff in the world. As Christopher Southgate writes, "An emergent property is one describing a higher level of organization of matter, where the description is not epistemologically reducible to lower-level concepts" (Southgate et al. 1999: 158).

3. It also follows that all forms of dualism are disfavored. For example, only those research programs count as emergentist which refuse to accept an absolute break between neurophysiological properties and mental properties. "Substance dualisms," such as the Cartesian delineation of reality into "matter" and "mind," are generally avoided. Instead, research programs in emergence tend to combine sustained research into (in this case) the connections between brain and "mind," on the one hand, with the expectation that emergent mental phenomena will not be fully explainable in terms of underlying causes on the other.

4. By definition, emergence transcends any single scientific discipline. At a recent international consultation on emergence theory, each scientist was asked to define emergence, and each offered a definition of the term in his or her own specific field of inquiry: physicists made emergence a product of time-invariant natural laws; biologists presented emergence as a consequence of natural history; neuroscientists spoke primarily of "things that emerge from brains"; and engineers construed emergence in terms of new things that we can build or create. Each of these definitions contributes to, but none can be the sole source for, a genuinely comprehensive theory of emergence. (Clayton 2004: 579-580)

Physics to chemistry

(....) Things emerge in the development of complex physical systems that are understood by observation and cannot be derived from first principles, even given a complete knowledge of the antecedent states. One would not know about conductivity, for example, from a study of individual electrons alone; conductivity is a property that emerges only in complex solid state systems with huge numbers of electrons.... Such examples are convincing: physicists are familiar with a myriad of cases in which physical wholes cannot be predicted based on knowledge of their parts. Intuitions differ, though, on the significance of this unpredictability. (Clayton 2004: 580)

(....) [Such examples are] unpredictable even in principle -- if the system-as-a-whole is really more than the sum of its parts.

Simulated Evolutionary Systems

Computer simulations study the processes whereby very simple rules give rise to complex emergent properties. John Conway's program "Life," which simulates cellular automata, is already widely known.... Yet even in as simple a system as Conway's "Life," predicting the movement of larger structures in terms of the simple parts alone turns out to be extremely complex. Thus in the messy real world of biology, behaviors of complex systems quickly become noncomputable in practice.... As a result -- and, it now appears, necessarily -- scientists rely on explanations given in terms of the emerging structures and their causal powers. Dreams of a final reduction "downwards" are fundamentally impossible. Recycled lower-level descriptions cannot do justice to the actual emergent complexity of the natural world as it has evolved. (Clayton 2004: 582)
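Conway's rules are simple enough to state in a dozen lines. The sketch below is my own minimal implementation (not code from Clayton): each cell consults only its eight neighbors, yet a five-cell "glider" persists as a coherent object and travels diagonally, a structure nowhere mentioned in the rules themselves.

```python
from collections import Counter

def step(live, width=16, height=16):
    """One generation of Conway's Life on a toroidal grid."""
    counts = Counter()
    for x, y in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[((x + dx) % width, (y + dy) % height)] += 1
    # birth on exactly 3 live neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(8):              # two full glider periods
    cells = step(cells)
print(sorted(cells))            # the same shape, shifted two cells diagonally
```

Predicting where such structures go is easy for one glider but, as Clayton notes, rapidly becomes intractable for larger configurations.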

(....)

Ant colony behavior

Neural network models of emergent phenomena can model ... the emergence of ant colony behavior from simple behavioral "rules" that are genetically programmed into individual ants. (....) Even if the behavior of an ant colony were nothing more than an aggregate of the behaviors of the individual ants, whose behavior follows very simple rules,[2] the result would be remarkable, for the behavior of the ant colony as a whole is extremely complex and highly adaptive to complex changes in its ecosystem. The complex adaptive potentials of the ant colony as a whole are emergent features of the aggregated system. The scientific task is to correctly describe and comprehend such emergent phenomena where the whole is more than the sum of the parts. (Clayton 2004: 586-587)
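The point can be caricatured in code. In the sketch below (my own illustration with invented parameters, not Clayton's model), each ant follows a single rule, choose a branch with probability proportional to its pheromone level, yet the colony as a whole reliably converges on the shorter route, because shorter round trips lay down pheromone at a higher rate.

```python
import random

random.seed(1)
pheromone = {"short": 1.0, "long": 1.0}
trip_time = {"short": 1, "long": 2}        # the long branch takes twice as long

for _ in range(2000):
    total = pheromone["short"] + pheromone["long"]
    branch = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[branch] += 1.0 / trip_time[branch]   # slower trips reinforce less
    for b in pheromone:
        pheromone[b] *= 0.999                      # evaporation

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on the short branch: {share:.2f}")  # approaches 1.0
```

No individual ant compares the two branches; the comparison is performed by the colony-level feedback loop.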

Biochemistry

So far we have considered models of how nature could build highly complex and adaptive behaviors from relatively simple processing rules. Now we must consider actual cases in which significant order emerges out of (relative) chaos. The big question is how nature obtains order "out of nothing," that is, when the order is not present in the initial conditions but is produced in the course of a system's evolution.[3] What are some of the mechanisms that nature in fact uses? We consider four examples. (Clayton 2004: 587)

Fluid convection

The Bénard instability is often cited as an example of a system far from thermodynamic equilibrium, where a stationary state becomes unstable and then manifests spontaneous organization (Peacocke 1994: 153). In the Bénard case, the lower surface of a horizontal layer of liquid is heated. This produces a heat flux from the bottom to the top of the liquid. When the temperature gradient reaches a certain threshold value, conduction no longer suffices to convey the heat upward. At that point convection cells form at right angles to the vertical heat flow. The liquid spontaneously organizes itself into these hexagonal structures or cells. (Clayton 2004: 587-588)

Differential equations describing the heat flow exhibit a bifurcation of the solutions. This bifurcation represents the spontaneous self-organization of large numbers of molecules, formerly in random motion, into convection cells. This represents a particularly clear case of the spontaneous appearance of order in a system. According to the emergence hypothesis, many cases of emergent order in biology are analogous. (Clayton 2004: 588)
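The bifurcation can be exhibited with the Lorenz equations, the well-known low-order truncation of the Rayleigh-Bénard convection problem. In the rough sketch below (my own, crude Euler integration with the standard parameter values), the conduction state at the origin is stable for r < 1 and loses stability above it, where new "convection roll" solutions appear.

```python
def lorenz_endpoint(r, steps=100_000, dt=1e-3, sigma=10.0, b=8.0 / 3.0):
    """Integrate the Lorenz system from a small perturbation; return the final state."""
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = r * x - y - x * z
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return round(x, 3), round(y, 3), round(z, 3)

print(lorenz_endpoint(0.5))   # decays to (0, 0, 0): pure conduction
print(lorenz_endpoint(5.0))   # settles on a nonzero fixed point: convection
```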

Autocatalysis in biochemical metabolism

Autocatalytic processes play a role in some of the most fundamental examples of emergence in the biosphere. These are relatively simple chemical processes with catalytic steps, yet they well express the thermodynamics of the far-from-equilibrium chemical processes that lie at the base of biology. (....) Such loops play an important role in metabolic functions. (Clayton 2004: 588)
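The signature of autocatalysis, a reaction whose product accelerates its own formation, is a sigmoid concentration curve: a long induction period, explosive growth, then saturation as the substrate is consumed. A minimal sketch of my own (invented rate constants) for the step A + X -> 2X:

```python
def autocatalysis(a0=1.0, x0=1e-6, k=5.0, dt=0.01, steps=1200):
    """Euler-integrate dX/dt = k*A*X with substrate depletion."""
    a, x = a0, x0
    trace = []
    for i in range(steps):
        rate = k * a * x            # the product X catalyzes its own formation
        a, x = a - dt * rate, x + dt * rate
        if i % 100 == 0:
            trace.append((round(i * dt, 1), round(x, 6)))
    return trace

for t, x in autocatalysis():
    print(t, x)   # slow induction, then explosive growth, then saturation
```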

Belousov-Zhabotinsky reactions

The role of emergence becomes clearer as one considers more complex examples. Consider the famous Belousov-Zhabotinsky reaction (Prigogine 1984: 152). This reaction consists of the oxidation of an organic acid (malonic acid) by potassium bromate in the presence of a catalyst such as cerium, manganese, or ferroin. From the four inputs into the chemical reactor more than 30 products and intermediaries are produced. The Belousov-Zhabotinsky reaction provides an example of a biochemical process where a high level of disorder settles into a patterned state. (Clayton 2004: 589)
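The full BZ mechanism involves dozens of species, but Prigogine's own two-variable caricature of such oscillators, the "Brusselator," shows the qualitative behavior: a transient settling into a sustained, patterned oscillation (a limit cycle). A sketch of my own, with the standard textbook parameters:

```python
def brusselator(a=1.0, b=3.0, x=1.2, y=3.1, dt=1e-3, steps=30_000):
    """Euler-integrate the Brusselator; sample the X concentration over time."""
    trace = []
    for i in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x, y = x + dt * dx, y + dt * dy
        if i % 2000 == 0:
            trace.append(round(x, 3))
    return trace

# For b > 1 + a**2 the steady state is unstable and X rises and falls
# periodically: a stable rhythm out of a featureless mixture.
print(brusselator())
```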

(....) Put into philosophical terms, the data suggest that emergence is not merely epistemological but can also be ontological in nature. That is, it's not just that we can't predict emergent behaviors in these systems from a complete knowledge of the structures and energies of the parts. Instead, studying the systems suggests that structural features of the system -- which are emergent features of the system as such and not properties pertaining to any of its parts -- determine the overall state of the system, and hence the behavior of individual particles within the system. (Clayton 2004: 589-590)

The role of emergent features of systems is increasingly evident as one moves from the very simple systems so far considered to the sorts of systems one actually encounters in the biosphere. (Clayton 2004: 590)

(....)

The biochemistry of cell aggregation and differentiation

We move finally to processes where a random behavior or fluctuation gives rise to organized behavior between cells based on self-organization mechanisms. Consider the process of cell aggregation and differentiation in cellular slime molds (specifically, in Dictyostelium discoideum). The slime mold cycle begins when the environment becomes poor in nutrients and a population of isolated cells joins into a single mass on the order of 10^4 cells (Prigogine 1984: 156). The aggregate migrates until it finds a higher nutrient source. Differentiation then occurs: a stalk or “foot” forms out of about one-third of the cells and is soon covered with spores. The spores detach and spread, growing when they encounter suitable nutrients and eventually forming a new colony of amoebas. (Clayton 2004: 589-591) [See Levinton 2001: 166]

Note that this aggregation process is randomly initiated. Autocatalysis begins in a random cell within the colony, which then becomes the attractor center. It begins to produce cyclic adenosine monophosphate (cAMP). As cAMP is released in greater quantities into the extracellular medium, it catalyzes the same reaction in the other cells, amplifying the fluctuation and total output. Cells then move up the gradient to the source cell, and other cells in turn follow their cAMP trail toward the attractor center. (Clayton 2004: 589-591) (....)
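A cartoon of the aggregation step (my own sketch with invented parameters, not a quantitative model): one randomly chosen cell becomes the attractor center, and every other cell obeys a single local rule, step up the local cAMP gradient. The population-level aggregation is the emergent result; no cell has a plan.

```python
import random

random.seed(2)
positions = [random.uniform(0.0, 100.0) for _ in range(50)]
center = random.choice(positions)     # autocatalysis begins in a random cell

def camp(x):
    """cAMP concentration: peaked at the attractor center, decaying with distance."""
    return 1.0 / (1.0 + abs(x - center))

for _ in range(400):
    # each cell takes a half-unit step toward higher cAMP
    positions = [x + (0.5 if camp(x + 0.5) > camp(x - 0.5) else -0.5)
                 for x in positions]

spread = max(positions) - min(positions)
print(f"population spread: {spread:.1f}")   # collapses to about one step width
```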

Biology

Ilya Prigogine did not follow the notion of "order out of chaos" up through the entire ladder of biological evolution. Stuart Kauffman (1995, 2000) and others (Gell-Mann 1994; Goodwin 2001; see also Cowan et al. 1994 and other works in the same series) have, however, recently traced the role of the same principles in living systems. Biological processes in general are the result of systems that create and maintain order (stasis) through massive energy input from their environment. In principle these types of processes could be the object of what Kauffman envisions as "a new general biology," based on sets of still-to-be-determined laws of emergent ordering or self-complexification. Like the biosphere itself, these laws (if they indeed exist) are emergent: they depend on the underlying physical and chemical regularities but are not reducible to them. Kauffman (2000: 35) writes: (Clayton 2004: 592)

I wish to say that life is an expected, emergent property of complex chemical reaction networks. Under rather general conditions, as the diversity of molecular species in a reaction system increases, a phase transition is crossed beyond which the formation of collectively autocatalytic sets of molecules suddenly becomes almost inevitable [emphasis added]. (Clayton 2004: 593)

Until a science has been developed that formulates and tests physics-like laws at the level of biology [evo-devo is the closest we have so far come], the "new general biology" remains an as-yet-unverified, though intriguing, hypothesis. Nevertheless, recent biology, driven by the genetic revolution on the one side and by the growth of the environmental sciences on the other, has made explosive advances in understanding the role of self-organizing complexity in the biosphere. Four factors in particular play a central role in biological emergence. (Clayton 2004: 593)

The role of scaling

As one moves up the ladder of complexity, macrostructures and macromechanisms emerge. In the formation of new structures, scale matters -- or, better put, changes in scale matter. Nature continually evolves new structures and mechanisms as life forms move up the scale from molecules (c. 1 Ångstrom) to neurons (c. 100 micrometers) to the human central nervous system (c. 1 meter). As new structures are developed, new whole-part relations emerge. (Clayton 2004: 593)

John Holland argues that different sciences in the hierarchy of emergent complexity occur at jumps of roughly three orders of magnitude in scale. By the point at which systems have become too complex for predictions to be calculated, one is forced to “move the description ‘up a level’” (Holland 1998: 201). The “microlaws” still constrain outcomes, of course, but additional basic descriptive units must also be added. This pattern of introducing new explanatory levels iterates in a periodic fashion as one moves up the ladder of increasing complexity. To recognize the pattern is to make emergence an explicit feature of biological research. As of now, however, science possesses only a preliminary understanding of the principles underlying this periodicity. (Clayton 2004: 593)

The role of feedback loops

The role of feedback loops, examined above for biochemical processes, becomes increasingly important from the cellular level upwards. (....) (Clayton 2004: 593)

The role of local-global interactions

In complex dynamical systems the interlocked feedback loops can produce an emergent global structure. (....) In these cases, “the global property -- [the] emergent behavior -- feeds back to influence the behavior of the individuals … that produced it” (Lewin 1999). The global structure may have properties the local particles do not have. (Clayton 2004: 594)

(....) In contrast …, Kauffman insists that an ecosystem is in one sense “merely” a complex web of interactions. Yet consider a typical ecosystem of organisms of the sort that Kauffman (2000: 191) analyzes ... Depending on one’s research interests, one can focus attention either on holistic features of such systems or on the interactions of the components within them. Thus Langton’s term “global” draws attention to system-level features and properties, whereas Kauffman’s “merely” emphasizes that no mysterious outside forces need to be introduced (such as, e.g., Rupert Sheldrake’s (1995) "morphic resonance"). Since the two dimensions are complementary, neither alone is scientifically adequate; the explosive complexity manifested in the evolutionary process involves the interplay of both systemic features and component interactions. (Clayton 2004: 595)

The role of nested hierarchies

A final layer of complexity is added in cases where the local-global structure forms a nested hierarchy. Such hierarchies are often represented using nested circles. Nesting is one of the basic forms of combinatorial explosion. Such forms appear extensively in natural biological systems (Wolfram 2002: 357ff.; see his index for dozens of further examples of nesting). Organisms achieve greater structural complexity, and hence increased chances of survival, as they incorporate discrete subsystems. Similarly, ecosystems complex enough to contain a number of discrete subsystems evidence greater plasticity in responding to destabilizing factors. (Clayton 2004: 595-596)

"Strong" versus "weak" emergence

The resulting interactions between parts and wholes mirror yet exceed the features of emergence that we observed in chemical processes. To the extent that the evolution of organisms and ecosystems evidences a "combinatorial explosion" (Morowitz 2002) based on factors such as the four just summarized, the hope of explaining entire living systems in terms of simple laws appears quixotic. Instead, natural systems made of interacting complex systems form a multileveled network of interdependency (cf. Gregersen 2003), and each level contributes distinct elements to the overall explanation. (Clayton 2004: 596-597)

Systems biology, the Siamese twin of genetics, has established many of the features of life’s “complexity pyramid” (Oltvai and Barabási 2002; cf. Barabási 2002). Construing cells as networks of genes and proteins, systems biologists distinguish four distinct levels: (1) the base functional organization (genome, transcriptome, proteome, and metabolome) [see below, Morowitz on the “dogma of molecular biology.”]; (2) the metabolic pathways built up out of these components; (3) larger functional modules responsible for major cell functions; and (4) the large-scale organization that arises from the nesting of the functional modules. Oltvai and Barabási (2002) conclude that “[the] integration of different organizational levels increasingly forces us to view cellular functions as distributed among groups of heterogeneous components that all interact within large networks.” Milo et al. (2002) have recently shown that a common set of “network motifs” occurs in complex networks in fields as diverse as biochemistry, neurobiology, and ecology. As they note, “similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans.” (Clayton 2004: 598)
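The best-known of Milo et al.'s motifs is the feed-forward loop: A regulates B, B regulates C, and A also regulates C directly. Counting such motifs is straightforward; the sketch below (my own, over an invented toy network, not data from the paper) finds every feed-forward loop in a small directed graph.

```python
from itertools import permutations

# A hypothetical toy regulatory network (edges invented for illustration).
edges = {("geneA", "geneB"), ("geneB", "geneC"), ("geneA", "geneC"),
         ("geneC", "geneD"), ("geneB", "geneD"), ("geneA", "geneD")}
nodes = {n for edge in edges for n in edge}

# A feed-forward loop is an ordered triple (a, b, c) with a->b, b->c, a->c.
ffls = [(a, b, c) for a, b, c in permutations(nodes, 3)
        if (a, b) in edges and (b, c) in edges and (a, c) in edges]

print(len(ffls), "feed-forward loops found")   # 4 in this toy graph
```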

Such compounding of complexity -- the system-level features of networks, the nodes of which are themselves complex systems -- is sometimes said to represent only a quantitative increase in complexity, in which nothing “really new” emerges. This view I have elsewhere labeled “weak emergence.” [This would be a form of philosophical materialism qua philosophical reductionism.] It is the view held by (among others) John Holland (1998) and Stephen Wolfram (2002). But, as Leon Kass (1999: 62) notes in the context of evolutionary biology, “it never occurred to Darwin that certain differences of degree -- produced naturally, accumulated gradually (even incrementally), and inherited in an unbroken line of descent -- might lead to a difference in kind …” Here Kass nicely formulates the principle involved. As long as nature’s process of compounding complex systems leads to irreducibly complex systems with structures and causal mechanisms of their own, then the natural world evidences not just weak emergence but also a more substantive change that we might label strong emergence. Cases of strong emergence are cases where the “downward causation” emphasized by George Ellis [see p. 607, True complexity and its associated ontology.] … is most in evidence. By contrast, in the relatively rare cases where rules relate the emergent system to its subvening system (in simulated systems, via algorithms; in natural systems, via “bridge laws”), a weak-emergence interpretation suffices. In the majority of cases, however, such rules are not available; in these cases, especially where we have reason to think that such lower-level rules are impossible in principle, the strong emergence interpretation is suggested. (Clayton 2004: 597-598)

Neuroscience, qualia, and consciousness

Consciousness, many feel, is the most important instance of a clearly strong form of emergence. Here if anywhere, it seems, nature has produced something irreducible -- no matter how strong the biological dependence of mental qualia (i.e., subjective experiences) on antecedent states of the central nervous system may be. To know everything there is to know about the progression of brain states is not to know what it’s like to be you, to experience your joy, your pain, or your insights. No human researcher can know, as Thomas Nagel (1980) so famously argued, "what it's like to be a bat." (Clayton 2004: 598)

Unfortunately consciousness, however intimately familiar we may be with it on a personal level, remains an almost total mystery from a scientific perspective. Indeed, as Jerry Fodor (1992) noted, "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness." (Clayton 2004: 598)

Given our lack of comprehension of the transition from brain states to consciousness, there is virtually no way to talk about the "C" word without sliding into the domain of philosophy. The slide begins if the emergence of consciousness is qualitatively different from other emergences; in fact, it begins even if consciousness is different from the neural correlates of consciousness. Much suggests that both differences obtain. How far can neuroscience go, even in principle, in explaining consciousness? (Clayton 2004: 598-599)

Science’s most powerful ally, I suggest, is emergence. As we’ve seen, emergence allows one to acknowledge the undeniable differences between mental properties and physical properties, while still insisting on the dependence of the entire mental life on the brain states that produce it. Consciousness, the thing to be explained, is different because it represents a new level of emergence; but brain states -- understood both globally (as the state of the brain as a whole) and in terms of their microcomponents -- are consciousness’s sine qua non. The emergentist framework allows science to identify the strongest possible analogies with complex systems elsewhere in the biosphere. So, for example, other complex adaptive systems also “learn,” as long as one defines learning as “a combination of exploration of the environment and improvement of performance through adaptive change” (Schuster 1994). Obviously, systems from primitive organisms to primate brains record information from their environment and use it to adjust future responses to that environment. (Clayton 2004: 599)
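Schuster's definition is broad enough to fit even a minimal adaptive agent. The sketch below (my own toy example, nothing from Clayton) explores its environment at random a tenth of the time, otherwise repeats whatever has paid off best so far, and improves its performance with experience, which is all the definition requires.

```python
import random

random.seed(3)
payoff = {"A": 0.2, "B": 0.8}          # hidden reward rates in the environment
estimate = {"A": 0.0, "B": 0.0}
count = {"A": 0, "B": 0}
rewards = []

for trial in range(2000):
    if random.random() < 0.1 or trial == 0:       # exploration
        choice = random.choice(["A", "B"])
    else:                                          # exploit the best estimate
        choice = max(estimate, key=estimate.get)
    r = 1.0 if random.random() < payoff[choice] else 0.0
    count[choice] += 1
    estimate[choice] += (r - estimate[choice]) / count[choice]   # running mean
    rewards.append(r)

print(f"hit rate, first 200 trials: {sum(rewards[:200]) / 200:.2f}")
print(f"hit rate, last 200 trials:  {sum(rewards[-200:]) / 200:.2f}")  # typically higher
```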

Even the representation of visual images in the brain, a classically mental phenomenon, can be parsed in this way. Consider Max Velmans’s (2000) schema … Here a cat-in-the-world and the neural representation of the cat are both parts of a natural system; no nonscientific mental “things” like ideas or forms are introduced. In principle, then, representation might be construed as merely a more complicated version of the feedback loop between a plant and its environment … Such is the “natural account of phenomenal consciousness” defended by (e.g.) LeDoux (1978). In a physicalist account of mind, no mental causes are introduced. Without emergence, the story of consciousness must be retold such that thoughts and intentions play no causal role. … If one limits the causal interactions to world and brains, mind must appear as a sort of thought-bubble outside the system. Yet it is counter to our empirical experience in the world, to say the least, to leave no causal role to thoughts and intentions. For example, it certainly seems that your intention to read this … is causally related to the physical fact of your presently holding this book [or browsing this web page, etc.,] in your hands. (Clayton 2004: 599-600)

Arguments such as this force one to acknowledge the disanalogies between the emergence of consciousness and previous examples of emergence in complex systems. Consciousness confronts us with a “hard problem” different from those already considered (Chalmers 1995: 201):

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

The distinct features of human cognition, it seems, depend on a quantitative increase in brain complexity vis-à-vis other higher primates. Yet, if Chalmers is right (as I fear he is), this particular quantitative increase gives rise to a qualitative change. Even if the development of conscious awareness occurs gradually over the course of primate evolution, the (present) end of that process confronts the scientist with conscious, symbol-using beings clearly distinct from those who preceded them (Deacon 1997). Understanding consciousness even as an emergent phenomenon in the natural world -- that is, naturalistically -- requires a theory of “felt qualities,” “subjective intentions,” and “states of experience” -- intention-based explanations and, it appears, a new set of sciences: the social or human sciences. By this point emergence has driven us to a level beyond the natural-science-based framework of the present book. New concepts, new testing mechanisms, and perhaps even new standards for knowledge are now required. From the perspective of physics the trail disappears into the clouds; we can follow it no further. (Clayton 2004: 600-601)

The five emergences

In the broader discussion the term “emergence” is used in multiple senses, some of which are incompatible with the scientific project. Clarity is required to avoid equivocation between five distinct levels on which the term may be applied: (Clayton 2004: 601)

Let emergence-1 refer to occurrences of the term within the context of a specific scientific theory. Here it describes features of a specified physical or biological system of which we have some scientific understanding. Scientists who employ these theories claim that the term (in a theory-specific sense) is currently useful for describing features of the natural world. The preceding pages include various examples of theories in which this term occurs. At the level of emergence-1 alone there is no way to establish whether the term is used analogously across theories, or whether it really means something utterly distinct in each theory in which it appears. (Clayton 2004: 601-602)

Emergence-2 draws attention to features of the world that may eventually become part of a unified scientific theory. Emergence in this sense expresses postulated connections or laws that may in the future become the basis for one or more branches of science. One thinks, for example, of the role of emergence in Stuart Kauffman’s notion of a new “general biology,” or in certain proposed theories of complexity or complexification. (Clayton 2004: 602)

Emergence-3 is a meta-scientific term that points out a broad pattern across scientific theories. Used in this sense, the term is not drawn from a particular scientific theory; it is an observation about a significant pattern that connects a range of scientific theories. In the preceding pages I have often employed the term in this fashion. My purpose has been to draw attention to common features of the physical systems under discussion, as in (e.g.) the phenomena of autocatalysis, complexity, and self-organization. Each is scientifically understood, each shares common features that are significant. Emergence draws attention to these features, whether or not the individual theories actually use the same label for the phenomena they describe. (Clayton 2004: 602)

Emergence-3 thus serves a heuristic function. It assists in the recognition of common features between theories. Recognizing such patterns can help to extend existing theories, to formulate insightful new hypotheses, or to launch new interdisciplinary research programs.[4] (Clayton 2004: 602)

Emergence-4 expresses a feature in the movement between scientific disciplines, including some of the most controversial transition points. Current scientific work is being done, for example, to understand how chemical structures are formed, to reconstruct the biochemical dynamics underlying the origins of life, and to conceive how complicated neural processes produce cognitive phenomena such as memory, language, rationality, and creativity. Each involves efforts to understand diverse phenomena involving levels of self-organization within the natural world. Emergence-4 attempts to express what might be shared in common by these (and other) transition points. (Clayton 2004: 602)

Here, however, a clear limitation arises. A scientific theory that explains how chemical structures are formed is perhaps unlikely to explain the origins of life. Neither theory will explain how self-organizing neural nets encode memories. Thus emergence-4 stands closer to the philosophy of science than it does to actual scientific theory. Nonetheless, it is the sort of philosophy of science that should be helpful to scientists.[5] (Clayton 2004: 602)

Emergence-5 is a metaphysical theory. It represents the view that the nature of the natural world is such that it produces continually more complex realities in a process of ongoing creativity. The present chapter does not comment on such metaphysical claims about emergence.[6] (Clayton 2004: 603)

Conclusion

(....) Since emergence is used as an integrative ordering concept across scientific fields ... it remains, at least in part, a meta-scientific term. (Clayton 2004: 603)

Does the idea of distinct levels then conflict with “standard reductionist science?” No, one can believe that there are levels in Nature and corresponding levels of explanation while at the same time working to explain any given set of higher-order phenomena in terms of underlying laws and systems. In fact, isn’t the first task of science to whittle away at every apparent “break” in Nature, to make it smaller, to eliminate it if possible? Thus, for example, to study the visual perceptual system scientifically is to attempt to explain it fully in terms of the neural structures and electrochemical processes that produce it. The degree to which downward explanation is possible will be determined by long-term empirical research. At present we can only wager on the one outcome or the other based on the evidence before us. (Clayton 2004: 603)

Notes:

[2] Gordon (2000) disputes this claim: "One lesson from ants is that to understand a system like theirs, it is not sufficient to take the system apart. The behavior of each unit is not encapsulated inside that unit but comes from its connections with the rest of the system." I likewise break strongly with the aggregate model of emergence.

[3] Generally this seems to be a question that makes physicists uncomfortable ("Why, that's impossible, of course!"), whereas biologists tend to recognize in it one of the core mysteries in the evolution of living systems.

[4] For this reason, emergence-3 stands closer to the philosophy of science than do the previous two senses. Yet it is a kind of philosophy of science that stands rather close to actual science and that seeks to be helpful to it. [The goal of all true "philosophy of science" is to seek critical clarification of ideas, concepts, and theoretical formulations; hence to be "helpful" to science and the quest for human knowledge.] By way of analogy one thinks of the work of philosophers of quantum physics such as Jeremy Butterfield or James Cushing, whose work can be and has actually been helpful to bench physicists. One thinks as well of the analogous work of certain philosophers in astrophysics (John Barrow) or in evolutionary biology (David Hull, Michael Ruse).

[5] This as opposed, for example, to the kind of philosophy of science currently popular in English departments and in journals like Critical Inquiry -- the kind of philosophy of science that asserts that science is a text that needs to be deconstructed, or that science and literature are equally subjective, or that the worldview of Native Americans should be taught in science classes.

-- Clayton, Philip D. Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul C. W. Davies, and Charles L. Harper, Jr., eds.). Cambridge: Cambridge University Press; 2004; pp. 577-606.
Morowitz wrote:The "dogma of molecular biology" makes the genome the primary construct and moves from genome to proteome to metabolome to physiome to phenome. The view outlined here indicates that the primary laws relate to phenotype and that the epistemic direction is the reverse of that outlined in the dogma. This would be a major paradigm shift and would lead to more effort on the hierarchy of phenotypic laws.

[6] I note only that such extrapolations are neither excluded by good science nor damaging to it -- as long as one avoids confusion on which of the five emergences one intends to refer to. Indeed, good reasons are sometimes given to engage in metaphysical speculation based on scientific results, and it is possible that emergence will turn out to be one of these cases.

-- Morowitz, Harold J. et al. The Robustness of Intermediary Metabolism. In Microbial Phylogeny and Evolution: Concepts and Controversies (Jan Sapp, ed.). Oxford: Oxford University Press; 2005; p. 159.

Rob

Emergent Order: True Complexity vs. Trivial Complexity

Post #2

Post by Rob »

Ellis wrote:True complexity and the nature of existence

My concern … is true complexity and its relation to physics. This is to be distinguished from what is covered by statistical physics, catastrophe theory, study of sand piles, the reaction diffusion equation, cellular automata such as “The Game of Life,” and chaos theory. Examples of truly complex systems are molecular biology, animal and human brains, language and symbolic systems, individual human behavior, social and economic systems, digital computer systems, and the biosphere. This complexity is made possible by the existence of molecular structures that allow complex biomolecules such as RNA, DNA, and proteins with their folding properties and lock-and-key recognition mechanisms, in turn underlying membranes, cells (including neurons), and indeed the entire bodily fabric and nervous system. (Ellis 2004: 607)

True complexity involves vast quantities of stored information and hierarchically organized structures that process information in a purposeful manner, particularly through implementation of goal-seeking feedback loops. Through this structure they appear purposeful in their behavior (“teleonomic”). This is what we must look at when we start to extend physical thought to the boundaries, and particularly when we try to draw philosophical conclusions -- for example, as regards the nature of existence -- from our understanding of the way physics underlies reality. Given this complex structuring, one can ask, “What is real?”, that is, “What actually exists?”, and “What kinds of causality can occur in these structures?” (Ellis 2004: 607)

(….)

Not only are complex systems hierarchic, but the levels of this hierarchy represent different levels of abstraction, each built upon the other, and each understandable by itself (and each characterized by a different phenomenology). This is the phenomenon of emergent order. All parts at the same level of abstraction interact in a well-defined way (which is why they have a reality at their own level, each represented in a different language describing and characterizing the causal patterns at work at that level). (Ellis 2004: 612)

We find separate parts that act as independent agents, each of which exhibits some fairly complex behavior, and each of which contributes to many higher-level functions. Only through the mutual co-operation of meaningful collections of these agents do we see the higher-level functionality of an organism. This is emergent behavior: the behavior of the whole is greater than the sum of its parts, and cannot even be described in terms of inter-component linkages. This fact has the effect of separating the high-frequency dynamics of the components -- involving their internal structure -- from the low-frequency dynamics -- involving interactions amongst components. (Simon 1982.)

(....) In a hierarchy, through encapsulation, objects at one level of abstraction are shielded from implementation details of lower levels of abstraction.

-- Ellis, George F. R. True complexity and its associated ontology. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.). Cambridge: Cambridge University Press; 2004; p. 612.

Rob

Non-algorithmic Nature of Mathematical Insight

Post #3

Post by Rob »

Penrose wrote:A scientific world-view which does not profoundly come to terms with the problem of conscious minds can have no serious pretensions of completeness. Consciousness is part of our universe, so any physical theory which makes no proper place for it falls fundamentally short of providing a genuine description of the world. I would maintain that there is yet no physical, biological, or computational theory that comes very close to explaining our consciousness and consequent intelligence; but that should not deter us from striving to search for one.

-- Penrose, Roger. Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press; 1994; p. 8.
Penrose wrote:Natural selection of algorithms?

If we suppose that the action of the human brain, conscious or otherwise, is merely the acting out of some very complicated algorithm, then we must ask how such an extraordinarily effective algorithm actually came about. The standard answer, of course, would be 'natural selection'. As creatures with brains evolved, those with more effective algorithms would have a better tendency to survive and therefore, on the whole, had more progeny. These progeny also tended to carry more effective algorithms than their cousins, since they inherited the ingredients of these better algorithms from their parents; so gradually the algorithms improved -- not necessarily steadily, since there could have been considerable fits and starts in their evolution -- until they reached the remarkable status that we (would apparently) find in the human brain. (Compare Dawkins 1986). (Penrose 1990: 414)

Even according to my own viewpoint, there would have to be some truth in this picture, since I envisage that much of the brain's action is indeed algorithmic, and -- as the reader will have inferred from the above discussion -- I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. (Penrose 1990: 414)

Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to. (Actually, most complicated computer programs contain errors -- usually minor, but often subtle ones that do not come to light except under unusual circumstances. The presence of such errors does not substantially affect my argument.) Sometimes a computer program might itself have been 'written' by another, say a 'master' computer program, but then the master program itself would have been the product of human ingenuity and insight; or the program itself might well be pieced together from ingredients some of which were the products of other computer programs. But in all cases the validity and the very conception of the program would have ultimately been the responsibility of (at least) one human consciousness. (Penrose 1990: 414)

One can imagine, of course, that this need not have been the case, and that, given enough time, the computer programs might somehow have evolved spontaneously by some process of natural selection. If one believes that the actions of the computer programmers' consciousness are themselves simply algorithms, then one must, in effect, believe algorithms have evolved in just this way. However, what worries me about this is that the decision as to the validity of an algorithm is not itself an algorithmic process! ... (The question of whether or not a Turing machine will actually stop is not something that can be decided algorithmically.) In order to decide whether or not an algorithm will actually work, one needs insights, not just another algorithm. (Penrose 1990: 414-415)

Nevertheless, one still might imagine some kind of natural selection process being effective for producing approximately valid algorithms. Personally, I find this very difficult to believe, however. Any selection process of this kind could act only on the output of the algorithms and not directly on the ideas underlying the actions of the algorithms. This is not simply extremely inefficient; I believe that it would be totally unworkable. In the first place, it is not easy to ascertain what an algorithm actually is, simply by examining its output. (It would be an easy matter to construct two quite different simple Turing machine actions for which the output tapes did not differ until, say, the 2^65536th place -- and this difference could never be spotted in the entire history of the universe!) Moreover, the slightest 'mutation' of an algorithm (say a slight change in a Turing machine specification, or in its input tape) would tend to render it totally useless, and it is hard to see how actual improvements in algorithms could ever arise in this random way. (Even deliberate improvements are difficult without 'meanings' being available. Suppose that an inadequately documented and complicated computer program needs to be altered or corrected, and that the original programmer has departed or perhaps died. Rather than try to disentangle all the various meanings and intentions that the program implicitly depended upon, it is probably easier just to scrap it and start all over again!) (Penrose 1990: 415)

Perhaps some much more 'robust' way of specifying algorithms could be devised, which would not be subject to the above criticisms. In a way, this is what I am saying myself. The 'robust' specifications are the ideas that underlie the algorithms. But ideas are things that, as far as we know, need conscious minds for their manifestation. We are back with the problem of what consciousness actually is, and what it can actually do that unconscious objects are incapable of -- and how on earth natural selection has been clever enough to evolve that most remarkable of qualities. (Penrose 1990: 415)

(....) To my way of thinking, there is still something mysterious about evolution, with its apparent 'groping' towards some future purpose. Things at least seem to organize themselves somewhat better than they 'ought' to, just on the basis of blind-chance evolution and natural selection.... There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws. The resulting apparently 'intelligent groping' is an interesting issue. (Penrose 1990: 416)

The non-algorithmic nature of mathematical insight

... [A] good part of the reason for believing that consciousness is able to influence truth-judgements in a non-algorithmic way stems from consideration of Gödel's theorem. If we can see that the role of consciousness is non-algorithmic when forming mathematical judgements, where calculation and rigorous proof constitute such an important factor, then surely we may be persuaded that such a non-algorithmic ingredient could be crucial also for the role of consciousness in more general (non-mathematical) circumstances. (Penrose 1990: 416)

... Gödel's theorem and its relation to computability ... [has] shown that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth -- or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth -- there will always be mathematical propositions, such as the explicit Gödel proposition P(K) of the system ..., that his algorithm cannot provide an answer for. If the workings of the mathematician's mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition P(K) constructed from his personal algorithm. Nevertheless, we can (in principle) see that P(K) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all! (Penrose 1990: 416-417)

(....) The message should be clear. Mathematical truth is not something that we ascertain merely by use of an algorithm. I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must 'see' the truth of a mathematical argument to be convinced of its validity. This 'seeing' is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel's theorem we not only 'see' it, but by so doing we reveal the very non-algorithmic nature of the 'seeing' process itself. (Penrose 1990: 418)

-- Penrose, Roger. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press; 1990; pp. 414-418.
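Penrose's appeal to the halting problem can be compressed into a few lines. The sketch below is the standard diagonal argument rendered as Python (halts() is a placeholder for the impossible decider, not a real API): if a total halting decider existed, the program diagonal would halt on its own source exactly when it does not.

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical universal halting decider -- no such algorithm exists."""
    raise NotImplementedError("impossible, by Turing's theorem")

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever the decider predicts about self-application."""
    if halts(program_source, program_source):
        while True:        # predicted to halt? loop forever instead
            pass
    # predicted to loop? halt immediately

# Running diagonal on its own source makes halts() wrong either way:
# diagonal(src) halts if and only if halts(src, src) says it does not.
```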
Linde wrote:Does consciousness matter?

We cannot rule out the possibility that carefully avoiding the concept of consciousness in quantum cosmology may lead to an artificial narrowing of our outlook. Let us remember an example from the history of science that may be rather instructive in this respect. Prior to the invention of the general theory of relativity, space, time, and matter seemed to be three fundamentally different entities. Space was thought to be a kind of three-dimensional coordinate grid which, when supplemented by clocks, could be used to describe the motion of matter. Spacetime possessed no intrinsic degrees of freedom; it played a secondary role as a tool for the description of the truly substantial material world. The general theory of relativity brought with it a decisive change in this point of view. Spacetime and matter were found to be interdependent, and there was no longer any question which one of the two is more fundamental. Spacetime was also found to have its own inherent degrees of freedom…. This is completely opposite to the previous idea that spacetime is only a tool for the description of matter.

The standard assumption is that consciousness, just like spacetime before the invention of general relativity, plays a secondary, subservient role, being just a function of matter and a tool for the description of the truly existing material world. But let us remember that our knowledge of the world begins not with matter but with perceptions. I know for sure that my pain exists, my “green” exists, and my “sweet” exists. I do not need any proof of their existence, because these events are a part of me; everything else is a theory. Later we find out that our perceptions obey some laws, which can be most conveniently formulated if we assume that there is some underlying reality beyond our perception. This model of a material world obeying laws of physics is so successful that soon we forget about our starting point and say that matter is the only reality, and perceptions are nothing but a useful tool for the description of matter. This assumption is almost as natural (and maybe as false) as our previous assumption that space is only a mathematical tool for the description of matter. We are substituting reality of our feelings by the successful working theory of an independently existing material world. And the theory is so successful that we almost never think about its possible limitations.

-- Linde, Andrei. Inflation, quantum cosmology, and the anthropic principle. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul C. W. Davies, and Charles L. Harper, Jr., eds.). Cambridge: Cambridge University Press; 2004; pp. 450-451.
Whitehead wrote:The answer, therefore, which the seventeenth century gave to the ancient question ... "What is the world made of?" was that the world is a succession of instantaneous configurations of matter -- or material, if you wish to include stuff more subtle than ordinary matter.... Thus the configurations determined their own changes, so that the circle of scientific thought was completely closed. This is the famous mechanistic theory of nature, which has reigned supreme ever since the seventeenth century. It is the orthodox creed of physical science.... There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what I will call the 'Fallacy of Misplaced Concreteness.' This fallacy is the occasion of great confusion in philosophy. (Whitehead 1967: 50-51)

(....) This conception of the universe is surely framed in terms of high abstractions, and the paradox only arises because we have mistaken our abstractions for concrete realities.... The seventeenth century had finally produced a scheme of scientific thought framed by mathematics, for the use of mathematics. The great characteristic of the mathematical mind is its capacity for dealing with abstractions; and for eliciting from them clear-cut demonstrative trains of reasoning, entirely satisfactory so long as it is those abstractions which you want to think about. The enormous success of the scientific abstractions, yielding on the one hand matter with its simple location in space and time, on the other hand mind, perceiving, suffering, reasoning, but not interfering, has foisted onto philosophy the task of accepting them as the most concrete rendering of fact. (Whitehead 1967: 54-55)

Thereby, modern philosophy has been ruined. It has oscillated in a complex manner between three extremes. These are the dualists, who accept matter and mind as on an equal basis, and the two varieties of monists, those who put mind inside matter, and those who put matter inside mind. But this juggling with abstractions can never overcome the inherent confusion introduced by the ascription of misplaced concreteness to the scientific scheme of the seventeenth century. (Whitehead 1967: 55)

-- Whitehead, Alfred North. Science and the Modern World. The Free Press; 1925; c1967; pp. 50-55.
Harrison wrote:Kurt Gödel in 1931 showed that mathematical systems are not fully self-contained. In a self-consistent logical system (free of internal contradictions), statements can be formulated whose truth is undecidable. When the system is enlarged with additional axioms, the previous statements of uncertain truth can be proved to be true. But the enlarged system contains new undecidable statements that can only be proved to be true by making the system still larger. One conclusion is that the mathematician is inseparable from mathematics, just as the cosmologist is inseparable from cosmology.

-- Harrison, Edward. Cosmology: The Science of the Universe. Second ed. Cambridge: Cambridge University Press; 2000; p. 165.
Harrison wrote:Life, viewed objectively, seems sufficiently explained in terms of organic structures and their functions. Viewed subjectively, however, its inner world of experience seems inadequately explained by its own concepts of the physical world. No instrument in the laboratory can detect the existence of consciousness and yet each of us knows that consciousness exists.

-- Harrison, Edward. Cosmology: The Science of the Universe. Second ed. Cambridge: Cambridge University Press; 2000; p. 543.
Creationists think there are "provable absolutes" in the Bible; mechanistic materialists espousing the false so-called science of scientism claim the following:
QED wrote:Thank goodness that there are provable absolutes in Mathematics.
Which is proven false by the following:
Misc. Quotes wrote:In 1931, the Czech-born mathematician Kurt Gödel demonstrated that within any given branch of mathematics, there would always be some propositions that couldn't be proven either true or false using the rules and axioms ... of that mathematical branch itself. You might be able to prove every conceivable statement about numbers within a system by going outside the system in order to come up with new rules and axioms, but by doing so you'll only create a larger system with its own unprovable statements. The implication is that all logical systems of any complexity are, by definition, incomplete; each of them contains, at any given time, more true statements than it can possibly prove according to its own defining set of rules.

Gödel's Theorem has been used to argue that a computer can never be as smart as a human being because the extent of its knowledge is limited by a fixed set of axioms, whereas people can discover unexpected truths ... It plays a part in modern linguistic theories, which emphasize the power of language to come up with new ways to express ideas. And it has been taken to imply that you'll never entirely understand yourself, since your mind, like any other closed system, can only be sure of what it knows about itself by relying on what it knows about itself.

-- Jones and Wilson, An Incomplete Education

Gödel showed that within a rigidly logical system such as Russell and Whitehead had developed for arithmetic, propositions can be formulated that are undecidable or undemonstrable within the axioms of the system. That is, within the system, there exist certain clear-cut statements that can neither be proved nor disproved. Hence one cannot, using the usual methods, be certain that the axioms of arithmetic will not lead to contradictions ... It appears to foredoom hope of mathematical certitude through use of the obvious methods. Perhaps doomed also, as a result, is the ideal of science - to devise a set of axioms from which all phenomena of the external world can be deduced.

-- Boyer, History of Mathematics

He proved it impossible to establish the internal logical consistency of a very large class of deductive systems - elementary arithmetic, for example - unless one adopts principles of reasoning so complex that their internal consistency is as open to doubt as that of the systems themselves ... A second main conclusion is ... Gödel showed that Principia, or any other system within which arithmetic can be developed, is essentially incomplete. In other words, given any consistent set of arithmetical axioms, there are true mathematical statements that cannot be derived from the set... Even if the axioms of arithmetic are augmented by an indefinite number of other true ones, there will always be further mathematical truths that are not formally derivable from the augmented set.

-- Nagel and Newman, Gödel's Proof

The proof of Gödel's Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows:

1. Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.

2. Gödel asks for the program and the circuit design of the UTM. The program may be complicated, but it can only be finitely long. Call the program P(UTM) for Program of the Universal Truth Machine.

3. Smiling a little, Gödel writes out the following sentence: "The machine constructed on the basis of the program P(UTM) will never say that this sentence is true." Call this sentence G for Gödel. Note that G is equivalent to: "UTM will never say G is true."

4. Now Gödel laughs his high laugh and asks UTM whether G is true or not.

5. If UTM says G is true, then "UTM will never say G is true" is false. If "UTM will never say G is true" is false, then G is false (since G = "UTM will never say G is true"). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.

6. We have established that UTM will never say G is true. So "UTM will never say G is true" is in fact a true statement. So G is true (since G = "UTM will never say G is true").

7. "I know a truth that UTM can never utter," Gödel says. "I know that G is true. UTM is not truly universal."
Think about it - it grows on you ...

With his great mathematical and logical genius, Gödel was able to find a way (for any given P(UTM)) actually to write down a complicated polynomial equation that has a solution if and only if G is true. So G is not at all some vague or non-mathematical sentence. G is a specific mathematical problem that we know the answer to, even though UTM does not! So UTM does not, and cannot, embody a best and final theory of mathematics ...

Although this theorem can be stated and proved in a rigorously mathematical way, what it seems to say is that rational thought can never penetrate to the final ultimate truth ... But, paradoxically, to understand Gödel's proof is to find a sort of liberation. For many logic students, the final breakthrough to full understanding of the Incompleteness Theorem is practically a conversion experience. This is partly a by-product of the potent mystique Gödel's name carries. But, more profoundly, to understand the essentially labyrinthine nature of the castle is, somehow, to be free of it.

-- Rucker, Infinity and the Mind
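Rucker's seven steps are, at bottom, the same diagonal trick that defeats any would-be universal decider, and it can be acted out in a few lines of code. Here is a minimal Python sketch; the names utm and G are hypothetical stand-ins of my own, not anything from Rucker:

def utm(program, arg):
    """A (necessarily flawed) candidate Universal Truth Machine:
    it simply evaluates the sentence handed to it, so it can never
    finish evaluating a sentence that talks about utm itself."""
    return program(arg)

def G(_):
    """The Goedel-style sentence: 'utm will never say G is true.'"""
    return not utm(G, None)

try:
    utm(G, None)
except RecursionError:
    # The machine regresses forever (here: it exhausts the stack) and
    # so never says G is true, which is exactly what G asserts.
    print("UTM never affirms G, so G is true; UTM is not universal.")

A real UTM would not literally run Python, of course, but any machine with a finite program P(UTM) admits the same construction: G is built out of P(UTM) itself, so no repair of the machine escapes it.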

All consistent axiomatic formulations of number theory include undecidable propositions ...

Gödel showed that provability is a weaker notion than truth, no matter what axiom system is involved ...

How can you figure out if you are sane? ... Once you begin to question your own sanity, you get trapped in an ever-tighter vortex of self-fulfilling prophecies, though the process is by no means inevitable. Everyone knows that the insane interpret the world via their own peculiarly consistent logic; how can you tell if your own logic is "peculiar" or not, given that you have only your own logic to judge itself? I don't see any answer. I am reminded of Gödel's second theorem, which implies that the only versions of formal number theory which assert their own consistency are inconsistent.

The other metaphorical analogue to Gödel's Theorem which I find provocative suggests that ultimately, we cannot understand our own mind/brains ... Just as we cannot see our faces with our own eyes, is it not inconceivable to expect that we cannot mirror our complete mental structures in the symbols which carry them out? All the limitative theorems of mathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally.

-- Hofstadter, Gödel, Escher, Bach
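Hofstadter's aside about the second theorem has a crisp formal statement (the standard one, not his wording): for any consistent, effectively axiomatized theory T containing enough arithmetic,

$T \nvdash \mathrm{Con}(T)$

that is, T cannot prove the sentence asserting its own consistency. Equivalently, if $T \vdash \mathrm{Con}(T)$, then T is inconsistent; hence the quip that the only formal number theories which assert their own consistency are the inconsistent ones.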
It seems, then, that QED's assertion "that there are provable absolutes in Mathematics" rests on unreasoned "blind faith," for the assertion is undercut by the very mathematics he claims supplies these "absolutes"!

Logic and reason alone show that there are no provable absolutes in mathematics: increasing certainties, approximations, and probabilities, but not provable absolutes. The belief that mathematics provides us with provable absolutes is an irrational belief unsupported by mathematics, logic, or reason.
