Self-Organizing Systems (SOS) FAQ
Frequently Asked Questions Version 3 September 2008
For USENET Newsgroup comp.theory.self-org-sys
A Russian translation of this FAQ can be found here: http://ru.aiwiki.org/page/SOS_FAQ
and a Belarusian translation can be found here: http://webhostingrating.com/libs/sosfaq-be
(* new or recently updated questions)
Index
- Introduction
- 1.1 The Science of Self-Organizing Systems
- 1.2 Definition of Self-Organization
- 1.3 Definition of Complexity Theory
- Systems
- 2.1 What is a system ?
- 2.2 What is a system property ?
- 2.3 What is emergence ?
- 2.4 What is organization ?
- 2.5 What is state or phase space ?
- 2.6 What is self-organization ?
- 2.7 Can things self-organize ?
- 2.8 What is an attractor ?
- 2.9 What is a pre-image ?
- 2.10 How do attractors and self-organization relate ?
- 2.11 What is the mechanism of self-organization ?
- 2.12 How do self-ordering and self-direction relate ? *
- Edge of Chaos
- 3.1 What is criticality ?
- 3.2 What is Self-Organized Criticality (SOC) ?
- 3.3 What is the Edge of Chaos (EOC) ?
- 3.4 What is a phase change ?
- 3.5 How does percolation relate to SOC ?
- 3.6 What is a power law ?
- Selection
- 4.1 Isn't this just the same as selection ?
- 4.2 How does natural selection fit in ?
- 4.3 What is a mutant neighbour ?
- 4.4 What is an adaptive walk ?
- 4.5 What is a fitness landscape ?
- Interconnections
- 5.1 What are interactions ?
- 5.2 How many parts are necessary for self-organization ?
- 5.3 What is feedback ?
- 5.4 What interconnections are necessary ?
- 5.5 What is a Boolean Network or NK model ?
- 5.6 What are canalizing functions and forcing structures ?
- 5.7 How does connectivity affect landscape shape ?
- 5.8 What is an NKC Network ?
- 5.9 What is an NKCS Network ?
- 5.10 What is an autocatalytic set ?
- Structure
- 6.1 What are levels of organization ?
- 6.2 How is energy related to these concepts ?
- 6.3 How does it relate to chaos ?
- 6.4 What are dissipative systems ?
- 6.5 What is bifurcation ?
- 6.6 How is cybernetics involved ?
- 6.7 What is synergy ?
- 6.8 What is autopoiesis ?
- 6.9 What is structural coupling ?
- 6.10 What is homeostasis ?
- 6.11 What are extropy and homeokinetics ?
- 6.12 What is stigmergy ?
- 6.13 What is a swarm ?
- Research
- 7.1 How can self-organization be studied ?
- 7.2 What results are there so far ? *
- 7.3 How applicable is self-organization ?
- Resources
- 8.1 Is any software available to study self-organization ?
- 8.2 Where can I find online information ?
- 8.3 What books can I read on this subject ?
- Miscellaneous
1. Introduction
1.1 The Science of Self-Organizing Systems
The scientific study of self-organizing systems is relatively new, although questions about how organization arises have of course been raised since ancient times. The forms we identify around us are only a small sub-set of those theoretically possible. So why don't we see more variety ? To answer such a question is the reason why we study self-organization.
Many natural systems show organization (e.g. galaxies, planets, chemical compounds, cells, organisms and societies). Traditional scientific fields attempt to explain these features by referencing the micro properties or laws applicable to their component parts, for example gravitation or chemical bonds. Yet we can also approach the subject in a very different way, looking instead for system properties applicable to all such collections of parts, regardless of size or nature. It is here that modern computers prove essential, allowing us to investigate the dynamic changes that occur over vast numbers of time steps and with a large number of initial options.
Studying nature requires timescales appropriate for the natural system, and this restricts our studies to identifiable qualities that are easily reproduced, precluding investigations involving the full range of possibilities that may be encountered. However, mathematics deals easily with generalised and abstract systems and produces theorems applicable to all possible members of a class of systems. By creating mathematical models, and running computer simulations, we are able to quickly explore large numbers of possible starting positions and to analyse the common features that result. Even small systems have almost infinite initial options, so even with the fastest computer currently available, we usually can only sample the possibility space. Yet this is often enough for us to discover interesting properties that can then be tested against real systems, thus generating new theories applicable to complex systems and their spontaneous organization.
1.2 Definition of Self-Organization
The essence of self-organization is that system structure often appears without explicit pressure or involvement from outside the system. In other words, the constraints on form (i.e. organization) of interest to us are internal to the system, resulting from the interactions among the components and usually independent of the physical nature of those components. The organization can evolve in either time or space, maintain a stable form or show transient phenomena. General resource flows within self-organized systems are expected (dissipation), although not critical to the concept itself.
The field of self-organization seeks general rules about the growth and evolution of systemic structure, the forms it might take, and finally methods that predict the future organization that will result from changes made to the underlying components. The results are expected to be applicable to all other systems exhibiting similar network characteristics.
1.3 Definition of Complexity Theory
The main current scientific theory related to self-organization is Complexity Theory, which states:
Critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties.
The elements of this definition relate to the following:
- Critically Interacting - System is information rich, neither static nor chaotic
- Components - Modularity and autonomy of part behaviour implied
- Self-Organize - Attractor structure is generated by local contextual interactions
- Potentially Evolving - Environmental variation selects and mutates attractors
- Hierarchy - Multiple levels of structure and responses appear (hyperstructure)
- Emergent System Properties - New features are evident which require a new vocabulary
We explore and explain the terms comprising this definition throughout this FAQ. The form given here is the slightly rephrased result of a discussion on the SOS newsgroup, where the editor of this FAQ offered an initial definition that was then refined. The elements included are found in most general treatments of self-organization, although the emphasis varies between approaches to the subject.
2. Systems
2.1 What is a system ?
A system is a group of interacting parts functioning as a whole and distinguishable from its surroundings by recognizable boundaries. There are many varieties of systems: at one extreme the interactions between the parts may be fixed (e.g. an engine), at the other they may be unconstrained (e.g. a gas). The systems of most interest in our context are those in the middle, with a combination of both changing and fixed interactions (e.g. a cell). The system function depends upon the nature and arrangement of the parts and usually changes if parts are added, removed or rearranged. The system has properties that are emergent if they are not intrinsically found within any of the parts, and exist only at a higher level of description.
2.2 What is a system property ?
When a series of parts are connected into various configurations, the resultant system no longer solely exhibits the collective properties of the parts themselves; any additional behaviour attributed to the system is an example of an emergent system property. A configuration can be physical, logical or statistical, and all can show unexpected features that cannot be reduced to an additive property of the individual parts. Crucial to such properties is the fact that we cannot even describe them using the language applicable to the parts; a new vocabulary is needed, new terms must be invented, e.g. 'laser' to denote the functional features of the entity (a coherent light producer).
2.3 What is emergence ?
The appearance of a property or feature not previously observed as a functional characteristic of the system. Generally, higher level properties are regarded as emergent. An automobile is an emergent property of its interconnected parts; that property disappears if the parts are disassembled and just placed in a heap. There are three aspects involved here. First is the idea of 'supervenience': the emergent properties will no longer exist if the lower level is removed (i.e. no 'mystically' disjoint properties are involved). Secondly, the new properties are not aggregates, i.e. they are not just the predictable results of summing part properties (for example when the mass of a whole is just the mass of all the parts added together). Thirdly there should be causality - thus emergent properties are not epiphenomenal (either illusions or descriptive simplifications only). This means that the higher level properties should have causal effects on the lower level ones - called 'downward causation', e.g. an amoeba can move, causing all its constituent molecules to change their environmental positions (none of which are themselves capable of such autonomous trajectories). This implies also that the emergent properties 'canalize' (restrict) the freedom of the parts (by changing the 'fitness landscape', i.e. by imposing boundary conditions or constraints).
2.4 What is organization ?
The arrangement of selected parts so as to promote a specific function. This restricts the behaviour of the system in such a way as to confine it to a smaller volume of its state space. The recognition of self-organizing systems can be problematical. New approaches are often necessary to find order in what was previously thought to be noise, e.g. in the recognition that a part of a system looks like the whole (self-similarity) or in the use of phase space diagrams.
2.5 What is state or phase space ?
This is the total number of behavioural combinations available to the system. When tossing a single coin, this would be just two states (either heads or tails). The number of possible states grows rapidly with complexity. If we take 100 coins, then the combinations can be arranged in over 1,000,000,000,000,000,000,000,000,000,000 different ways. We would view each coin as a separate parameter or dimension of the system, so one arrangement would be equivalent to specifying 100 binary digits (each one indicating a 1 for heads or 0 for tails for a specific coin). Generalizing, any system has one dimension of state space for each variable that can change. Mutation will change one or more variables and move the system a small distance in state space. State space is frequently called phase space, the two terms are interchangeable.
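A minimal sketch in Python of this counting (the coin count here is arbitrary; 100 coins behave the same way, just with 2 ** 100 states):
```python
# Minimal sketch: state space of N binary variables (coins) and a one-bit mutation.
from itertools import product
import random

N = 4                                  # 4 coins -> 2**4 = 16 states; 100 coins -> 2**100 (~1.27e30)
state_space = list(product([0, 1], repeat=N))   # every combination of heads (1) / tails (0)
print(len(state_space), 2 ** N)        # both print 16

state = random.choice(state_space)     # one point in state space
i = random.randrange(N)                # mutate a single variable (dimension)
mutant = state[:i] + (1 - state[i],) + state[i + 1:]
hamming = sum(a != b for a, b in zip(state, mutant))
print(state, "->", mutant, "Hamming distance:", hamming)   # a mutation moves the system a small distance
```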
2.6 What is self-organization ?
a) The evolution of a system into an organized form in the absence of external pressures.
b) A move from a large region of state space to a persistent smaller one, under the control of the system itself. This smaller region of state space is called an attractor.
c) The introduction of correlations (pattern) over time or space for previously independent variables operating under local rules.
Typical features include (in rough order of generality):
- Absence of external control (autonomy)
- Dynamic operation (time evolution)
- Fluctuations (noise/searches through options)
- Symmetry breaking (loss of freedom/heterogeneity)
- Global order (emergence from local interactions)
- Dissipation (energy usage/far-from-equilibrium)
- Instability (self-reinforcing choices/nonlinearity)
- Multiple equilibria (many possible attractors)
- Criticality (threshold effects/phase changes)
- Redundancy (insensitivity to damage)
- Self-maintenance (repair/reproduction metabolisms)
- Adaptation (functionality/tracking of external variations)
- Complexity (multiple concurrent values or objectives)
- Hierarchies (multiple nested self-organized levels)
2.7 Can things self-organize ?
Yes, any system that takes a form that is not imposed from outside (by walls, machines or forces) can be said to self-organize. The term is usually employed however in a more restricted sense, excluding physical laws (reductionist explanations) and suggesting that the properties that emerge are not explicable from a purely reductionist viewpoint. Examples include magnetism, crystallization, lasers, Bénard cells, Belousov-Zhabotinsky and Brusselator reactions, cellular autocatalysis, organism structures, bird & fish flocking, the immune system, the brain, ecosystems, economies etc. An excellent overview of this question can be found in Francis Heylighen's paper 'The Science of Self-Organization and Adaptivity' http://pespmc1.vub.ac.be/Papers/EOLSS-Self-Organiz.pdf
2.8 What is an attractor ?
A preferred position for the system, such that if the system is started from another state it will evolve until it arrives at the attractor, and will then stay there in the absence of other factors. An attractor can be a point (e.g. the centre of a bowl containing a ball), a regular path (e.g. a planetary orbit), a complex series of states (e.g. the metabolism of a cell) or an infinite sequence (called a strange attractor). All specify a restricted volume of state space (a compression). The larger area of state space that leads to an attractor is called its basin of attraction and comprises all the pre-images of the attractor state. The ratio of the volume of the basin to the volume of the attractor can be used as a measure of the degree of self-organisation present. This Self-Organization Factor (SOF) will vary from the total size of state space (for totally ordered systems - maximum compression) to 1 (for ergodic - zero compression)
2.9 What is a pre-image ?
If a system is iterated (stepped in time) and moves from state x to state y, then state x is a pre-image of state y. In other words it is on the trajectory that leads into state y. A pre-image that itself has no pre-image is called a Garden of Eden state, and is the starting point for a trajectory. It is usual to exclude states on the attractor itself from the pre-image list, to avoid circularity, since these are all pre-images of each other.
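Both of the last two answers can be made concrete with a tiny discrete system. The next-state map below is invented purely for illustration; it is not taken from any particular model in this FAQ:
```python
# Tiny discrete dynamical system: states 0..7, each with one successor.
# Repeated iteration must eventually cycle, and that cycle is an attractor.
next_state = {0: 1, 1: 2, 2: 1, 3: 2, 4: 5, 5: 4, 6: 4, 7: 6}

def attractor_of(s):
    """Iterate from s until a state repeats; return the cycle (the attractor)."""
    seen = []
    while s not in seen:
        seen.append(s)
        s = next_state[s]
    return tuple(sorted(seen[seen.index(s):]))

basins = {}
for s in next_state:                       # basin here includes the attractor states themselves,
    basins.setdefault(attractor_of(s), []).append(s)   # matching the SOF bounds given above

has_preimage = set(next_state.values())
garden_of_eden = [s for s in next_state if s not in has_preimage]

for att, basin in basins.items():
    print("attractor", att, "basin", basin, "SOF =", len(basin) / len(att))
print("Garden of Eden states (no pre-image):", garden_of_eden)
```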
2.10 How do attractors and self-organization relate ?
Any system that moves to a persistent structure can be said to be drawn to an attractor. A complex system can have many attractors and these can alter with changes to the system interconnections (mutations) or parameters. Studying self-organization is equivalent to investigating the attractors of the system, their form and dynamics. The attractors in complex systems vary in their persistence, some have long durations so can appear as fixed 'objects', some are of very short duration (transient attractors), many are intermediate (e.g. our concepts).
2.11 What is the mechanism of self-organization ?
Random (or locally directed) changes can instigate self-organization, by allowing the exploration of new state space positions. These positions exist in the basins of attraction of the system and are inherently unstable, putting the system under stress of some sort, and causing it to move along a trajectory to a new attractor, which forms the self-organized state. Noise (fluctuations) can allow metastable systems (i.e. those possessing many attractors - alternative stable positions) to escape one basin and to enter another, thus over time the system can approach an optimum organization or may swap between the various attractors, depending upon the size and nature of the perturbations.
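A hedged illustration of that mechanism: an overdamped particle in a double-well potential has two attractors (the two wells), and added noise occasionally kicks it across the basin boundary. The potential, noise level and thresholds below are arbitrary choices for the sketch:
```python
# Sketch: noise lets a metastable system hop between two attractors.
# Overdamped particle in the double-well potential V(x) = x**4/4 - x**2/2,
# whose minima (attractors) sit at x = -1 and x = +1.
import random, math

def force(x):
    return -(x ** 3 - x)          # -dV/dx

x, dt, noise = -1.0, 0.01, 0.6    # start in the left well; noise strength is illustrative
hops, side = 0, -1
random.seed(1)
for step in range(200_000):
    x += force(x) * dt + noise * math.sqrt(dt) * random.gauss(0, 1)
    if x > 0.8 and side < 0:      # crossed into the right basin
        side, hops = 1, hops + 1
    elif x < -0.8 and side > 0:   # crossed back into the left basin
        side, hops = -1, hops + 1
print("basin hops observed:", hops)   # zero noise would give zero hops
```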
2.12 How do self-ordering and self-direction relate ?
Self-organization as a generic term is sometimes separated into two forms. The first Self-Ordered relates to physiochemical systems which organize following natural laws. Into this category come crystallisation and many of the dissipative chemical systems. These systems involve no internal decisions and are generally low-dimensional and predictable in behaviour, having no 'function' and these can be described physically. The second category Self-Directed (often also employing the generic term however) relates to systems that can perform internal choices (the 'epistemic' or 'cybernetic' cut), and these relate to both living and explicit man-made systems. They are steered in relation to some internal goal, value or function, often trying to optimise some fitness in conjunction with their environment and these must be described formally (abstractly or algorithmically). It is an open question as to how, during evolution, the first form developed the autonomous control evident from the second.
3. Edge of Chaos
3.1 What is criticality ?
A point at which system properties change suddenly, e.g. where a matrix goes from non-percolating (disconnected) to percolating (connected) or vice versa. This is often regarded as a phase change, thus in critically interacting systems we expect step changes in properties.
3.2 What is self-organized criticality (SOC) ?
The ability of a system to evolve in such a way as to approach a critical point and then maintain itself at that point. If we assume that a system can mutate, then that mutation may take it either towards a more static configuration or towards a more changeable one (a smaller or larger volume of state space, a new attractor). If a particular dynamic structure is optimum for the system, and the current configuration is too static, then the more changeable configuration will be more successful. If the system is currently too changeable then the more static mutation will be selected. Thus the system can adapt in both directions to converge on the optimum dynamic characteristics.
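The classic toy model of SOC is the Bak-Tang-Wiesenfeld sandpile. The sketch below (grid size, threshold and grain count are illustrative choices) adds grains one at a time, topples overloaded sites, and records avalanche sizes; with no external tuning the avalanches settle into a broad distribution with many small events and a few large ones:
```python
# Minimal Bak-Tang-Wiesenfeld sandpile: add grains, topple at threshold 4,
# record avalanche sizes. Grains falling off the edge are lost (dissipation).
import random
from collections import Counter

L = 20
grid = [[0] * L for _ in range(L)]
avalanches = []
random.seed(0)

for grain in range(20_000):
    i, j = random.randrange(L), random.randrange(L)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        while grid[x][y] >= 4:          # topple as many times as needed
            grid[x][y] -= 4
            size += 1
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < L and 0 <= ny < L:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
    if size:
        avalanches.append(size)

print(Counter(s for s in avalanches if s <= 5))   # many small avalanches...
print("largest avalanche:", max(avalanches))      # ...and a few very large ones
```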
3.3 What is the Edge of Chaos (EOC) ?
This is the name given to the critical point of the system, where a small change can either push the system into chaotic behaviour or lock the system into a fixed behaviour. It is regarded as a phase change. It is at this point that all the really interesting behaviour occurs in a 'complex' system, and it is where systems tend to gravitate given the chance to do so. Hence most ALife systems are assumed to operate within this regime.
At this boundary a system has a correlation length (connection between distant parts) that just spans the entire system, with a power law distribution of shorter lengths. Transient perturbations (disturbances) can last for very long times (infinity in the limit) and/or cover the entire system, yet more frequently effects will be local or short lived - the system is dynamically unstable to some perturbations, yet stable to others.
3.4 What is a phase change ?
A point at which the appearance of the system changes suddenly. In physical systems the change from solid to liquid is a good example. Non-physical systems can also exhibit phase changes, although this use of the term is more controversial. Generally we regard our system as existing in one of three phases. If the system exhibits a fixed behaviour then we regard it as being in the solid realm, if the behaviour is chaotic then we assign it to the gas realm. For systems on the Edge of Chaos the properties match those seen in liquid systems, a potential for either solid or gaseous behaviour, or both.
3.5 How does percolation relate to SOC ?
Percolation is an arrangement of parts (usually visualised as a matrix) such that a property can arise that connects the opposite sides of the structure. This can be regarded as making a path in a disconnected matrix or making an obstruction in a fully connected one. The boundary at which the system goes from disconnected to connected is a sudden one, a step or phase change in the properties of the system. This is the same boundary that we arrive at in SOC and in physics is sometimes called universality due to its general nature.
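A small numerical sketch, assuming site percolation on a square grid (grid size, trial counts and probabilities are illustrative): as the occupation probability is swept past a critical value (roughly 0.59 for this lattice), the chance of a top-to-bottom connected path jumps sharply:
```python
# Site percolation on an L x L grid: occupy each site with probability p,
# then flood-fill from the top row to see whether the bottom row is reached.
import random

def percolates(L, p, rng):
    occupied = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    frontier = [(0, j) for j in range(L) if occupied[0][j]]
    seen = set(frontier)
    while frontier:
        x, y = frontier.pop()
        if x == L - 1:                       # reached the bottom row: a spanning path exists
            return True
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < L and 0 <= ny < L and occupied[nx][ny] and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return False

rng = random.Random(2)
for p in (0.45, 0.55, 0.59, 0.65, 0.75):
    hits = sum(percolates(40, p, rng) for _ in range(200))
    print(f"p = {p:.2f}: percolates in {hits}/200 trials")   # sharp rise around the threshold
```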
3.6 What is a power law ?
If we plot the logarithm of the number of times a certain property value occurs against the logarithm of the value itself, and the result is a straight line, then we have a power law. Essentially what we are saying is that there is a distribution of results such that the larger the effect the less frequently it is seen.
The mathematical form is: N(s) = s ** (-t)
where N(s) is the number of events with size s and t (tau) is the exponent (the negative exponent indicates that the numbers fall with increasing s).
Taking logs we have: log N(s) = -t log s
A good example is earthquake activity where many small quakes are seen but few large ones, the Richter scale is based upon such a law. A system subject to power law dynamics exhibits the same structure over all scales. This self-similarity or scale independent (fractal) behaviour is typical of self-organizing systems.
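A sketch of how such a law might be checked numerically. The data here are synthetic, generated with an assumed exponent of 2, and the slope of log N(s) against log s is then recovered by a simple least-squares fit:
```python
# Generate synthetic power-law data with exponent t = 2 via inverse-transform
# sampling, bin it, and estimate the slope of log N(s) against log s.
import math, random
from collections import Counter

random.seed(3)
t = 2.0
sizes = [int((1.0 - random.random()) ** (-1.0 / (t - 1.0))) for _ in range(100_000)]

counts = Counter(sizes)
points = [(math.log(s), math.log(n)) for s, n in counts.items() if 1 <= s <= 100 and n > 5]

# least-squares slope of log N(s) vs log s; should come out near -t
n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
slope = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x, _ in points)
print("fitted slope:", round(slope, 2))   # close to -2, i.e. N(s) ~ s ** (-t)
```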
4. Selection
4.1 Isn't this just the same as selection ?
No, selection is a choice between competing options such that one arrangement is preferred over another with reference to some external criteria - this represents a choice between two stable systems in state space. In self-organization there is only one system which internally restricts the area of state space it occupies. In essence the system moves to an attractor that covers only a small area of state space, a dynamic pattern of expression that can persist even in the face of mutation and opposing selective forces. Alternative stable options are each self-organized attractors and selection may then choose between them based upon their emergent phenotypic properties.
4.2 How does natural selection fit in ?
Selection is a bias to move through state space in a particular direction, maximising some external fitness function - choosing between mutant neighbours. Self-organization drives the system to an internal attractor, we can call this an internal fitness function. The two concepts are complementary and can either mutually assist or oppose. In the context of self-organizing systems, the attractors are the only stable states the system has, selection pressure is a force on the system attempting to perturb it to a different attractor. It may take many mutations to cause a system to switch to a new attractor, since each simply moves the starting position across the basin of attraction. Only when a boundary between two basins is crossed will an attractor change occur, yet this shift could be highly significant, a metamorphosis in system properties.
4.3 What is a mutant neighbour ?
In the world of possible systems (the state space for the system) two possibilities are neighbours if a change or mutation to one parameter can change the first system into the second or vice versa. Any two options can then be connected by a chain of possible mutations converting between them (via intermediate states). Note that there can be many ways of doing this, depending on the order the mutations take place. The process of moving from one possibility to another is called an adaptive walk.
4.4 What is an adaptive walk ?
A process by which a system changes from one state to another by gradual steps. The system 'walks' across the fitness landscape, each step is assumed to lead to an improvement in the performance of the system against some criteria (adaptation).
4.5 What is a fitness landscape ?
If we rate every option in state space by its achievement against some criteria then we can plot that rating as a fitness value on another dimension, a height that gives the appearance of a landscape. The result may be a single smooth hill (a correlated landscape), many smaller peaks (a rugged landscape) or something in between.
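A sketch combining the last two answers: an adaptive walk over bit-string genotypes on a maximally rugged landscape. Assigning each genotype an independent random fitness is an illustrative assumption (roughly the fully uncorrelated extreme), not Kauffman's full NK construction:
```python
# Adaptive walk over N-bit genotypes on a random (rugged) landscape.
# Each step moves to the fittest one-mutant neighbour; the walk stops
# on a local peak, which need not be the global optimum.
import random

N = 12
random.seed(4)
fitness_cache = {}

def fitness(genotype):
    # assign each genotype a fixed random fitness the first time it is seen
    if genotype not in fitness_cache:
        fitness_cache[genotype] = random.random()
    return fitness_cache[genotype]

def neighbours(genotype):
    return [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:] for i in range(N)]

genotype = tuple(random.randint(0, 1) for _ in range(N))
steps = 0
while True:
    best = max(neighbours(genotype), key=fitness)
    if fitness(best) <= fitness(genotype):
        break                      # no fitter neighbour: a local optimum
    genotype, steps = best, steps + 1
print("walk length:", steps, "final fitness:", round(fitness(genotype), 3))
```
On a fully random landscape these walks are short and usually end on a local peak rather than the global optimum, in line with the results quoted in section 7.2.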
5. Interconnections
5.1 What are interactions ?
Influences between parts due to their interconnections. These interconnections can be of many forms (e.g. wiring, gravitational or electromagnetic fields, physical contact or logical information channels). We assume that the influence can act in such a way as to change the part state or to cause a signal to be propagated in some way to other parts. Thus the extent of the interactions determines the behavioural richness of the system.
5.2 How many parts are necessary for self-organization ?
As few as two (in magnetic or gravitational attraction) can suffice, but generally we use the term to classify more complex phenomena than point attractors. The richness of possible behaviour increases rapidly with the number of interconnections and the level of feedback. For small systems we are able to analyse the state possibilities and discover the attractor structure. Larger systems however require a more statistical approach where we sample the system by simulation to discover the emergent properties.
5.3 What is feedback ?
A connection between the output of a system and its input, in other words a causality loop - effect is fed back to cause. This feedback can be negative (tending to stabilise the system - order) or positive (leading to instability - chaos). Feedback results in nonlinearities, constraints on the system behaviour leading to unpredictability.
5.4 What interconnections are necessary ?
In general terms, for self-organization to occur, the system must be neither too sparsely connected (so most units are independent) nor too richly connected (so that every unit affects every other). Most studies of Boolean Networks suggest that having about two connections for each unit leads to optimum organisational and adaptive properties. If more connections exist then the same effect can be obtained by using canalizing functions or other constraints on the interaction dynamics.
5.5 What is a Boolean Network or NK model ?
Taking a collection (N) of logic gates (AND, OR, NOT etc.) each with K inputs and interconnecting them gives us a Boolean Network. Depending upon the number of inputs (K) to each gate we can generate a collection of possible logic functions that could be used. By allocating these to the nodes (N) at random we have a Random Boolean Network (RBN - also called a Kauffman Net or the Kauffman Model) and this can be used to investigate whether organization appears for different sets of parameters. Some possible logic functions are canalizing and it seems that this type of function is the most likely to generate self-organization. This arrangement is also referred to biologically as an NK model where N is seen as the number of genes (with 2 alleles each - the output states) and K denotes their inter-dependencies.
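A minimal Random Boolean Network sketch (the values of N, K and the seeds are arbitrary): each node receives K random inputs and a random truth table, the network is started from a random state and iterated synchronously until a state repeats, which reveals the transient length and the attractor length:
```python
# Random Boolean Network (Kauffman / NK-style): N nodes, K inputs each,
# random Boolean update rules. Iterate synchronously until a state repeats.
import random

def run_rbn(N=16, K=2, seed=0):
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]                    # K inputs per node
    rules = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]  # random truth tables
    state = tuple(rng.randint(0, 1) for _ in range(N))

    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = tuple(
            rules[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(N)
        )
        t += 1
    transient = seen[state]
    return transient, t - transient        # steps to reach the attractor, attractor length

for seed in range(5):
    transient, cycle = run_rbn(seed=seed)
    print(f"run {seed}: transient {transient} steps, attractor length {cycle}")
```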
5.6 What are canalizing functions and forcing structures ?
A function is canalizing if a single input being in a fixed state is sufficient to force the output to a fixed state, regardless of the state of any other input. For example, for an AND gate if one input is held low then the output is forced low, so this function is canalizing. An XOR gate, in contrast, is not, since the output can always be changed by varying another input. The result of connecting a series of canalizing functions can be to force chunks of the network to a fixed state (an initial fixed input can ripple through and lock up part of the network - a forcing structure). Such fixed divisions (barriers to change) can break up the network into active and passive structures and this can allow complex modular behaviours to develop. Because the structure is canalizing, a single change can switch the structure from passive to active or back again, allowing the network to perform a series of regulatory functions.
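The canalizing test described above is easy to state in code. The sketch below simply checks whether any single input, held at some value, forces the output, and applies the test to AND and XOR:
```python
# A Boolean function is canalizing if some input, held at some value,
# forces the output regardless of every other input.
from itertools import product

def is_canalizing(f, k):
    """f maps a k-tuple of 0/1 inputs to 0/1."""
    for i in range(k):                      # candidate canalizing input
        for v in (0, 1):                    # candidate canalizing value
            outputs = {f(x) for x in product((0, 1), repeat=k) if x[i] == v}
            if len(outputs) == 1:           # output is forced
                return True
    return False

AND = lambda x: x[0] & x[1]
XOR = lambda x: x[0] ^ x[1]
print("AND canalizing:", is_canalizing(AND, 2))   # True  (either input low forces 0)
print("XOR canalizing:", is_canalizing(XOR, 2))   # False (no single input forces the output)
```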
5.7 How does connectivity affect landscape shape ?
In general the higher the connectivity the more rugged the landscape becomes. Simply connected landscapes have a single peak, a change to one parameter has little effect on the others so a smooth change in fitness is found during adaptive walks. High connectivity means that variables interact and we have to settle for compromise fitnesses, many lower peaks are found and the system can become stuck at local optima or attractors, rather than being able to reach the global optimum.
5.8 What is an NKC Network ?
If we allow each node (N) to be itself a complex arrangement of interlinked parts (K) then we can regard the connections between nodes (C) as a further layer of control. This relates biologically to a genome interacting with other genomes. K is the gene interactions within the organism, C the genes outside the organism that affect it. The overall fitness is derived from the combinations of the interacting gene fitnesses.
5.9 What is an NKCS Network ?
An extension of the NKC model to add multiple species. Each species is linked to S other species. This can best be seen by visualising an ecosystem, where the nodes are species (assumed genetically identical) each consisting of a collection of genes, and the interactions between the species form the ecosystem. Thus the local connection K specifies how the genes of one species interact with themselves and the distant connections (C x S ) how the genes interact with each of the other species. This model then allows co-evolutionary development and organization to be studied.
5.10 What is an autocatalytic set ?
The entities in a collection of interacting entities often react in certain ways only, e.g. entity A may be able to affect B but not C, and D may only affect E. For a sufficiently large collection of different entities a situation may arise where a complete network of interconnections can be established - the entities become part of one coupled system. This is called an autocatalytic set, after the ability of molecules to catalyse each other's formation in the chemical equivalent of this arrangement.
6. Structure
6.1 What are levels of organization ?
The smallest parts of a system produce their own emergent properties, these are the lowest 'system' features and form the next level of structure in the system. Those system components then in turn form the building blocks for the next higher level of organization, with different emergent properties, and this process can proceed to higher levels in turn. The various levels can all exhibit their own self-organization (e.g. cell chemistry, organs, societies) or may be manufactured (e.g. piston, engine, car). One measure of complexity is that a complex system comprises multiple levels of description, the more ways of looking at a system then the more complex it is, and more extensive is the description needed to specify it (algorithmic complexity).
6.2 How is energy related to these concepts ?
Energy considerations are often regarded as an explanation for organization: it is said that minimising energy causes the organization. Yet there are often alternative arrangements that require the same energy, and to account for the choice between these requires other factors. Organization still appears in computer simulations that do not use the concept of energy, although other criteria may exist. This suggests that we still have much to learn in this area, as to the effect of resource flows of various types on organizational behaviour. The relationship between entropy and self-organization is also studied; this tries to relate organization to the 2nd Law of Thermodynamics, and recent findings here suggest that order is a necessary result of far-from-equilibrium (dissipative) systems trying to maximise stress reduction. This suggests that the more complex the organism the more efficient it is at dissipating potentials, a field of study sometimes called 'autocatakinetics' and related to what has been called 'The Law of Maximum Entropy Production'. Thus organization does not 'violate' the 2nd Law (as often claimed) but seems to be a direct result of it.
6.3 How does it relate to chaos ?
In nonlinear studies we find much structure for very simple systems, as seen in the self-similar structure of fractals and the bifurcation structure seen in the logistic map. This form of system exhibits complex behaviour from simple rules. In contrast, for self-organizing systems we have complex assemblies generating simple emergent behaviour, so in essence the two concepts are complementary. For our collective systems, we can regard the solid state as equivalent to the predictable behaviour of a formula, the gaseous state as corresponding to the statistical or chaotic realm and the liquid state as being the bifurcation or fractal realm.
6.4 What are dissipative systems ?
Systems that use energy flow to maintain their form are said to be dissipative systems, these would include atmospheric vortices, living systems and similar. The term can also be used more generally for systems that consume energy to keep going e.g. engines or stars. Such systems are generally open to their environment.
6.5 What is bifurcation ?
A phenomenon that results in a system splitting into two possible behaviours (with a small change in one parameter); further changes to the parameter then cause further splits at ever closer intervals, whose ratio approaches the Feigenbaum constant (approx. 4.6692...), until finally the system enters a chaotic phase. This sequence from stability, through increasing complexity, to chaos has much in common with the observed behaviour of complex systems, reflecting changes in attractor structure with variations to parameters. On occasion, successive iterations in a model of the system will cycle between the available behaviours.
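A sketch of this period-doubling route using the logistic map x -> r*x*(1-x) (the parameter values and sample sizes are arbitrary): for each r, iterate past the transient and count how many distinct values the orbit visits:
```python
# Logistic map x -> r*x*(1-x): count distinct long-run values as r increases.
# The count doubles repeatedly (period-doubling) before the chaotic regime.
def orbit_size(r, x=0.5, transient=1000, samples=256):
    for _ in range(transient):
        x = r * x * (1 - x)
    values = set()
    for _ in range(samples):
        x = r * x * (1 - x)
        values.add(round(x, 6))       # round so a periodic orbit collapses to its period
    return len(values)

for r in (2.8, 3.2, 3.5, 3.55, 3.7, 3.9):
    print(f"r = {r}: ~{orbit_size(r)} distinct values")   # 1, 2, 4, 8, then many (chaos)
```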
6.6 How is cybernetics involved ?
Cybernetics is the precursor of complexity thinking in the investigation of dynamic systems and set the groundwork for the study of self-maintaining systems, using feedback and control concepts. It relates generally to systems isolated or closed in organizational terms, in other words to self-contained systems. Complexity theory includes some new concepts such as self-organization plus its various specialisms, and adds more prominence to borrowed concepts like emergence, phase space and fitness landscapes, but in essence it relates systems to other systems. It includes the two way information flows between them, their mutual reactions to their environment or co-evolution. It also deals with systems that can evolve or adapt, that can become quite different systems.
6.7 What is synergy ?
Synergy studies the additional benefit accruing to collective systems. This relates to the idea that the whole is greater (or less) than the parts. It includes the study of mergers, organisational benefits of co-operation and more generally what is referred to in complexity studies as emergence. Synergy includes symbiotic effects, along with many other forms of co-operative or combinatoric fitness enhancements. Where joint effects reduce fitness (e.g. in destructive competition) the term 'dysergy' can be used. In physical systems the term Synergetics is also employed [Haken, Buckminster-Fuller].
6.8 What is autopoiesis ?
Autopoiesis is self-production - maintenance of a living organism's form with time and flows. It is a special case of homeostasis and relates to a systemic definition of life. The concept is frequently applied to cognition, viewing the mind as a self-producing system, with self-reference and self-regulation which evolves using structural coupling. This concept recognises that outside influences cannot shape the system's internal structure, but only act as triggers to cause the structure to either alter its current attractors or to disintegrate.
6.9 What is structural coupling ?
This is the idea that a complex and autopoietic system must relate to its environment, and the internal structure becomes coupled to relevant features of that environment. In complexity terms the environment selects which of the system's attractors becomes active at any time, what is also called situated or selected self-organization.
6.10 What is homeostasis ?
This is the regulation of critical variables to form an equilibrium state in the face of perturbation. It relates to cybernetics and to the EOC state in complexity, and concentrates on automatic mechanisms of self-regulation.
6.11 What are extropy and homeokinetics ?
Several other terms are loosely used with regard to self-organizing systems, many in terms of human behaviour. Extropy (also variously called 'ectropy', 'negentropy' or 'syntropy') refers to growing organizational complexity. Homeokinetics is connected with SOS and relates to viewing complex systems from an atomic point of view as collections of moving particles.
6.12 What is stigmergy ?
The use of the environment to enable agents to communicate and interact, facilitating self-organization. This can be by deliberate storage of information (e.g. the WWW) or by physical alterations to the landscape made as a result of the actions of the lifeforms operating there (e.g. pheromone trails, termite hills). The future choices made by the agents are thus constrained or stimulated dynamically by the random changes encountered.
6.13 What is a swarm ?
A collection of agents (autonomous individuals) that use stigmergic local knowledge to self-organize and co-ordinate their behaviours. This can occur even if the agents themselves have no intelligence and no explicit purpose. Swarm intelligence is also related to Ant Colony Optimization (ACO) and ALife techniques.
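A toy sketch in the spirit of the ant double-bridge experiment (all parameters invented for illustration): agents choose between a short and a long path in proportion to pheromone, deposit more on shorter paths, and pheromone slowly evaporates. The colony converges on the short path with no central controller:
```python
# Toy stigmergy: two paths, ants choose in proportion to pheromone levels,
# shorter paths get reinforced more strongly, pheromone slowly evaporates.
import random

random.seed(5)
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}      # start unbiased

for step in range(500):
    total = sum(pheromone.values())
    r = random.random() * total
    path = "short" if r < pheromone["short"] else "long"
    pheromone[path] += 1.0 / lengths[path]   # shorter path -> stronger reinforcement
    for p in pheromone:                      # evaporation keeps the system adaptive
        pheromone[p] *= 0.99

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on short path after 500 ants: {share:.2f}")  # approaches 1
```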
7. Research
7.1 How can self-organization be studied ?
Since we are seeking general properties that apply to topologically equivalent systems, any physical system or model that provides those connections can be used. Much work has been done using Cellular Automata and Boolean Networks, with Alife, Genetic Algorithms, Neural Networks and similar techniques also widely used. In general we start with a set of rules specifying how the interconnections behave, the network is then randomly initiated and iterated (stepped) continually following the ruleset. The stable patterns obtained (if any) are noted and the sequence repeated. After many trials generalisations from the results can be attempted, with some statistical probability.
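A sketch of that style of experiment using a one-dimensional cellular automaton (the rule number, width and run length are arbitrary): start from a random state, apply a fixed local rule repeatedly, and inspect the pattern that emerges:
```python
# Elementary 1-D cellular automaton: random initial state, fixed local rule,
# iterate and print the evolving pattern ('#' = 1, '.' = 0).
import random

def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

random.seed(6)
rule = 110                       # an often-studied rule; any value 0-255 can be tried
cells = [random.randint(0, 1) for _ in range(64)]
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule)
```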
7.2 What results are there so far ?
Some of these results are very tentative (due to the difficulties in analysing larger networks), and subject to change as more research is undertaken and these systems become better understood. Many of these results are expanded and justified by Stuart Kauffman in his previous lecture notes, see: 'The Nature of Autonomous Agents' (published as "Investigations"). For a more philosophical overview of the difficulties see CALResCo's Quantifying Complexity Theory.
- The attractors of a system are uniquely determined by the state transition properties of the nodes (their logic) and the actual system interconnections.
- Attractors result in the merging of historical positions. Thus irreversibility is inherent in the concept. Many scenarios can result in the same outcome, therefore a unique logical reduction that a state arose from a particular predecessor (backward causality) is impossible, even in theory. Merging of world lines in this way invalidates, in general, determination of the specific pre-image of any state.
- The ratio of the basin of attraction size to attractor size (called here the Self-Organizing Factor or SOF) varies from the size of the whole state space (totally ordered, point attractor) down to 1 (totally disordered, ergodic attractor).
- Single connectivity mutations can considerably alter the attractor structure of networks, allowing attractors to merge, split or change sequences. Basins of attraction are also altered and initial points may then flow to different attractors.
- Single state mutations can move a system from one attractor to another within the system. The resultant behaviour can change between fixed, chaotic, periodic and complex in any combination of the available attractors and the effect can be predicted if the system details are fully known.
- The mutation space of a system with 2 alleles at each node is a Boolean Hypercube of dimension N (number of neighbours). The number of adaptive peaks for random systems is 2 ** N /(N+1), exponentially high.
- The chance of reaching a random higher peak halves with each step, after 30 steps it is 1 in a Billion. The time required scales in the same way. Mean length of an adaptive walk to a nearby peak is ln N. Branching walks are common initially, but most end on local optima (dead ends). This makes finding a single 'maximum fitness' peak an NP-hard problem. Correlated landscapes are necessary for adaptive improvement.
- Correlation falls exponentially with mutant difference (Hamming distance), becoming fully uncorrelated for K=N-1 landscapes. Searches beyond the correlation length (1/e) sample random landscapes. Hence the number of recombination 'tries' needed to find a higher peak doubles with each success.
- For such systems with high connectivity, the median number of attractors is N/e (linear), the median number of states within an attractor averages 0.5 * root(2 ** N) (exponentially large). These systems are highly sensitive to disturbance, and swap amongst the attractors easily.
- For K=0, there is a smooth landscape with one peak (the global optimum). Length of an adaptive walk is N/2, directions uphill decreasing by one with each step.
- For K=1, median attractor numbers are exponential on N, state lengths increase only as root N, but again are sensitive to disturbance and easily swap between attractors.
- For K=2 we have a phase transition, median number of attractors drops to root N, average length is also root N (more recent work has identified that sampling techniques tend to miss small attractors, more generally the number increases at least linearly with N). The system is stable to disturbance and has few paths between the attractors. Most perturbations return to the same attractor (since most perturbations only affect the 'stable core' of nodes outside the attractor).
- Systems that are able to change their number of connections (by mutation) are found to move from the chaotic (K high) or static (K low) regions spontaneously to that of the phase transition and stability - the self-organizing criticality. The maximum fitness is found to peak at this point.
- Natural genetic systems with high connectivity K>2 have a higher proportion of canalizing functions than would be the case if randomly assigned. This suggests a selective bias towards functions that can support self-organization to the edge of chaos.
- To create a relatively smooth landscape requires redundancy, non-optimal systems. Maximal compression (efficiency) gives a rugged landscape, and stagnation on a local peak, preventing improvement. The above suggests that systems alter their redundancy to maximise adaptability.
- The 'No Free Lunch' Theorem states that, averaged over all possible landscapes, no search technique is better than random. This suggests, if the theory of evolution is valid, that the landscape is correlated with the search technique. In other words the organisms create their own smooth landscape - the landscape is 'designed' by the agents...
- If we measure the distance between two close points in phase space, and plot that with time, then for chaotic systems the distance will diverge, for static it will converge onto an attractor. The slope gives a measure of the system stability (+ve is chaotic) and a zero value corresponds to the edge of chaos. This goes by the name of the Lyapunov exponent (one for each dimension); a sketch of this measurement follows this list. Other similar measures are also used (e.g. Derrida plot for discrete systems).
- A network tends to contain an uneven distribution of attractors. Some are large and drain large basins of attraction, others are small with few states in their corresponding basins.
- The basins of attraction of higher fitness peaks tend to be larger than those for lower optima at the critical point. Correlated landscapes occur, containing few peaks and with those clustered together.
- As K increases, the height of the accessible peaks falls, this is the 'Complexity Catastrophe' and drives the performance towards the mean in the limit.
- Mutation pressure grows with system size. Beyond a critical point (dependent upon rate, size and selection pressure) it is no longer possible to achieve adaptive improvement. A 'Selection or Error Catastrophe' sets in and the system inevitably moves down off the fitness peak to a stable lower point, a sub-optimal shell. Limit = 2 * mutation rate * N ** 2 / MOD(selection pressure).
- For co-evolutionary networks, tuning K (local interactions) to match or exceed C (species interactions) brings the system to the optimum fitness, another SOC. This tuning helps optimise both species (symbiotic effects). Reducing the number S of interacting species (breaking dependencies - e.g. new niches) also improves overall fitness. K should be minimised but needs to increase for large S and C to obtain rapid convergence.
- In the phase transition region the system is generally divided into active areas of variable behaviour separated by fixed barriers of static components (frozen nodes - the stable core). Pathways or tendrils between the dynamic regions allow controlled propagation of information across the system. The number of active islands is low (less than root N) and comprises about a fifth of the nodes (increasing with K).
- At the critical point, any size of perturbation can potentially cause any size of effect - it is impossible to predict the size of the effect from the size of the perturbation (for large, analytically intractable systems). A power law distribution is found over time, but the timing and size of any particular perturbation is indeterminate.
- Plotting the input entropy of a system gives a high value for chaotic systems, a low value for ordered systems and an intermediate value for complex systems. Variance of the input entropy is high for complex systems but low for both ordered and chaotic ones. This can be used to identify EOC behaviour.
- For a network of N nodes and E possible edges, as N grows the number of edge combinations will increase faster than the nodes. Given some probability of meaningful interactions, there will inevitably be a critical size at which the system will go from subcritical to supracritical behaviour, a SOC or autocatalysis. The relevant size is N = Root ( 1 / ( 2 * probability) ).
- Since a metabolism is such an autocatalytic set, this implies that life will emerge as a phase transition in any sufficiently complex reaction system - regardless of chemical or other form.
- Given the protein diversity in the biosphere, this proves to be widely supracritical, yet stability of cells requires partitioning to a subcritical but autocatalytic state. This balance suggests a limit to cell biochemical diversity and a self-organizing maintenance below that limit. This is related to the Error Catastrophe: too high a rate of innovation is not controllable by selection and leads to information loss, chaos and breakdown of the system.
- Given a supracritical set of existing products M, and potential products M' (M' > M), equilibrium constant constraints predict that the probability of the difference M' - M set should be non-zero. Therefore there will be a gradient towards more diversity, in other words 'creativity', in any such system.
- Evaluating the above for the diversity we find on this planet shows that we have so far explored only an insignificant fraction of state space during the time the universe has existed. Thus the Universe is not yet in an equilibrium state and the standard assumptions of equilibrium statistical mechanics do not apply (e.g. the ergodic hypothesis).
- Two or more interacting autocatalytic sets that increase reproduction rates above that of either in isolation will grow preferentially. This is a form of trade or mutual assistance, an ecosystem in miniature.
- Such interacting sets can generate components that are not in either set, giving a higher level of joint operation, emergent novelty.
- If such innovation involves a cost, then the rate of innovation will be constrained by payback period. This is seen in economic analogues, where risk/profit forms a balance, as well as in ecological systems. Interactions must be net positive sum to be sustainable.
- In spatially extended networks a wide variety of different patterns are found, these occur over a large fraction of parameter or state space. Patterns form both by continuous gradient (diffusion over space) and discrete interaction (cell-cell induction signalling) processes.
- Patterns increase exponentially in frequency with the number of units in the network, inductive processes producing more stable patterns, whilst diffusion processes produce more unstable ones, suggesting the former is more important in morphogenesis.
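As a follow-up to the Lyapunov-exponent item above, a sketch of that measurement for the logistic map (parameter values chosen for illustration): the exponent is estimated by averaging log |f'(x)| along a long orbit, which tracks how fast nearby points separate:
```python
# Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
# by averaging log|f'(x)| = log|r*(1-2x)| along a long orbit.
import math

def lyapunov(r, x=0.4, transient=1000, steps=100_000):
    for _ in range(transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))   # guard against log(0)
    return total / steps

for r in (2.9, 3.5, 3.56995, 4.0):
    print(f"r = {r}: Lyapunov exponent ~ {lyapunov(r):+.3f}")
# negative -> ordered, near zero -> edge of chaos, positive -> chaotic
```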
7.3 How applicable is self-organization ?
The above results seem to indicate that such system properties can be ascribed to all manner of natural systems, from physical, chemical, biological, psychological to cultural. Much work is yet needed to determine to what extent these system properties relate to the actual features of real systems and how they vary with changes to the constraints. Power laws are common in natural systems and an underlying SOC cannot be ruled out as a possible cause of this situation.
8. Resources
8.1 Is any software available to study self-organization ?
Few software packages relate to self-organization as such, but many do show self-organized behaviour in the context of more specialised topics. These include cellular automata (Game of Life), neural networks (recurrent or Hopfield networks, and self-organizing maps), genetic algorithms (evolution), artificial life (agent behaviour), fractals (mathematical art) and physics (spin glasses). These can be found via the relevant newsgroup FAQs.
Some self-organization programs are available from these sites:
CALResCo - http://www.calresco.org/sos/calressw.htm - Many Programs demonstrating Order from Chaos, Boolean Networks, Artificial Life, Self-Organized Criticality and Multi-Agent Simulations are currently available (QBASIC & Executables).
Santa Fe - http://www.santafe.edu/~wuensch/ddlab.html - Discrete Dynamics Lab, attractor basins of discrete networks (Unix/XWindows, DOS & MAC).
Jurgen Schmitz - http://surf.de.uu.net/zooland/download/packages/boids/boids10.zip - Boids for Windows, self-organising birds (Windows).
Rudy Rucker - http://www.mathcs.sjsu.edu/faculty/rucker/cellab.htm - Cellab, Cellular Automata (some self-organizing) & Langton's self-reproducing CA (Windows).
8.2 Where can I find online information ?
Specialist Resources
- http://www.calresco.org/ - CALResCo, home of this FAQ, introductions, essays & resources
- http://165.227.26.1/et/self.html - Self-organizing concepts & tools
- http://algodones.unm.edu/~bmilne/bio576/instr/html/SOS/sos.html - introduction
- http://bactra.org/notebooks/self-organization.html - SOS notebook
- http://dsp.jpl.nasa.gov/members/payman/swarm/ - swarm intelligence resources
- http://foto.hut.fi/~markus/selforg.html - extensive links to SOS online papers/sites
- http://home.earthlink.net/~mterp/syl-selforg.html - self-organization course
- http://lorenz.mur.csu.edu.au/complex/library/0Self-organisation.html - Virtual Library for SOS
- http://www.cogs.susx.ac.uk/users/ezequiel/alife-page/complexity.html - SOS bibliography
- http://www.cpm.mmu.ac.uk/~bruce/combib/selforganizing.html - self-org measures
- http://www.ezone.com/sos - SOS on the Web
- http://www.santafe.edu/sfi/publications/Bulletins/bulletin-spr95/12debate.html
- http://www.stigmergicsystems.com/ - stigmergic systems
- http://xxx.lanl.gov/archive/adap-org/ - Archive of Adaptation/SOS papers
Specialist Applications
- http://armyant.ee.vt.edu/unsalWWW/cemsthesis.html - self-organisation in mobile robots
- http://www.red3d.com/cwr/boids/ - Craig Reynolds' Boids, artificial birds
- http://ishi.lanl.gov/symintel.html - self-organizing knowledge
- http://pil.phys.uniroma1.it/eec1.html - Fractal Structures and Self-Organization
- http://websom.hut.fi/websom/ - WEBSOM Self-Organizing Maps
- http://www.acm.org/sigois/auto/Main.html - Self-Org, Autopoiesis & Enterprises
- http://www.astro.cf.ac.uk/pub/Jos.Thijssen/sandexpl.html - Java sandpile
- http://www.dimacs.rutgers.edu/Projects/Simulations/darpa/ - Scalable Self-Organizing Simulations
- http://www.evalife.dk/cycliophora/cycliophora.html - EVALife: Self-organisation in life-cycles
- http://www.geo.uni-bonn.de/members/hergarten/self-organ.html - Self-organization and fractals
- http://www.iephb.ru/spirov.html - self-organisation in biology
- http://www.labs.bt.com/projects/ibsr/dynamo.htm - Self-Organising Adaptive Systems
- http://www.sandia.gov/media/atomorg.htm - Self-Organising Nanopatterns
- http://www.wolfram.com/s.wolfram/articles/82-cellular/index.html - CAs as SOS
Specialist Papers
- http://www.fes.uwaterloo.ca/u/mbldemps/pubs/mesthe/ - A Self-Organizing Systems Perspective on Planning For Sustainability
- http://www.santafe.edu/~wuensch/thesis.html - Attractor Basins of Discrete Networks: Implications on self-organisation and memory
- http://goertzel.org/dynapsyc/1996/fred.html - Chaos, Bifurcations & Self-Organization: Dynamical Extensions of Neurological Positivism & Ecological Psychology
- http://newton.uor.edu/FacultyFolder/JSpee/iaf99/Thread1/conway.html - Conditions That Support Self-Organization in A Complex Adaptive System
- http://platon.ee.duth.gr/~soeist7t/paper//kueppers2.html - Coping with Uncertainty: The Self-Organisation of Social Systems
- http://www.cis.hut.fi/~sami/thesis/thesis_tohtml.html - Data Exploration Using Self-Organizing Maps
- http://www.ifs.tuwien.ac.at/ifs/research/pub_html/rau_wirn98/wirn98.html - Distributed Digital Library based on Self-Organizing Maps
- http://pikas.inf.tu-dresden.de/~fritzke/research/incremental.html - Growing Self-Organizing Networks
- http://www.radix.net/~ash2jam/holarchy.htm - Holarchies: The Metapattern of the Self-Organizing Universe
- http://bactra.org/Self-organization/soup-done/ - Is the Primordial Soup Done Yet ?
- http://www.democracynature.org/dn/vol6/best_kellner_kelly.htm - Kevin Kelly's Complexity Theory: The Politics & Ideology of Self-Organizing Systems
- http://iaix7.informatik.htw-dresden.de/~muellerj/selforgn.htm - Knowledge Extraction from Data Using Self-Organizing Modeling Technologies
- http://www.calsoft-japan.com/techcenter/research/tree.html - Life-time Selection and Self-Organization in Tree Growth
- http://www.qedcorp.com/pcr/pcr/Kauffman.htm - Of Flesh and Ghosts: Self-Organization as Post-Quantum Physics
- http://platon.ee.duth.gr/~soeist7t/paper//krieger1.html - Operationalizing Self-Organization Theory for Social Science Research
- http://www.c3.lanl.gov/~rocha/ises.html - Selected Self-Organization
- http://armyant.ee.vt.edu/unsalWWW/cemsthesis.html - Self-Organisation in Large Populations of Mobile Robots
- http://www.fes.uwaterloo.ca/u/jjkay/pubs/thesis/toc.html - Self-Organization In Living Systems
- http://life.csu.edu.au/esa/esa97/papers/johnson/johnson.htm - Self-Organising in Spatial Competition Systems
- http://ciiiweb.ijs.si/dialogues/r-detela.htm - Self-Organization within Complex Quantum States
- http://www.tec.spcomm.uiuc.edu/nosh/icasost/nc.html - Self-Organizing Systems Research in the Social Sciences:
- http://www.qedcorp.com/pcr/pcr/Kauffman.htm - Self-Organization as Post-Quantum Physics
- http://www.rwcp.or.jp/people/yk/CCM/HICSS27/paper/CCM-ProblemSolving.html - Stochastic problem solving by SO
- http://goertzel.org/dynapsyc/1999/AutopoiesisPaper.htm - The Sameness of Difference: Self-Organisation & the Evolution of Counselling Theory
- http://www.weiterbildung.unizh.ch/texte/soisoc.shtml - The Self-Organizing Information Society
General Complexity Resources
- http://life.csu.edu.au/complex/library/biblio/ - Complex Systems Bibliography
- http://lslwww.epfl.ch/~moshes/alife_links.html - Complex Adaptive Systems
- http://lumpi.informatik.uni-dortmund.de/alife - Complex Systems & ALife
- http://necsi.org/ - New England Complex Systems Institute
- http://pespmc1.vu.ac.be - Principia Cybernetica Web Project, philosophical aspects
- http://www.prototista.org/ - ProtoTista complexity education
- http://tornade.ere.umontreal.ca/~philippp/Back_to_basics - Complex Systems Theory
- http://views.vcu.edu/complex - VCU complexity research group
- http://www.alcyone.com/max/links/alife - Artificial Life links
- http://www.brint.com/Systems.html - Complex Systems & Chaos Theory
- http://www.ccs.fau.edu - The Center for Complex Systems
- http://www.cpm.mmu.ac.uk/~bruce/combib - Measures of Complexity
- http://www.fmb.mmu.ac.uk/~bruce/evolcomp - What is complexity ?
- http://dllab.caltech.edu/avida/ - Avida (The Digital Life Laboratory)
- http://www.physics.uiuc.edu/groups/complex.html - Complex & Nonlinear science
- http://www.radix.net/~crbnblu/assoc/oconnor/chapt1.htm - Systems Thinking
- http://www.santafe.edu/ - Santa Fe Institute
- http://www.serve.com/~ale/html/cplxsys.html - Complex Adaptive Systems
- http://www.trincoll.edu/depts/psyc/homeokinetics/ - Homeokinetics
8.3 What books can I read on this subject ?
Reviews are available for some of the books listed here, and for others covering wider complexity-related topics.
- Adami, Christoph. Introduction to Artificial Life (1998 Telos/Springer-Verlag). A good introduction with included Avida software, covering the main concepts and maths - see http://www.telospub.com/catalog/PHYSICS/ALife.html
- Ashby, W. Ross. An Introduction to Cybernetics (1957 Chapman & Hall). The earliest introduction to the applicability of cybernetics to biological systems, now reprinted on the Web. Recommended - see http://pcp.vub.ac.be/books/IntroCyb.pdf
- Ashby, W. Ross. Design for a Brain - The Origin of Adaptive Behaviour (1960 Chapman & Hall).
- Auyang, Sunny Y. Foundations of complex system theories: in economics, evolutionary biology and statistical physics (1998 Cambridge University Press).
- Badii and Politi. Complexity: Hierarchical structures and scaling in physics (1997 Cambridge University Press). Technical and detailed review of the scope and limitations of current knowledge - see http://www1.psi.ch/~badii/book.html
- Bak, Per. How Nature Works - The Science of Self-Organized Criticality (1996 Copernicus). Power Laws and widespread applications, approachable.
- Bar-Yam, Yaneer. Dynamics of Complex Systems. (1997 Addison-Wesley). Mathematical and wide-ranging - see http://www.necsi.org/publications/dcs/
- Beer, Stafford. Decision and Control (1967 Wiley, New York)
- Blitz, David. Emergent Evolution: Qualitative Novelty and the Levels of Reality (1992 Kluwer Academic Publishers)
- Boden, Margaret (ed). The Philosophy of Artificial Life (1996 OUP). Essays on the concepts within the field, good background reading.
- Buckminster-Fuller, Richard. Synergetics. (1979 Macmillan Publishing Co. Inc). Geometry based - see http://www.rwgrayprojects.com/synergetics/synergetics.html
- Capra, Fritjof. The Web of Life: A New Synthesis of Mind and Matter. (1996 Harper Collins). Good non-technical introduction to the general ideas.
- Casti, John. Complexification: explaining a paradoxical world through the science of surprise (1994 HarperCollins). Takes a mathematical viewpoint, but not over technical.
- Cameron and Yovits (Eds.). Self-Organizing Systems (1960 Pergamon Press)
- Chaitin, Gregory. Algorithmic Information Theory (? Cambridge University Press) - see http://www.cs.auckland.ac.nz/CDMTCS/chaitin
- Cilliers, Paul. Complexity and Postmodernism. (1998 Routledge). Philosophy oriented.
- Cohen and Stewart. The Collapse of Chaos - Discovering Simplicity in a Complex World (1994 Viking). Excellent and approachable analysis.
- Coveney and Highfield. Frontiers of Complexity (1995 Fawcett Columbine). Well referenced and historically situated.
- Deboeck and Kohonen. Visual Explorations in Finance with Self-Organizing Maps (1998 Springer-Verlag)
- Eigen, Manfred. The Self Organization of Matter (?)
- Eigen and Schuster. The Hypercycle: A principle of natural self-organization (1979 Springer)
- Eigen and Winkler-Oswatitsch. Steps Toward Life: a perspective on evolution (1992 Oxford University Press)
- Emmeche, Claus. The Garden in the Machine: The Emerging Science of Artificial Life (1994 Princeton). A philosophical look at life and the new fields, approachable - see http://alf.nbi.dk/~emmeche/publ.html
- Formby, John. An Introduction to the Mathematical Formulation of Self-organizing Systems (1965 ?)
- Forrest, Stephanie (ed). Emergent Computation: Self-organising, Collective and Cooperative Phenomena in Natural & Artificial Computing Networks (1991 MIT)
- Gell-Mann, Murray. The Quark and the Jaguar - Adventures in the Simple and the Complex (1994 Little, Brown & Company). From a quantum viewpoint, popular.
- Gleick, James. Chaos - Making a New Science (1987 Cardinal). The most popular science book related to the subject, simple but a good start.
- Goldstein, Jacobi & Yovits (Eds.). Self-Organizing Systems (1962 Spartan)
- Goodwin, Brian. How the Leopard Changed Its Spots: The Evolution of Complexity (1994 Weidenfeld & Nicolson, London). Self-organization in the development of biological form (morphogenesis), an excellent overview.
- Goodwin & Saunders (Eds.). Theoretical Biology: Epigenetic and Evolutionary Order from Complex Systems (1992 Johns Hopkins University Press)
- Haken, Hermann. Synergetics: An Introduction. Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology, Third Revised and Enlarged Edition. (1983 Springer-Verlag)
- Haken, Hermann. Advanced Synergetics: Instability Hierarchies of Self-Organizing Systems and Devices. (1983 First Edition Springer-Verlag)
- Holland, John. Adaptation in Natural and Artificial Systems: An Introductory Analysis with applications to Biology, Control & AI (1992 MIT Press)
- Holland, John. Emergence - From Chaos to Order (1998 Helix Books). Excellent look at emergence and rule-based generating procedures.
- Holland, John. Hidden Order - How adaptation builds complexity (1995 Addison Wesley). Complex Adaptive Systems and Genetic Algorithms, approachable.
- Jantsch, Erich. The Self-Organizing Universe: Scientific and Human Implications of the Emerging Paradigm of Evolution (1979 Oxford)
- Johnson, Steven. Emergence (2001 Penguin). A nice overview of self-organization in action in many areas.
- Kampis, George. Self-modifying systems in biology and cognitive science: A new framework for dynamics, information, and complexity (1991 Pergamon)
- Kauffman, Stuart. At Home in the Universe - The Search for the Laws of Self-Organization and Complexity (1995 OUP). An approachable summary - see http://www.santafe.edu/sfi/People/kauffman/
- Kauffman, Stuart. The Origins of Order - Self-Organization and Selection in Evolution (1993 OUP). Technical masterpiece - see http://www.santafe.edu/sfi/People/kauffman/
- Kelly, Kevin. Out of Control - The New Biology of Machines (1994 Addison Wesley). General popular overview of the future implications of adaptation - see http://panushka.absolutvodka.com/kelly/5-0.html
- Kelso, Scott. Dynamic Patterns: The Self-Organisation of Brain and Behaviour (1995 MIT Press) - see http://bambi.ccs.fau.edu/kelso/
- Kelso, Mandell, Shlesinger (eds.). Dynamic Patterns in Complex Systems (1988 World Scientific)
- Klir, George. Facets of Systems Science (1991 Plenum Press)
- Kohonen, Teuvo. Self-Organization and Associative Memory (1984 Springer-Verlag)
- Kohonen, Teuvo. Self-Organizing Maps: Springer Series in Information Sciences, Vol. 30 (1995 Springer) - see http://www.cis.hut.fi/nnrc/new_book.html
- Langton, Christopher (ed.). Artificial Life - Proceedings of the first ALife conference at Santa Fe (1989 Addison Wesley). Technical (several later volumes are available but this is the best introduction).
- Levy, Steven. Artificial Life - The Quest for a New Creation (1992 Jonathan Cape). Excellent popular introduction.
- Lewin, Roger. Complexity - Life at the Edge of Chaos (1993 Macmillan). An excellent introduction to the general field.
- Mandelbrot, Benoit. The Fractal Geometry of Nature (1983 Freeman). A classic covering percolation and self-similarity in many areas.
- Nicolis and Prigogine. Self-Organization in Non-Equilibrium Systems (1977 Wiley)
- Nicolis and Prigogine. Exploring Complexity (1989 Freeman). Within physico-chemical systems, technical.
- Pines, D. (ed). Emerging Syntheses in Science, (1985 Addison-Wesley)
- Pribram, K.H. (ed). Origins: Brain and Self-organization (1994 Lawrence Erlbaum)
- Prigogine & Stengers. Order out of Chaos (1985 Flamingo). Non-equilibrium & dissipative systems, a popular early classic.
- Salthe, Stan. Evolving Hierarchical Systems (1985 New York)
- Schroeder, Manfred. Fractals, Chaos, Power Laws - Minutes from an Infinite Paradise (1991 Freeman & Co.). Self-similarity in all things, technical.
- Schweitzer, Frank (ed.). Self-Organisation of Complex Structures: From Individual to Collective Dynamics (1997 Gordon and Breach) - see http://catalog.gbhap-us.com/fc3/catalog?/books/TITLE_REC_0007814
- Sherman and Schultz. Open Boundaries: Creating Business Innovation through Complexity (1998 Perseus Books). The philosophy of company self-organization.
- Sprott, Clint. Strange Attractors: Creating Patterns in Chaos (? M&T Books). Exploring types of attractor with generating programs - see http://sprott.physics.wisc.edu/sa.htm
- Stanley, H.E. Introduction to Phase Transitions and Critical Phenomena (1971 OUP)
- Stewart and Cohen. Figments of Reality: The Evolution of the Curious Mind. (1997 Cambridge University Press).
- Turchin, Valentin F. The Phenomenon of Science: A Cybernetic Approach to Human Evolution (1977 Columbia University Press). An online book covering similar concepts from an earlier viewpoint - see http://pespmc1.vub.ac.be/PoS/
- von Bertalanffy, Ludwig. General Systems Theory (1968 George Braziller)
- von Foerster and Zopf (Eds.). Principles of Self-Organization (1962 Pergamon)
- von Neumann, John. Theory of Self Reproducing Automata (1966 Univ.Illinois)
- Waldrop, Mitchell. Complexity - The Emerging Science at the Edge of Order and Chaos (1992 Viking). Popular scientific introduction.
- Ward, Mark. Universality: The Underlying Theory behind Life, the Universe and Everything (2002 Pan). A somewhat hyped popular look at self-organized criticality under another name.
- Wolfram, Stephen. Cellular Automata and Complexity: Collected Papers, (1994 Addison-Wesley). Deep look at mostly 1D CAs and order/complexity/chaos classes - see http://www.stephenwolfram.com/publications/books/ca-reprint/
- Yates, F.Eugene (ed). Self-Organizing Systems: The Emergence of Order (1987 Plenum Press)
9. Miscellaneous
9.1 How does self-organization relate to other areas of complex systems ?
Many studies of complex systems assume that the systems self-organize into emergent states which are not predictable from the parts. Artificial Life, Evolutionary Computation (including Genetic Algorithms), Cellular Automata and Neural Networks are the main fields directly associated with this idea, all of which fall under the general auspices of Complex Systems or Complexity Theory.
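One minimal way to see the idea at work is an elementary one-dimensional cellular automaton: each cell holds 0 or 1 and is updated from its own state and its two neighbours by a fixed eight-entry rule table, yet the global patterns that develop are hard to anticipate from that table alone, which is the sense in which they are called emergent. The Python sketch below is purely illustrative (it is not taken from any of the packages or papers listed in this FAQ); the rule number (110), lattice width and step count are arbitrary choices.

# Minimal illustrative sketch: elementary cellular automaton (Wolfram rule numbering).
# Simple local rules; complex global patterns emerge when run.

def step(cells, rule=110):
    """One synchronous update of a radius-1, binary CA on a ring."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value 0..7
        new.append((rule >> neighbourhood) & 1)               # look up rule bit
    return new

def run(width=64, steps=32, rule=110):
    cells = [0] * width
    cells[width // 2] = 1                                     # single seed cell
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells, rule)

if __name__ == '__main__':
    run()

Running it prints one row of '#'/'.' characters per time step; changing the rule constant (e.g. to 30 or 90) shows how different local tables give qualitatively different global behaviour, from apparent randomness to nested self-similar triangles.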
9.2 Which Newsgroups are relevant ?
- comp.theory.self-org-sys - self organizing systems & sponsor of this FAQ
- comp.ai - artificial intelligence
- comp.ai.alife - artificial life
- comp.ai.genetic - genetic algorithms and evolutionary computation
- comp.ai.neural-nets - neural networks
- comp.robotics - robotics
- comp.theory.cell-automata - cellular automata
- comp.theory.dynamic-sys - dynamic systems
- sci.bio.evolution - natural organization and evolution
- sci.fractals - fractal and self-similar systems
- sci.nonlinear - nonlinear and chaotic systems
- sci.systems - systems
9.3 Which Journals are relevant ?
Some journals (both online and printed) which relate to complexity and self-organisation are:
- InterJournal http://www.interjournal.org/ - Self-Organization
- Adaptive Behaviour http://www.adaptive-behavior.org/journal/
- Chaos http://ojps.aip.org/chaos/
- Complexity http://www.interscience.wiley.com/jpages/1076-2787/
- Complexity in Human Systems http://www.systems.org/HTML/Chs-room.htm
- Complexity International http://journal-ci.csse.monash.edu.au/
- Cybernetics and Human Knowing http://www.imprint.co.uk/C&HK/cyber.htm
- Discrete Dynamics in Nature and Society http://journals.wiley.com/1076-2787/
- Dynamic Psychology http://goertzel.org/dynapsyc/dynapsyc.html
- Emergence: Complexity Issues in Organizations and Management http://emergence.org/front.htm
- HyperPSYCOLOQUY http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy
- International Journal of Futures Studies http://www.systems.org/HTML/fsj-room.htm
- Journal of Artificial Societies and Social Simulation http://www.soc.surrey.ac.uk/JASSS/JASSS.html
- Noetica Cognitive Psychology http://www.cs.indiana.edu/Noetica/toc.html
- Regular and Chaotic Dynamics http://web.uni.udm.ru/~rcd/
- Santa Fe Institute Bulletin http://www.santafe.edu/sfi/publications/Bulletins/
- Studies in Nonlinear Dynamics & Econometrics http://mitpress.mit.edu/e-journals/SNDE/
- The Interscience Review http://hermes-op.com/inscirev/inscirev.html
- U.K. Non-Linear News http://www.amsta.leeds.ac.uk/Applied/news.dir/
9.4 Updates to this FAQ
This FAQ has been compiled and is maintained by Chris Lucas of the CALResCo Group. Comments, suggestions, requests for additions and particularly criticisms and corrections are warmly welcomed. Please feel free to EMail me at CALResCo or post relevant messages to the Usenet newsgroup comp.theory.self-org-sys for discussion.
9.5 Acknowledgements
Thanks are due to many people who have contributed to this FAQ either directly, by discussion and questions, or by influential publications. Especially (in alphabetical order):
Per Bak, Jack Cohen, Kelle Cruz, Erik Francis, Stephan Halloy, Tim Haug, Francis Heylighen, Josh Howlett, Stuart Kauffman, David Kirshbaum, Chris Langton, William Latham, Graeme McCaffery, Yuriy Milov, Mike Monkowski, Gary Nelson, Joseph O'Connor, David O'Neal, Craig Reynolds, Zed Shaw, Peter Small, Clint Sprott, Ian Stewart, Stephen Wolfram, Andy Wuensche, Qi Zeng.
Particular thanks are due to Pete Brown of Mountain Man Graphics, Australia who kindly performed the initial HTML conversion of this document.
9.6 Disclaimers
Usual get-out clauses apply: I take no responsibility for any errors contained in the information presented here, or for any damages resulting from its use. The information is, however, accurate as far as I am aware.
This FAQ may be posted in any newsgroup, mail list or BBS as long as it remains intact and contains the following copyright notice. This document may not be used for financial gain or included in commercial products without the express permission of the author.
Copyright 1997/8/9/2000/1/2/3/4/5/6/8/11/12 Chris Lucas, all rights reserved.