Nature = computational process, i.e. clockwork Universe more like a program than a clock The technological revolution of the early 1900s led some scientists to believe that Nature is basically a computational process. An extension of the clockwork-Universe idea, this philosophy regards the entire Universe as a gigantic information-processing system, a cosmic computer. The laws of Nature serve as the programming, the initial conditions at the origin of the Universe are the input, and the events of the world are the output.

 formalism, a philosophy of mathematics that states there is a computational procedure to prove theorems, leads Turing to design a logic machine to understand the Universe Hilbert conjectured that there might be a computational procedure for proving mathematical theorems. But at the same time there were several disturbing developments associated with the concept of infinity and various logical paradoxes of self-reference (see below). Turing took up the search for a mathematical `machine', a device that could determine the truth of mathematical statements without human involvement by following a deterministic sequence of instructions: a mechanism that would settle the truth or falsity of every possible statement about numbers (and hence Nature) that could be made.

 instead of demonstrating that the Universe was computable, Turing's machine finds that there exist uncomputable numbers i.e. there are flaws in the mathematical Universe A Turing machine was a device for transforming one string of symbols into another by following a fixed set of rules. The rules could be tabulated, and the device worked step by step much like a modern-day computer. Such a machine could do the basic arithmetic taught to all schoolchildren, such as calculating fractions or square roots. Turing then considered a list of all computable numbers, those numbers which could be generated with a finite set of instructions on a Turing machine. Such a list would itself be infinitely long, but Turing was able to show that the list could be used to construct numbers which could not be anywhere on the list, i.e. uncomputable numbers. The existence of uncomputable numbers implies that there must be undecidable mathematical propositions. This undecidability flatly contradicts Hilbert's conjecture about the mechanization of mathematics: some theorems cannot be proved or disproved by any systematic general procedure.
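The core of this argument is the diagonal trick, which can be sketched in a few lines of Python. This is a toy illustration, not Turing's actual construction: the rows of digits below are made up for the demo, standing in for an enumerated list of decimal expansions.

```python
def diagonal_number(rows):
    """Cantor/Turing diagonal trick: given any enumerated list of digit
    expansions (row n standing for the n-th 'computable' number), build
    a new expansion whose n-th digit differs from row n's n-th digit."""
    return [(row[i] + 1) % 10 for i, row in enumerate(rows)]

# a stand-in for the (infinite) list of computable expansions
rows = [[1, 4, 1, 5], [2, 7, 1, 8], [3, 3, 3, 3], [1, 6, 1, 8]]
d = diagonal_number(rows)
print(d)  # differs from every row at the diagonal, so it is not on the list
```

Since the constructed number disagrees with every listed number somewhere, it cannot appear on the list, which is exactly how the uncomputable numbers arise.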

 examples of these flaws are found in paradoxes, contradictions in logic The liar's paradox was noted by the ancient philosopher Epimenides. In one version, Socrates makes the single statement `What Plato says is false', while Plato makes the single statement `What Socrates says must be true'. The combination of these two statements immediately leads to the conclusion that what Socrates says must be both true and false at the same time, a contradiction. Another variation of this paradox is the calling card distributed by the mathematician Philip Jourdain, with the following messages printed on opposite sides: `The sentence on the other side of this card is true' and `The sentence on the other side of this card is false'. The common element of a paradox is that things follow a reductionist system, a set of rules or recipes. This leads us to form expectations about events of the world, and when the end result of these rules does not match our expectations, there is a sense of surprise.

 there are visual paradoxes as well as symbolic ones Other examples are `This sentence is false' (if the sentence is true, then it follows that it is false; if the sentence is false, then it is true), `I am a liar', and `This sentence cannot be proved true'. Visual versions of the same self-reference include Escher's drawing hands and his impossible waterfall.

 modern science hinges on formal logical systems as means to understand the patterns of Nature logic systems are composed of rules and symbols, abstract representations of structure that we use to deduce Nature The road to truth is paved by formal logical systems, and a paradox is a statement about the limitations of logic, and thereby of mathematics itself. Logical systems are idealized, abstract languages originally developed by modern logicians as a means of analyzing the concept of deduction. Logical models are structures which may be used to provide an interpretation of the symbolism embodied in a formal system. Together the concepts of formal system and model constitute one of the most fundamental tools employed in modern physical theories. A formal logical system is a collection of abstract symbols, together with a set of rules for assembling the symbols into strings. Such a system has four components: 1) an alphabet, a set of abstract symbols, 2) a grammar, rules which specify the valid ways one can combine the symbols, 3) axioms, a set of well-formed statements accepted as true without proof, and 4) rules of inference, procedures by which one can combine and transform axioms into new strings. How does a formal system relate to the mathematics that we use to describe Nature? One constructs a dictionary that attaches meaning to the abstract, purely syntactic symbols and strings of the formal system, mapping them onto the semantics of a mathematical one. Not all that can be imagined is possible (for example, time travel), and it is sometimes possible to prove that something is impossible. Consider the tiling of dominoes on a checkerboard. There are billions of possible combinations for tiling a full board, and it is easy to find a solution if two adjacent corners are removed. But is it possible to tile the board with two opposite corners removed?

 examples of limitations to formal systems are the domino problem and chess problems The answer is no, although it would take years of supercomputer time to test every possible combination. A shorter solution is to notice that a domino must cover one black and one white square no matter how it is oriented. Removing two adjacent corners removes one black and one white square, leaving equal numbers of black and white squares, so a tiling remains possible. But removing two opposite corners removes two squares of the same color, leaving 30 squares of one color and 32 of the other; since each of the 31 dominoes must cover one square of each color, no tiling solution can be found.
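The parity argument can be checked directly. Below is a small Python sketch (the function name and board encoding are my own); matched color counts are a necessary condition for a tiling, which is all the argument needs.

```python
def can_tile_by_parity(removed):
    """Checkerboard parity test for tiling an 8x8 board, minus the
    squares in `removed`, with 2x1 dominoes.  Every domino covers one
    black and one white square, so equal counts of each color must
    remain for a tiling to be possible."""
    # color a square by coordinate parity: 0 = one color, 1 = the other
    remaining = [(r + c) % 2 for r in range(8) for c in range(8)
                 if (r, c) not in removed]
    return remaining.count(0) == remaining.count(1)

print(can_tile_by_parity({(0, 0), (0, 7)}))  # adjacent corners: True
print(can_tile_by_parity({(0, 0), (7, 7)}))  # opposite corners: False
```

The point of the example is that a two-line counting argument settles a question that brute-force search over billions of configurations cannot touch.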

 Note for the chess problem above, white's pawn takes black's rook is an irresistible move for a numerical computer. However, a human player quickly realizes that this is a drawn position and that White must not release Black's pawns.

 the study of paradoxes is very useful to modern science since it quickly demonstrates where the problems are in the questions, not the lack of answers or solutions Most paradoxes end up being statements without meaning or context, such as `What's North of the North Pole?' or `What was before the Big Bang?'. But others appear to be statements that are literally unprovable due to flaws in our logic systems.

Incompleteness:

 formalism requires true/false in finite number of steps The development of computational mathematics in the late 1800s raised the hypothesis that every mathematical statement could be shown to be true or false in a finite number of steps. The idea that mathematics is nothing but symbol manipulation is known as formalism.

 Godel demonstrates that there exist mathematical statements that are undecidable The formalist interpretation of mathematics was overturned in the early 1930s by the Austrian logician Kurt Godel. Godel produced a sweeping theorem showing that there exist mathematical statements for which no systematic procedure can determine whether they are true or false. There exist undecidable propositions in mathematics.

 his theorem is extended to any formal system, like science A mathematical system is called consistent if a statement S and its negation are not both theorems of the system. Godel's Incompleteness theorem states that any consistent formal system rich enough to include arithmetic contains arithmetic truths unprovable within that system. Consider a formal system M. Some statements in M are true and some are false. As we prove a statement to be true, it becomes a theorem (colored yellow below). Others are proven false (shown as black below). Godel's Incompleteness theorem says that there will always be statements, like G, that are eternally doomed to be proven neither true nor false.

 for any system, there exist statements which must be taken on faith Statements like G are called undecidable since they can neither be proved nor disproved within the framework of the formal system M. Even if G is made an axiom, an unproven statement accepted as true, there will exist some other statement G' in the new system that is undecidable. If there are true statements that can never be proven true, then these statements must have a Platonic type of existence. In other words, at the bottom of any logic system (such as science) are statements that must be taken on faith alone.
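Godel's construction begins by encoding statements as numbers, so that arithmetic can make statements about itself. A minimal Python sketch of such a Godel numbering follows; the symbol codes here are arbitrary, chosen only for the demo.

```python
def primes(n):
    """First n primes, by trial division."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(statement, code):
    """Encode a string of symbols as 2^c1 * 3^c2 * 5^c3 * ...; unique
    prime factorization guarantees the statement can be recovered."""
    n = 1
    for p, s in zip(primes(len(statement)), statement):
        n *= p ** code[s]
    return n

code = {'0': 1, '=': 2, 'S': 3}      # hypothetical symbol codes
print(godel_number('0=0', code))     # 2**1 * 3**2 * 5**1 = 90
```

Because every statement gets a unique number, a statement about numbers can end up being a statement about statements, which is the self-reference that makes G possible.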

Artificial Life:

 artificial life studies investigate the problem of highly deterministic, yet undecidable systems in computational worlds it is possible to demonstrate that randomness and uncertainty are built into the restrictions of the logic the Turing test and Penrose hypothesis indicate that human thought transcends rationalism There exist uncomputable operations in artificial life structures such as the game of Life. There is no systematic way to decide in advance whether a given mathematical problem is decidable or undecidable by the operation of a Turing machine, i.e. the fate of the machine cannot be known in advance. Therefore, the fate of cellular automata patterns cannot be systematically known in advance, even though all such patterns are strictly deterministic. Randomness and uncertainty are built into the Universe by the restrictions of logic itself, as soon as systems become complex enough to engage in self-reference. The Turing test is an example of this: can a machine fool a human interrogator into thinking it is actually a human? Interchanges have always failed to convince the judges, with the mistakes rooted in the machine's lack of everyday experience and common sense. Are we testing thinking or behavior? The Turing test says nothing about the architecture of the processing unit. Penrose believes that machines will never duplicate human thought because we are capable of transcending rational thought, the method of following rules or algorithms to arrive at a result by logical inference. The concepts of computable truths and Turing machines led to the development of artificial life and cellular automata.

Conway's Life

Life Library
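For reference, one generation of Conway's Life can be computed in a few lines. This sketch stores the live cells as a set of coordinates on an unbounded grid (a common implementation choice, not the only one):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life.  `live` is a set of (x, y)
    cells.  A cell is alive next step if it has exactly 3 live
    neighbors, or 2 live neighbors and is alive now."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}       # a period-2 oscillator
print(life_step(blinker))                # flips to a vertical bar
print(life_step(life_step(blinker)) == blinker)
```

The rules fit on a napkin, yet, as the text notes, the long-term fate of an arbitrary starting pattern cannot be systematically predicted without simply running it.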

 complex systems are defined as those systems which are very sensitive to initial conditions, the appearance of randomness comes from errors in our knowledge The behavior of complex systems is not truly random; it is just that the final state is so sensitive to the initial conditions that it is impossible to predict the future behavior without infinite knowledge of all the motions and energies (e.g. a butterfly flapping its wings in South America can influence storms in the North Atlantic).

 examples in Nature are many Although this is `just' a mathematical game, there are many examples of the same shapes and complex behavior occurring in Nature.

Deterministic Chaos:

 the key to the failure of determinism is the false assumption that determinism requires perfect predictability It has always been assumed that determinism goes hand in hand with predictability. For example, consider the following geometric construction. Each point on the top line is uniquely associated with a point on the bottom line. Any point P' close to a point P will have a corresponding point Q' close to Q. Small errors in our knowledge of the position of P will only produce small errors in our knowledge of the position of Q. If this were a physical system, then we would call it predictable.

 perfect predictability would be similar to the linear correspondence of points on a line, when in fact there are many examples of correspondence that is not linear In contrast to the above situation, P and P' on the arc of a circle are associated with the more widely separated points Q and Q' (where Q is found by drawing a line from the top of the circle through P). The sensitivity becomes more pronounced as P gets closer to the top of the circle. Slight errors in P produce big errors in the location of Q despite the fact that points on the horizontal line are uniquely determined by those on the circle.
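This circle-to-line construction can be made concrete. Assuming the setup is projection from the top of a unit circle onto a tangent line below it (my reading of the figure described above), a short Python check shows the error amplification blowing up as P nears the top of the circle:

```python
import math

def project(phi):
    """Point P = (cos phi, sin phi) on the unit circle, projected from
    the circle's top point (0, 1) onto the tangent line y = -1."""
    return 2 * math.cos(phi) / (1 - math.sin(phi))

eps = 1e-3            # a small error in the position of P
amps = []
for phi in (0.0, 1.0, 1.5):   # P moving toward the top (phi -> pi/2)
    amp = abs(project(phi + eps) - project(phi)) / eps
    amps.append(amp)
    print(f"phi={phi}: error in Q is ~{amp:.1f} times the error in P")
```

Each point on the circle still determines a unique point on the line, yet near the top the same small error in P produces an error in Q hundreds of times larger, exactly the sensitivity the text describes.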

 in fact, in many systems the error in our knowledge grows faster than our ability to measure a parameter to the desired precision; this is called a chaotic system Are things chaotic because we lack sufficient information? Consider a line. To label every point on the line, you need not only all the rational numbers, but all the irrational numbers as well. The typical real number can only be expressed as a decimal expansion consisting of an infinite string of digits with no systematic pattern to it, a random sequence. One could use all the atoms in the entire Universe and still not be able to encode the information to record one irrational number with complete precision. Determinism implies predictability only in the idealized limit of infinite precision. The Universe itself cannot know its own workings with absolute precision, and therefore cannot predict what will happen next in every detail. Deterministic chaos seems random because we are necessarily ignorant of the ultrafine details, and so is the Universe itself. The best way to study chaotic behavior, also called nonlinear behavior, is with a pendulum. Notice that complicated behavior does not necessarily imply complicated forces or laws. Even with deterministic laws, the future states of the Universe are open, and this bestows upon Nature an element of creativity, an ability to bring forth that which is genuinely new.

 outcomes appear random due to poor knowledge of initial conditions; if the error grows exponentially, then the system will remain unpredictable even when operating under deterministic rules or laws Chaotic systems are examples of unstable motion because trajectories with distinct initial conditions, no matter how close, diverge exponentially over time. In deterministic chaos the equations of motion are deterministic, but the outcome appears random due to the sensitivity to initial conditions. The errors in chaotic systems grow at an exponential rate, so the randomness of chaotic motion is not simply the result of our ignorance of the initial conditions, but rather is fundamental: ever more information must be processed to maintain the same level of accuracy, and thus the system is truly unpredictable. Indeed, when we consider chaos and incompleteness, we find that the world is rich in truth and creativity because of their existence, not in spite of it. Whatever real-world truths might exist, the overwhelming majority of them cannot be the counterparts of theorems in any formal logical system. The gap between proof and truth can be narrowed, but never closed.
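Exponential divergence is easy to demonstrate with the logistic map, a standard toy chaotic system (used here in place of the pendulum for brevity):

```python
def logistic_orbit(x, steps, r=4.0):
    """Iterate the logistic map x -> r x (1 - x), which is chaotic
    at r = 4; return the whole orbit as a list."""
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

# two orbits whose starting points differ by one part in ten billion
a = logistic_orbit(0.3, 40)
b = logistic_orbit(0.3 + 1e-10, 40)
for n in (0, 10, 20, 40):
    print(n, abs(a[n] - b[n]))   # separation grows with n
```

The rule is one line of deterministic arithmetic, yet the initially invisible difference is amplified roughly twofold per step until the two orbits bear no resemblance to each other.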

Irreversibility:

 Epicurus' clinamen = deterministic rules but irreversible Universe in fact, our laws of Nature are really abstractions of patterns that are only statistical in nature irreversibility leads to an important, necessary requirement for a complex Universe, states near nonequilibrium nonequilibrium allows for self-organization Classical physics is the science upon which our belief in a deterministic, time-reversible description of Nature is based. Classical physics does not include any distinction between the past and the future. The Universe is ruled by deterministic laws, yet the macroscopic world is not reversible. This is known as Epicurus' clinamen, the dilemma of being and becoming, the idea that some element of chance is needed to account for the deviation of material motion from rigid predetermined evolution. The astonishing success of simple physical principles and mathematical rules in explaining large parts of Nature is not something obvious from our everyday experience. On casual inspection, Nature seems extremely complex and random. There are few natural phenomena which display the precise sort of regularity that might hint at an underlying order. Where trends and rhythms are apparent, they are usually of an approximate and qualitative form. How are we to reconcile these seemingly random acts with the supposed underlying lawfulness of the Universe? For example, consider falling objects. Galileo realized that all bodies accelerate at the same rate regardless of their size or mass. Everyday experience tells you differently because a feather falls slower than a cannonball. Galileo's genius lay in spotting that the differences that occur in the everyday world are an incidental complication (in this case, air friction) and are irrelevant to the real underlying properties (that is, gravity). He was able to abstract from the complexity of real-life situations the simplicity of an idealized law of gravity.
Reversible processes appear to be idealizations of real processes in Nature. Probability-based interpretations make the macroscopic character of our observations responsible for the irreversibility that we observe. If we could follow an individual molecule, we would see a time-reversible system in which each molecule follows the laws of Newtonian physics. Because we can only describe the number of molecules in each compartment, we conclude that the system evolves towards equilibrium. Is irreversibility merely a consequence of the approximate macroscopic character of our observations? Is it due to our own ignorance of all the positions and velocities? Irreversibility leads to both order and disorder. Nonequilibrium leads to concepts such as self-organization and dissipative structures (spatiotemporal structures that appear in far-from-equilibrium conditions, such as oscillating chemical reactions, or regular spatial structures like snowflakes). Objects far from equilibrium are highly organized thanks to temporal, irreversible, nonequilibrium processes.

 complex structures in the Universe (e.g. lifeforms) are due to the behavior of ensembles; reductionism must fail for complex systems, like weather forecasting Individual descriptions are called trajectories; statistical descriptions of groups are called ensembles. Individual particles are highly deterministic; their trajectories are fixed. Yet ensembles of particles follow probable patterns and are uncertain. Does this come from ignorance of all the trajectories, or from something deeper in the laws of Nature? Any predictive computation will necessarily contain some input errors because we cannot measure physical quantities to unlimited precision. Note that relative probabilities evolve in a deterministic manner, so a statistical theory can remain deterministic. However, macroscopic irreversibility is the manifestation of the randomness of probabilistic processes on a microscopic scale. The success of reductionism was based on the fact that most simple physical systems are linear, where the whole is the sum of the parts. Complexity arises in nonlinear systems.
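The trajectory/ensemble distinction can be seen in a simple random walk, a stand-in example of my choosing: any single trajectory looks erratic and unpredictable, yet the ensemble statistics are lawful.

```python
import random

def walk(steps, rng):
    """One random-walk trajectory: each step is +1 or -1.
    Any individual endpoint is unpredictable."""
    x = 0
    for _ in range(steps):
        x += rng.choice((-1, 1))
    return x

rng = random.Random(1)                     # seeded for reproducibility
ends = [walk(100, rng) for _ in range(10000)]
mean = sum(ends) / len(ends)
var = sum(e * e for e in ends) / len(ends)
# individual endpoints scatter widely, but the ensemble obeys a law:
# mean ~ 0 and variance ~ number of steps
print(mean, var)
```

No amount of cleverness predicts where one walker ends up, but the distribution of ten thousand walkers evolves deterministically, which is the sense in which a statistical theory can remain deterministic.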