
Vol. 1, No. 4 / October 2015

An Algorithmic God

Gregory Chaitin


What follows is based on a talk given by the author at a meeting on Randomness in Quantum Physics and Beyond, Barcelona, May 4–8, 2015, organized by the Institute of Photonic Sciences with support from the John Templeton Foundation.

Originally, I wanted to be a physicist, not a mathematician, and when I was a student, a course on quantum mechanics would begin with Schrödinger’s equation. Now courses on quantum mechanics begin with qubits and quantum circuits. The concepts of information and computation provide a new perspective on traditional quantum mechanics.

And in other areas as well—in particular, epistemology.

I am interested in using software models at a meta level that can help us to understand what is possible in physics, mathematics, and biology.

I propose to measure the conceptual complexity of a theory in terms of its algorithmic information content.

Let me start with the metaphysics. In the 1960s, Ray Solomonoff and I independently proposed a simple model of empirical science. A scientific theory is a computer program that calculates experimental data. Both the theory and the empirical data are represented as finite strings of bits; the data must be explained bit for bit, without error. Solomonoff was interested in inference, artificial intelligence, machine learning, and practical applications, but my main interest was Gödel’s incompleteness theorems.

This model enables us to measure the conceptual complexity of a theory in terms of the number of bits in the software. A theory is useful to the extent that it contains a smaller number of bits than the data itself.
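
To make the measure concrete, here is a toy version of the model, mine rather than the author’s: the “experimental data” is a string of one million binary digits, and the competing “theory” is a short program that reproduces it exactly.

```python
import contextlib
import io

# A toy version of the model: a "theory" is a program (here, a string of
# Python source) that reproduces the experimental data exactly, and its
# conceptual complexity is measured by its size. Data and theory are
# hypothetical illustrations, not anything from the talk.

data = "01" * 500_000                      # one million binary digits of "data"
theory = 'print("01" * 500_000, end="")'   # a short program for the same data

# The data must be explained bit for bit, without error.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(theory)
assert buffer.getvalue() == data

# The theory is useful because it is far smaller than the data it explains.
print(f"data: {len(data):,} characters; theory: {len(theory)} characters")
```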

Data that cannot be compressed is algorithmically irreducible or structurally random.

Algorithmic randomness is related to, but different from, physical randomness; it concentrates on the result, not on the process by which it is generated, and on individual outcomes, not on ensembles. If a physical process can generate any of 2^N possible N-bit strings with equal probability, then this process is physically random; but algorithmic randomness distinguishes between typically unstructured outcomes and unexpectedly structured outcomes—for example, an outcome that is all 0s or all 1s.
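
Compression gives a rough, computable stand-in for this distinction; it provides only an upper bound on algorithmic information content, but the contrast is already visible. The example below is mine, not the author’s.

```python
import os
import zlib

# Compressed size is only an upper bound on algorithmic information content,
# but it already separates an unexpectedly structured outcome from a typical
# unstructured one. Both strings are 100,000 bytes long.

structured = bytes(100_000)    # the all-zeros outcome
typical = os.urandom(100_000)  # a typical outcome of fair coin tosses

for name, outcome in [("all zeros", structured), ("random bytes", typical)]:
    compressed = len(zlib.compress(outcome, 9))
    print(f"{name}: {len(outcome):,} bytes -> {compressed:,} bytes compressed")

# The structured outcome shrinks to a few hundred bytes; the typical one
# barely shrinks at all. It is, almost certainly, algorithmically random.
```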

And what is the link to incompleteness and metamathematics? It turns out that most bit strings are algorithmically random: there is no simpler theory to explain them. But you can almost never be sure that an individual bit string is irreducibly complex. You can almost never prove this mathematically, even though it is overwhelmingly likely to be the case! Similarly, you can almost never be sure you have the best theory, the simplest program that produces the output that it does.

David Hilbert had the idea of showing that mathematics could provide absolute truth, just so long as there existed a formal axiomatic theory for all of mathematics. This would be like an expanded version of Euclid’s system for geometry, except that the rules for carrying out proofs would be completely objective and completely mechanical. Hilbert’s idea was that if we could all agree on such a formal axiomatic theory, the subjective element would be removed from mathematics, at least in principle.

The connection between formal axiomatic theories and computation is already implicit in Alan Turing’s famous paper, “On Computable Numbers.”1 The connection was studied again by Emil Post.2 Both Turing and Post realized that a formal axiomatic theory is a computer program that runs through the tree of all possible deductions from the axioms, rejecting invalid arguments, and proving every possible theorem, one by one. In other words, a formal axiomatic theory is merely a program for mechanically generating the set of all possible theorems. This is Post’s concentrated essence of Hilbert’s idea.
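
A minimal sketch of Post’s picture, using a hypothetical toy system of my own (one axiom and two string-rewriting rules) in place of a real formal axiomatic theory:

```python
from collections import deque

# A formal axiomatic theory as a program that mechanically generates the set
# of all its theorems, one by one. The toy system below, with one axiom and
# two rewrite rules on equations between terms built from 0, S, and +, is a
# hypothetical stand-in for a real theory.

AXIOMS = ["0=0"]

def consequences(theorem):
    """Apply each inference rule to one theorem, yielding new theorems."""
    lhs, rhs = theorem.split("=")
    yield f"S{lhs}=S{rhs}"      # from x=y, infer Sx=Sy
    yield f"{lhs}+0={rhs}+0"    # from x=y, infer x+0=y+0

def enumerate_theorems(limit):
    """Run through the tree of all possible deductions, breadth first."""
    seen = set(AXIOMS)
    queue = deque(AXIOMS)
    produced = 0
    while queue and produced < limit:
        theorem = queue.popleft()
        yield theorem
        produced += 1
        for new in consequences(theorem):
            if new not in seen:
                seen.add(new)
                queue.append(new)

for t in enumerate_theorems(limit=7):
    print(t)
```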

We can now measure the conceptual complexity or algorithmic information content of a formal axiomatic theory via the number of bits of software in this computer program.

Why is this useful? Because it is easy to prove that an N-bit formal axiomatic theory cannot enable us to prove that a bit string with more than N bits is algorithmically irreducible.
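
The argument takes the standard form it has in algorithmic information theory; what follows is my gloss, not a passage from the talk. Suppose the N-bit theory proves, of some particular string s, that its complexity K(s) exceeds N + c. A fixed search program can run the theory, enumerate its theorems, and print the first string so certified; that program describes s using the N bits of the theory plus a constant overhead of c′ bits, so that

```latex
K(s) \;\le\; N + c' \;\le\; N + c \;<\; K(s) \qquad \text{whenever } c \ge c',
```

a contradiction. For a suitably chosen constant, then, no such proof exists.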

Information-theoretic incompleteness arises in any attempt to determine the binary expansion of a universal Turing machine’s halting probability, Ω. An N-bit theory can determine, at most, N bits of Ω, where Ω measures the probability that a randomly chosen program will halt.
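
The defining formula, which the talk does not spell out, is standard. Fix a universal computer U whose programs are self-delimiting bit strings, and set

```latex
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}, \qquad 0 < \Omega < 1,
```

where |p| is the length of the program p in bits. The self-delimiting, or prefix-free, requirement is what makes the sum converge, by the Kraft inequality.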

The bits of Ω are a perfect simulation of physical randomness—independent tosses of a fair coin, say. The infinite sequence of bits in Ω is maximally unstructured. 0s and 1s are equally likely; and Ω is a normal real number in the sense of Émile Borel. In every base, all blocks of digits of the same size are equally likely. Not only is Ω uncomputable, Ω is, in fact, maximally uncomputable, and so, an algorithmically and logically irreducible real number.

The individual bits in Ω violate the principle of sufficient reason because they are mathematical truths that are true for no reason. They have the value that they do for no reason simpler than themselves.

Whether each bit of Ω is a 0 or a 1 is both totally random and mathematically well-determined. The halting probability Ω is, after all, a real number whose value is precisely determined once a universal programming language is fixed.
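
One way to see how the two properties coexist, using a standard fact rather than anything stated in the talk: Ω can be approximated from below by computable stages,

```latex
\Omega_t \;=\; \sum_{p \,:\, U(p)\ \text{halts within}\ t\ \text{steps}} 2^{-|p|}
\;\nearrow\; \Omega \quad \text{as } t \to \infty,
```

so its value is perfectly well determined once the language is fixed; but the rate of convergence is not computable, and no algorithm can certify in advance where the digits settle down.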

The world of pure mathematics has infinite conceptual complexity, but any mathematical theory has finite conceptual complexity.

No mathematical theory can capture more than an infinitesimal part of mathematical truth!

Mathematicians have very often rejected incompleteness as insignificant. Mathematicians continue to believe that they, and not the physicists, are the unique possessors of absolute truth. But in the past few years, my work on biology has led me to a radical reinterpretation of incompleteness. Incompleteness means, as Post recognized, that mathematics is creative. Incompleteness is the first step toward a mathematical theory of creativity, a bridge between pure mathematics and biology.

In 1948, John von Neumann suggested that biology and computer science share the same fundamental idea: software.3 Software accounts both for the plasticity of the biosphere and for the plasticity of computer technology. It is an idea that was mathematically expressed for the first time in Turing’s 1936 paper, but DNA was around for a long time before Turing.

The random evolution of DNA is too complicated, too messy, to discuss mathematically. It is easier to study the random evolution of computer programs. Darwinian evolution has been characterized as design without a designer, and metabiology is about programming without a programmer.

Suppose that I have a single organism at a time undertaking a hill-climbing random walk in software space. Mutations are accepted only if they increase the fitness, and in my simplest model, the fitness of a software organism is the number it calculates. The bigger the number, the fitter the organism.
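
Here is a toy version of that random walk, mine rather than the author’s. The organisms are tiny arithmetic expressions, the fitness of an organism is the number it evaluates to, and a mutation survives only if it increases that number.

```python
import random

# A toy hill-climbing random walk in a tiny "software space". Organisms are
# arithmetic expressions over a small alphabet, fitness is the number an
# organism computes, and a mutation is accepted only if it increases fitness.
# This is a hypothetical stand-in for Chaitin's model, in which organisms are
# programs in a universal language.

ALPHABET = "0123456789+*"
MAX_LENGTH = 12  # keep the expressions, and the numbers they compute, small

def fitness(organism):
    """Return the number the expression computes, or None if it is invalid."""
    if len(organism) > MAX_LENGTH or "**" in organism:
        return None
    try:
        value = eval(organism, {"__builtins__": {}}, {})
    except Exception:
        return None
    return value if isinstance(value, int) else None

def mutate(organism):
    """A random point mutation: change, insert, or delete one character."""
    pos = random.randrange(len(organism))
    kind = random.choice(["change", "insert", "delete"])
    if kind == "change":
        return organism[:pos] + random.choice(ALPHABET) + organism[pos + 1:]
    if kind == "insert":
        return organism[:pos] + random.choice(ALPHABET) + organism[pos:]
    if len(organism) > 1:  # delete
        return organism[:pos] + organism[pos + 1:]
    return organism

organism, best = "1", 1
for step in range(20_000):
    candidate = mutate(organism)
    value = fitness(candidate)
    if value is not None and value > best:  # only improvements are accepted
        organism, best = candidate, value

print(f"fittest organism: {organism}  (computes {best})")
```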

Metabiology deals with software organisms and software mutations. In metabiology, the space of all possible organisms is modeled by the space of all possible programs in a fixed universal computer programming language. I believe that this is a space that is rich enough to provide a starting point for mathematical biology.

In pure mathematics beauty is a criterion for truth. A theory is true if it has beautiful proofs. But to be able to prove beautiful theorems about metabiological evolution, I was forced to use algorithmic mutations. An algorithmic mutation is a program that takes the original organism as input, and produces the mutated organism as output. This is a far cry from the indels and point mutations of molecular biology.

The good thing about this extremely general and powerful kind of mutation is that algorithmic information theory provides us with a natural a priori probability measure on the space of all possible mutations: if the mutation M is an N-bit program, then M has probability 2^-N. This is the same probability measure that we used in defining the halting probability Ω.
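
In symbols, with mutations written as self-delimiting programs M (my paraphrase, not a formula from the talk):

```latex
P(M) \;=\; 2^{-|M|}, \qquad \sum_{M} P(M) \;\le\; 1,
```

the sum converging by the same Kraft inequality that makes the halting probability Ω well defined.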

An intriguing feature of metabiology is the possibility of using algorithmic mutations to explain jumps in evolution, such as missing intermediate forms, the Cambrian explosion, and the transition from single to multicellular organisms.

So that is the current status of metabiology. It’s an embryonic theory, a proposed new research program providing a very abstract overview of biology. To me as a pure mathematician, metabiology is justified if it enables us to prove beautiful theorems. More generally, software models and current work on quantum information and randomness show the importance of an idealistic as opposed to a materialist conception of nature. Information and computation are better than matter and energy. I expect that this is just the beginning and that, if we are fortunate, we will continue to see idealism supersede materialism, particularly in the much-to-be-desired basic theoretical understanding of consciousness and mind.4


  1. Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society 42, ser. 2 (1937): 230–65. 
  2. See Emil Post, “Finite Combinatory Processes - Formulation 1,” Journal of Symbolic Logic 1 (1936): 103–5; Emil Post, “Formal Reductions of the General Combinatorial Decision Problem,” American Journal of Mathematics 65 (1943): 197–215; Emil Post, “Recursively Enumerable Sets of Positive Integers and Their Decision Problems,” Bulletin of the American Mathematical Society 50 (1944): 284–316. 
  3. See John von Neumann, “The General and Logical Theory of Automata,” in Cerebral Mechanisms in Behavior: The Hixon Symposium, ed. Lloyd Jeffress (New York: Wiley, 1951); John von Neumann, Theory of Self-Reproducing Automata (Champaign, IL: University of Illinois Press, 1966). 
  4. For more on such matters, please see Gregory Chaitin, “Conceptual Complexity and Algorithmic Information,” La Nuova Critica 61–2; Gregory Chaitin, Meta Math! The Quest for Omega (New York: Pantheon, 2005); Gregory Chaitin, Proving Darwin: Making Biology Mathematical (New York: Pantheon, 2012); or my website. See also Virginia Chaitin’s presentation “Metabiology and the Neo-Darwinian Synthesis” at the Beyond Center in Arizona on May 4, 2015, and Virginia Chaitin, “Metabiology, Interdisciplinarity and the Human Self-Image,” La Nuova Critica 61–2, both available from her website. Last but not least, see the work on metabiology at Felipe Abrahão’s website.

Gregory Chaitin is an Argentine-American mathematician living in Rio de Janeiro.

