I. HILBERT AND GÖDEL
Hilbert’s programme required that an axiomatic system be composed of axioms and rules of inference, be consistent and complete, and have an algorithm (a literally mechanisable procedure, although this virtual implication unfolds in parallel with its technical actualisation) for deciding whether or not any given theorem is true (belongs in the system). The system can be thought of as a black box that takes in strings of symbols, evaluates them, and returns other strings of symbols.
Gödel showed the impossibility of completing the Hilbert programme by:
1. Demonstrating the possibility of mapping a symbolic system onto numbers in such a way that any symbol or string of symbols has a unique identifier (number-name), the mapping being therefore reversible (allowing translation back to the original symbolic order). So any string of symbols s has a Gödel number G(s).
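As a toy illustration (my own sketch, in the spirit of Gödel’s coding rather than his exact scheme): give each symbol a code number and encode a string as a product of prime powers. Factorisation being unique, the mapping is reversible:

SYMBOL_CODES = {'1': 1, '2': 2, '3': 3, '+': 4, '=': 5, 'y': 6}  # arbitrary toy codes ('y' is a variable symbol, used later)

def primes(n):
    """Return the first n primes by trial division (fine at toy scale)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def G(s):
    """Gödel number of string s: the i-th symbol, with code c, contributes prime(i)**c."""
    result = 1
    for p, ch in zip(primes(len(s)), s):
        result *= p ** SYMBOL_CODES[ch]
    return result

print(G('1+2'))   # 2**1 * 3**4 * 5**2 = 4050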
2. Thereby, propositions about the symbolic system become arithmetical propositions (propositions about relationships between numbers, or formulae which can either be true or false):
“ ‘1+2’ is the beginning of the proposition ‘1+2=3’ ”
becomes
G[ödel number for](‘1+2’) is a factor of G(‘1+2=3’)
and in general ‘x has metamathematical relation R to z’, or R(x,z), can be given an arithmetical (that is, mechanical, programmatic) definition.
One such purely arithmetic relationship is proof or demonstration:
x proves z
Dem (x,z)
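With the toy coding above, the ‘beginning of’ relation from the example really is plain arithmetic; Dem is a vastly more elaborate relation, but arithmetical in exactly the same sense:

print(G('1+2=3') % G('1+2') == 0)   # True: G('1+2') is a factor of G('1+2=3')
print(G('1+2=3') % G('2+2') == 0)   # False: '2+2' is not the beginning of '1+2=3'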
3. Metamathematical characterization of variable substitution
sub (m,13,m) specifies the G-number of the formula resulting from a variable substitution – ‘the result of substituting the numeral for m, in place of the variable whose Gödel number is 13, in the formula whose G-number is m’.
This is not a formula but a naming of a number, a designator standing in place of carrying out the calculation. But since it designates the result of an arithmetical operation, it can itself be designated within the system.
NB: if there _is_ no variable with Gödel number 13 in the formula numbered m, then the result is simply m itself – the number of the unchanged formula (the substitution has no effect)
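A toy version of sub, continuing the sketch above (my own illustration; here the variable ‘y’ has code 6 rather than 13): decode the formula, substitute, re-encode. The ‘name’ of the result is computed entirely arithmetically, and if the variable is absent the substitution indeed has no effect:

def decode(g):
    """Invert the toy coding: recover a string from its Gödel number."""
    code_to_symbol = {c: s for s, c in SYMBOL_CODES.items()}
    s = ''
    for p in primes(32):   # more primes than any toy string needs
        if g == 1:
            break
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        s += code_to_symbol[exponent]
    return s

def sub(m, var_code, n):
    """G-number of the formula numbered m, with the variable whose code is var_code replaced by the numeral for n."""
    var = {c: s for s, c in SYMBOL_CODES.items()}[var_code]
    return G(decode(m).replace(var, str(n)))

m = G('y+1=3')
print(decode(sub(m, 6, 2)))                   # '2+1=3'
print(sub(G('1+2=3'), 6, 2) == G('1+2=3'))    # True: no 'y' occurs, so the result is just the original number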
4. So –
(x) ~Dem(x,z) – of all G-numbers x, there is none which stands in a Dem relationship to the Gödel number z – or, of all possible formulas (strings of symbols), not one proves the formula of which z is the Gödel number.
(x) ~Dem(x, sub(y,13,y)) – of all G-numbers x, there is none that stands in a Dem relationship to the G-number of the formula produced by substituting the numeral for y, in place of the variable with Gödel number 13, in the formula whose Gödel number is y.
given n, the Gödel number of the above proposition, we can construct:
(x) ~Dem(x,sub(n,13,n)) – of all G-numbers x, there is none that stands in a Dem relation to the G-number of the formula produced by substituting the numeral for n, in place of the variable with Gödel number 13, in the formula whose Gödel number is n – the formula denoted by sub(n,13,n) being, of course, this very same (x) ~Dem(x,sub(n,13,n)).
Thus the proposition says of itself that it is unprovable: if it is provable, the system is inconsistent; if it is unprovable (and therefore true), the system is incomplete.
II. CHAITIN: FIXED POINTS AND PARADOX-AS-VIRUS
Chaitin generalises the Gödel procedure, showing how in a programming language you can construct a ‘fixed point’ (an expression which yields itself), first by creating a function that ‘quotes’ its argument twice:
function f { given variable x, output “x” “x”;}
so that applying it to itself:
(f f)
yields itself “quoted” as a string twice:
“f” “f”
forcing an evaluation of these quoted strings, however (i.e. running them through the system as code):
eval (f f)
precipitates an attempt to process the result “f” “f” as code, leading to a copy of the function ready to process itself as input:
{given variable x, output “x” “x”;} {given variable x, output “x” “x”;}
…or
(f f)
so that eval (f f) == (f f), the function yields itself or is a “fixed point”; any number of eval eval eval eval (f f) will always yield (f f).
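A minimal runnable transcription of this fixed point in Python (my own sketch, not Chaitin’s LISP: ‘quoting’ becomes repr, and the pair becomes an application of the code to its own quotation):

# f as a string: given source text x, output (x) applied to a quotation of x
f = "lambda x: '(' + x + ')' + '(' + repr(x) + ')'"

expr = eval(f)(f)             # (f f): apply f to its own source text
print(expr)                   # (lambda x: ...)('lambda x: ...')
print(eval(expr) == expr)     # True: eval (f f) == (f f), however many evals you pile on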
Chaitin uses a biological metaphor for this process: “the first f is an organism, and the second f, the one that’s copied twice, that’s the genome. In other words, the first f is an organism, and the second f is its DNA!… Just as in biology, where an organism cannot copy itself directly but needs to contain a description of itself, the self-reproducing function f cannot copy itself directly (because it cannot read itself – and neither can you [by which presumably Chaitin means that it is unable to process itself as process]). So f needs to be given a (passive) copy of itself.”
In Gödel’s argument, the proposition
(x) ~Dem (x,z) – there is no formula that proves the formula denoted by G-number z
needed to be given a copy of itself in the form of a numerical designator (its own G-number); this was done by the underhand method of using the substitution subroutine to specify the result of the formula-with-substitution (although there would be no actual substitution!):
(x) ~Dem(x,sub(n,13,n))
can then be seen as a viral algorithm that reproduces itself.
Later on Chaitin remarks: “(f f) works in an environment that has a definition for f, but that’s cheating [the analogue would be if Gödel already had at his disposal a readymade designator for the proposition]. What I want is a standalone… expression that reproduces itself. But f produces only a stand-alone version of itself, and _that_ is the actual self-reproducing expression. (f f) is like a virus that works only in the right environment (namely in the cell that it infects), because it’s too simple to work on its own.”
The ‘cell environment’ is the ability of the system to interpret data as code; to interpret bits of DNA as instructions. The analogue in a programming language is the ability to evaluate a string as a programming expression (which in most programming languages is forced explicitly by the command eval(“x”)). This ability to translate from data to code is both a strength and a vulnerability in programming languages and biological systems alike (it allows both for evolution and for viral corruption). For instance, if a web application contains an eval(x) statement, there is always the possibility that a user can access the x variable, allowing them access not merely to the prophylactically-protected variable-space in the runtime of the application but to the executing power of the programming language itself. We could compare this disastrous situation with the breaching of the Weismann barrier, meaning that the cell, rather than producing data which is ported externally to create new cells, could itself be ‘reprogrammed’ by its data.
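A deliberately minimal sketch of this disastrous situation (the variable names are hypothetical; any real application differs in detail):

# user_input was meant to arrive as inert data...
user_input = "__import__('os').getenv('SECRET_KEY')"
# ...but eval hands its author the executing power of the language itself:
result = eval(user_input)   # data has crossed over into code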
In Gödel’s system part of the ‘cell environment’ is the power of the number-designator to be both a name for an expression and a number in an expression. This was introduced in the phase of the argument where it was shown that meta-mathematical propositions could be dealt with inside the system, thereby collapsing the transcendent relationship of statements about the system to statements within the system.
Chaitin argues that it is the self-replicating nature of Gödel’s argument that is the most important; the fact that he needs to devise an underhand way to make this work (arithmetizing metamathematics) is, like the actual system of Gödel numbering, incidental, despite its undeniable elegance.
Given the assumption that there is such a thing as a valid-proof-checker (a minimal requirement for any axiomatic system), and a suitable medium (one in which you can both talk about expressions and evaluate them – both name and process strings – in other words, where data and code can flow into each other), a viral program will always be able to assert its own unprovability. The details of the system do not matter at all.
Abstractly, then, start by defining a statement that asserts, of whatever you give it, that the formula it represents, applied to itself, is unprovable:
function g {given variable x, output ‘it is impossible to prove the formula designated by “x” “x”’;}
and then you apply it to itself:
(g g)
yielding
it is impossible to prove the formula designated by “{function g}” “{function g}” [this is the expression that asserts its own unprovability.]
next we extract from the output the part where the output ‘names’ itself:
last-two-segments-of (g g)
yielding
“{function g}” “{function g}”
and evaluate them
eval last-two-segments-of (g g)
yielding an ‘uncoated’ g applied to itself
{function g} {function g}
which, of course, is identical to
(g g)
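The same dance, runnable in Python (my own sketch; the ‘sentence’ is a tuple whose last two segments are the quoted copies of g):

g = "lambda x: ('it is impossible to prove the formula designated by', x, x)"

sentence = eval(g)(g)            # (g g): the sentence that names itself
claim, a, b = sentence           # last-two-segments-of (g g)
print(eval(a)(b) == sentence)    # True: evaluating the quoted copies yields (g g) again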
III. TURING
Turing’s argument, unlike Gödel’s, does not rely on any technical knowledge of the inside of the system and can be said to be a purer form of virus.
It takes the same form – a piece of code contains a description of the halting algorithm/proof-checker within itself:
function turing {given variable ‘x’, create the representation “x” “x” internally; using the halting algorithm, determine whether the program it describes halts; then do the opposite}
once again when run on itself:
turing turing
yields (internally to the program, this time):
“turing” “turing”
if the halting algorithm says the program this describes will halt, then eval (“turing” “turing”) [creating an infinite loop]
if the halting algorithm says the program this describes will not halt, then stop [halt!]
with the resulting, paradoxical, output.
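Made concrete in Python (the oracle halts is exactly the hypothesis under refutation, so the stub body below is a stand-in; Turing shows no correct body can exist):

def halts(source, argument):
    """Claimed decider: True iff eval(source)(argument) would halt."""
    return True   # placeholder, necessarily wrong on some input

def turing(x):
    if halts(x, x):       # if the decider says ("x" "x") halts,
        while True:       # ...do the opposite: loop forever
            pass
    else:
        return            # if it says ("x" "x") loops, halt at once

# Applied to a quotation of its own source, turing does the opposite of
# whatever halts predicts for (turing turing): no decider survives.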
IV. FROM VIRUS TO OMEGA
We’ve seen how Gödel demonstrated incompleteness using a sneaky virus, and how Turing’s uncomputability result cemented this by the same method, proving that there is ‘no answer’ to the halting problem and getting incompleteness as a side-effect.
The mechanical immanence of process and processed is no surprise to us: after all, everything that happens on a computer is on one plane: ultimately it is all flattened onto a memory device that is a serial arrangement of binary bits. (And the halting problem is a real computational problem (how long does an operating system wait before deciding that an application has crashed?))
Rather than using an artificial device to ‘break’ axiomatic systems or to perversely break the proof-checker, Chaitin will demonstrate how one can calculate the probability of a program’s halting, and show that this probability is strictly random: there is no rule that can predict or generate it; each bit is totally disconnected from the others around it; the only way it can be produced or recorded is by producing or recording it a bit at a time; it is irreducibly, uncalculably real.
He does this by considering an axiomatic system – that is, a programming language – whose programs are binary strings. We need to calculate the probability that a randomly chosen program will halt. Chaitin calls this probability Ω and defines it as
Ω = Σ_{p : U(p) halts} 2^(−|p|)
(Omega is equal to the summation of 2^(−|p|) – |p| being the program’s length in bits – over all programs p that halt on the universal machine U.)
To pick a program at random to see whether it will halt – and we are assuming that the programs in this system are binary – you just need to toss a coin for each bit of the program. Ω is the probability that a program created randomly like this will actually do something, rather than getting into an infinite loop and never terminating.
Let’s say you have a list of all the programs which halt:
0001
1001110
0111000
since the probability of getting a 0 or 1 for each bit is 1/2 (like tossing a coin), the probability of getting program one is 1/2^4, program two 1/2^6 and program three 1/2^6. So the probability of the computer halting when running a random program, or the Ω of the system, is simply all of these probabilities added together:
Ω = 1/2^4 + 1/2^6 + 1/2^6
or, in binary:
.0001 (probability of getting program one, 4 bits long)
.000001 (probability of getting program two, 6 bits long)
.000001 (probability of getting program three, 6 bits long)
———-
.000110
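The same sum, checked in Python:

probs = [2**-4, 2**-6, 2**-6]
print(sum(probs))   # 0.09375, i.e. .000110 in binary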
One important condition that Chaitin had to add is that for this to work programs must be ‘self-delimiting’. This means that no extension of a valid program is a valid program; if 001 halts, then we don’t count 0011 or 0010 or 0010000100101010. So each halting program in fact chops out a whole swathe of others. In this way, the limit case would be if the two shortest programs in a binary system:
0
1
halted: then the probability of halting would be
Ω = 1/2^1 + 1/2^1
or, in binary:
.1
.1
——
1.0
Ω = 1 !
without the self-delimiting clause, Ω can diverge to infinity rather than converging to a probability 0 < Ω < 1 (counting every binary string, each length n alone contributes 2^n × 2^(−n) = 1 to the sum; over a prefix-free set, Kraft’s inequality bounds the total by 1). A consequence of the definition is that if we know the first _n_ bits of Ω, we can settle the halting question for every program up to _n_ bits long.
Chaitin’s algorithm for calculating Ω works like this:
for every value of the parameter K = 1, 2, 3, 4, 5…
run every possible program up to K bits in size for K seconds.
This computes a _lower bound_ on the halting probability (obviously more accurate as K increases) – it sums the contributions of every program of up to K bits that halted within K seconds.
As K increases, Ω converges to its true value. “And as soon as the first N bits are correct, you know that you’ve encountered every program up to N bits in size that will ever halt.” (Chaitin notes: “It should be mentioned that the stage K at which the first N bits of Ω are correct grows immensely quickly, in fact, faster than any computable function of N.”)
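A toy sketch of the stage-K procedure (the little ‘machine’ below is my own stand-in for Chaitin’s universal machine, and a step budget stands in for seconds; only the shape of the algorithm matters):

def run(bits, steps):
    """Toy machine: the program halts iff, read as a binary number, it is even and within the step budget."""
    n = int(bits, 2)
    return n % 2 == 0 and n <= steps

def omega_lower_bound(K):
    total = 0.0
    halted = []                              # known halting programs (kept prefix-free)
    for length in range(1, K + 1):           # every program up to K bits...
        for i in range(2 ** length):
            p = format(i, f'0{length}b')
            if any(p.startswith(q) for q in halted):
                continue                     # extensions of halting programs are not valid programs
            if run(p, K):                    # ...run for K 'seconds'
                halted.append(p)
                total += 2.0 ** -length
    return total

for K in (2, 4, 8):
    print(K, omega_lower_bound(K))           # non-decreasing lower bounds on the toy Ω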
Ω is the solution to the halting problem in its kernel form – the most highly compressed ‘answer’ that can be given to the halting question. Since each bit of Ω can only be found out by calculating it (not from any other more compact rule), each bit must be added as a separate axiom to a system that wishes to make use of it.
Each bit is “true for no reason”, in the same way an axiom is – it represents a real unground of mathematics, which is worse than the regulative grounds that fundamental axioms represent (rules that form the basis of a system but have no justification). Worse still, there are an infinite number of bits in Ω, which means an infinite number of axioms.
V. SPECULATIONS…
In his actual proof, Chaitin constructs a diophantine equation (one involving only whole numbers) that carries out this procedure, thus demonstrating that simple number theory can produce uncalculable results. He argues that this makes certain areas (at least) of mathematics quasi-empirical. But the problem now is “knowing when things are irreducible”.
[We would perhaps have to go back to Kant’s 3rd critique here and say something about pattern, teleology, Paley’s watch. The difficulty of defining information.] Chaitin quotes Leibniz’s Discours de métaphysique:
Dieu a choisi celuy qui est… le plus simple en hypotheses et le plus riche en phenomenes
[God has chosen that which is the most simple in hypotheses and the most rich in phenomena]
Mais quand une regle est fort composée, ce qui luy est conforme, passe pour irrégulier
[But when a rule is extremely complex, that which conforms to it passes for random]
(When does a rule r for creating data d become so detailed that they are indistinguishable? Algorithmic Information Theory may be the only way to address such questions.) Chaitin’s omega, the discovery of irreducible mathematical facts, uncalculable reals, is the breakdown of the principle of sufficient reason, of the basic belief that the universe is rational, that everything has a reason other than “It’s just there”… And perhaps a partial return to experimental, empirical mathematics, against the tradition of Platonic rationality. Chaitin interestingly links this to the birth of the polis:
“In ancient Greece if you wanted to convince your fellow citizens to vote with you on some issue, you had to reason with them. Which I guess is how they came up with the idea that in math you had to prove things rather than just discover them experimentally, which is all that it appears that previous cultures in Mesopotamia and Egypt did.”
Whereas Gödel and Turing’s results caused what we might call ‘formal shock’, their impact on actual mathematical practice was minimal (Gödel complained in a letter to his mother that his ideas had failed to have the impact in mathematics that Einstein’s had in physics). Mathematicians realised that these viral contraptions were very unlikely to come up in the course of any calculation, so they could simply be ignored. Chaitin’s contention is that his demonstration of Ω heralds a more serious shock to the system; it isn’t just an assertion that formal systems can be overturned by certain methods, it’s the discovery of real mathematical entities that defy systematic rationality – more of a positive fact than the aporias of incompleteness and uncomputability.
It is extremely interesting that Chaitin should quote the following, also from Leibniz:
Sans les mathématiques on ne pénètre point au fond de la philosophie.
Sans la philosophie on ne pénètre point au fond des mathématiques.
Sans les deux on ne pénètre au fond de rien.
[Without mathematics we cannot penetrate deeply into philosophy.
Without philosophy we cannot penetrate deeply into mathematics.
Without both we cannot penetrate deeply into anything.]
What is this if not Badiou’s call to arms, the re-entanglement of mathematics and philosophy? And yet Chaitin’s speculations lead in an entirely different direction to Badiou’s platonic refoundation: or rather, where Badiou seeks to found ontology on the void of being supplemented by the unknowable excess of truth, Chaitin shows that what is in excess of the ordered system of knowledge is a seething mass of uncomputable reality that can only be brought into the system on its own terms.[2]
My work on Bacon, although not initially concerned with Badiou, set out to explore an irreducible real that I believe his rationalist “materialism” cannot deal with, even explicitly rejects (as being-small-b): the point of indiscernibility between phenomenology and materialism (which, obviously, needs more careful formulation). But to give Badiou’s position the credit it deserves, to avoid being consigned to the scrapheap of phenomenological pottering, I would need to show a mathematical account of real irreducible being-small-b that supersedes the platonic-ontological account of mathematics as Being-big-B and argues for a vagabond vs royal science. To make contingent materiality, rather than ‘subjectivity’, the source of excess over dead knowledge. To overturn Badiou’s mystical excess of void (which remains entirely the property of the Gödelian aporia) in favour of an overflowing of the positive real on the full BwO, a nonorganic life from outside. I believe Chaitin’s work provides the necessary tools for this, and that in this light it appears rather[3] that it is Badiou’s position that repudiates the holocaust of mechanicity that was the enlightenment, in favour of an unknowable connection (pineal gland?) between Being and truth. And Chaitin’s speculation about a new quasi-empirical “‘biological’ complicated mathematics” is particularly poignant given Badiou’s dismissal of biology as “that wild empiricism disguised as science”[4] (as if empirical research represented some kind of unacceptable Dionysian incontinence!). Maybe the re-entanglement of philosophy and mathematics needs to go not by way of Plato but back further, to the Pythagorean brotherhood and its quasi-empirical experimentation, tracking the real of number, which had as much in common with occult numerology as with scientific rationalism.
On the side of art, the irreducible real must be accessed by abandonment to an automatism without reason via the diagram; so on the side of mathematics, it must be accessed by abandonment to an automatism without reason via the dogged pursuit of the uncomputable. The mathematician, becoming-automated, meets the artist, becoming-automated, in the uncalculable real. This real continually leaks into the system, providing the mechanosmos with what it needs to exceed reason.
[1] All of the information here is extracted (sometimes almost verbatim) from Chaitin’s many publications, all of which are available at this site in digital form. There is also a LISP interpreter and code that enables you to run versions of his proof (and Gödel’s and Turing’s) yourself.
[2] This point is made brilliantly by Ray Brassier in his paper “Nihil Unbound: Remarks on Subtractive Ontology and Thinking Capitalism”, in P. Hallward (ed.), “Think Again: Alain Badiou and the Future of Philosophy”, Continuum, 2004, to which I am indebted for my introduction to Chaitin’s work. I don’t claim to have added anything to Ray’s argument here (apart from a more detailed run-through of the Omega algorithm); these notes are mostly just clarification for my own purposes.
[3] Again, see “Nihil Unbound”.
[4] Badiou, “Mathematics and Philosophy – The Grand Style and the Little Style”, in “Theoretical Writings”, Continuum, 2004.