A Logician’s Tale

The Association for Symbolic Logic (ASL) is a professional organization of researchers in Mathematical Logic, a field that draws people from Computer Science, Artificial Intelligence and Philosophy as well as from Mathematics itself. When it is not a plague year, the ASL holds an annual meeting in North America and another in Europe. This April (2022), the North American meeting was held at Cornell University in Ithaca, NY, and this writer was invited to give a talk – not on his latest theorems this time but on certain historical aspects of an area of Mathematical Logic he last worked on forty years ago. But it was wonderful to take a break from Covid-induced solitude, to reconnect with old friends and to meet new people in the field.
A gathering like this is basically a religious event for a priesthood of scholars who believe in the magic and majesty of their subject. It is a pilgrimage drawing people from all over – English is the official language but one would hear Hebrew, Polish, Spanish and other languages in conversations and blackboard sessions throughout the meeting. The comparison can be made to the School of Pythagoras, where Philosophy and Mathematics were blended with a religious theory based on the harmony of the spheres.
The field has its mythic figures – ancients like Aristotle, medieval scholars like William of Ockham and more recent ones like George Boole, Bertrand Russell, Kurt Gödel, Alan Turing, Ludwig Wittgenstein… It has a hierarchy based on talent – and some grand old men and women. Apropos, the group at Cornell was about 25% women, 75% men; the current president of the ASL is Julia Knight, a professor at the University of Notre Dame.
As the meeting unfolds, a first impression one forms is just how far removed from ordinary earthly considerations this subject can be. Like mathematics in general, the subject is driven above all by its own internal momentum – which does make its practitioners seem to inhabit an Ivory Tower constructed by their own imaginations. Many there were talking about monstrously large infinite cardinal numbers – a mystical pursuit justified in part by the knowledge that the more one knows about the infinite, the more one can know about the ordinary integers. There was a series of talks on the frontier field of Quantum Computing, calibrating it against classical mathematical models of computability such as Turing machines. Among other topics, there were talks on Model Theory, a subject which extracts rich mathematical information from the simple fact that a field’s axioms, theorems and open problems can be expressed in a particular formal language.
But these researchers are on the faculties of elite colleges and universities; their work is funded by grants from government agencies like the National Science Foundation and the European Science Foundation. (Some young people from Europe even said that they were especially glad of the opportunity to come to a live meeting at Cornell because the travel money in their grants had to be spent this academic year!) But why all this financial support for such a seemingly marginal enterprise?
The simple answer is that research in pure mathematics has again and again proved vital to progress in the physical sciences. Historically, much mathematics developed in tandem with physics – Archimedes, Newton – but even so their work was based in turn on the geometries of Euclid and Descartes.
In a more modern context, work on the Riemannian Geometry of dimensions higher than 3 provided Einstein the tools he needed for the 4-dimensional geometry of the Theory of Relativity. Drolly put, mathematicians were traveling in space-time even before Einstein!
Yet more recently, it was work in pure math on Harmonic Analysis (the mathematics of sound) by Yves Meyer and others that had a new tool – the wavelet – ready for engineers developing high-definition digital television. The wavelet plays the role for digital broadcasting that the classical Fourier transform plays for analog radio and TV.
Apropos, Meyer was a colleague of this writer at the University of Paris (Jussieu campus) back in the 1970s – very much the Parisian intellectual: good looking, brilliant, witty and, what’s more, a very nice guy.
And this kind of anticipation of the needs of science and engineering is also true of Mathematical Logic. To start, Boolean Algebra is key to the design of computer chips. And there was Gödel’s Incompleteness Theorem, the extraordinary discovery that the axiomatizations of strong mathematical systems like Set Theory and Number Theory would necessarily fall short as oracles for discovering the truth – human intuition cannot be done away with. To accomplish this, Gödel applied mathematical methods to mathematics itself (aka meta-mathematics) to analyze algorithms and proofs, an analysis that led to computer programming as we know it today. Indeed, at a high enough level of abstraction, proofs and programs are pretty much the same thing.
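To make the chip-design remark concrete, here is a minimal sketch (this illustration's own, not from the essay) of Boolean Algebra at work in hardware: a one-bit "half adder" built purely from the Boolean operations XOR and AND, the way logic gates are combined on a chip to do arithmetic.

```python
def half_adder(a, b):
    """Add two bits using only Boolean operations; return (sum_bit, carry_bit)."""
    sum_bit = a ^ b   # XOR gate: 1 when exactly one input is 1
    carry = a & b     # AND gate: 1 only when both inputs are 1
    return sum_bit, carry

# Exhaustive check that the gate logic agrees with ordinary arithmetic:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
```

Chaining such adders (with carry propagation) yields the multi-bit adders inside every processor – pure Boolean Algebra in silicon.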
Gödel’s work was in response to Hilbert’s Program, a project which was launched by German mega-mathematician David Hilbert in the late 1920s to apply Proof Theory and its meta-mathematics to establish that the axioms of standard mathematical systems could not yield inconsistent results. Gödel, practically speaking, put an end to Hilbert’s Program although its spirit continued to motivate outstanding work in Proof Theory.
Apropos, at an ASL meeting many years ago, this writer was walking with Stephen Cole Kleene, a giant in the field and one who contributed important work on mathematical models of computability in the 1930s; when asked what motivated them back then, Kleene, an American, responded “Well, the Germans had this Proof Theory and we were just trying to catch up.”
In 1933, John von Neumann came to the recently created Institute for Advanced Study in Princeton. Von Neumann, a true polymath, worked in many areas of mathematics including Logic – Set Theory and Proof Theory, in particular. It was he who arranged for Kurt Gödel to visit the Institute on three occasions in the 1930s and then arranged a permanent position for Gödel there after the latter’s dramatic escape from Vienna in 1940 – train from Vienna to Moscow, the Trans-Siberian Railroad to Vladivostok, boat to Japan, ship to the US – before the German invasion of Russia and before Pearl Harbor. Gödel himself wasn’t Jewish but he was being accused of doing “Jewish mathematics” and his life was being threatened.
In 1936, the young British mathematician Alan Turing published a paper, “On Computable Numbers, with an Application to the Entscheidungsproblem.” Here Turing presented a new mathematical model of computation – the “automatic machine” in the paper, the “Turing Machine” today. Turing proved that this much simpler model, couched in terms of a very primitive device manipulating 0s and 1s, is equivalent to the other schemes of Gödel, Herbrand, Church and Kleene. Furthermore, Turing demonstrated the existence of a Universal Turing Machine which can emulate the operation of any Turing machine M given the description of M in 0s and 1s along with the intended input; this would prove very important barely ten years later. Turing presented his paper at Princeton and then stayed on to do a PhD under Alonzo Church, author of another important model of computation, the λ-calculus – a model far less intuitive than Turing’s but one important today in automated proof checking and other areas of Computer Science. Von Neumann tried to get Turing to stay at Princeton as a postdoc after the latter’s PhD dissertation there in 1938, but Turing went back to England, where he was soon working on breaking the codes of the German Enigma machine.
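The universal idea – one machine that runs any other machine handed to it as data – can be sketched in a few lines. The encoding below (state names, the transition table, the blank symbol) is this illustration's own, not Turing's: a generic simulator takes a machine's description as a table plus an input tape, just as the Universal Turing Machine takes the description of M in 0s and 1s along with the intended input.

```python
def run_tm(table, tape, state="start", blank="_", steps=1000):
    """Simulate the Turing machine described by `table` on input `tape`.

    `table` maps (state, symbol) -> (new_state, new_symbol, move),
    with move in {"L", "R"}. Returns the tape contents when the
    machine enters the state "halt".
    """
    tape = list(tape)
    pos = 0
    for _ in range(steps):
        sym = tape[pos]
        state, tape[pos], move = table[(state, sym)]
        if state == "halt":
            return "".join(tape)
        pos += 1 if move == "R" else -1
        if pos < 0:               # grow the tape to the left as needed
            tape.insert(0, blank)
            pos = 0
        elif pos == len(tape):    # grow the tape to the right as needed
            tape.append(blank)
    raise RuntimeError("step limit reached")

# An example machine, given purely as data: flip every bit, halt at the blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flipper, "1011_"))  # prints "0100_"
```

The point is that `flipper` is not code but data – exactly the shift in perspective that stored programming, discussed below, brought to real hardware.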
BTW Applying the Universal Turing Machine to itself opens the door to a treasure trove of paradoxical insights and results. In a similar way, Gödel’s Incompleteness Theorem relies on self-reference. Very roughly speaking, Gödel’s proof employs a stratagem reminiscent of the Liar Paradox of Antiquity: Gödel constructed a self-referential formula asserting “This statement is not provable” – if provable, it’s false; if not provable, it’s true. So if the axioms do not yield false results, Gödel’s statement is true but unprovable. (For an example of an incompleteness in mathematics that does not employ mathematical self-reference, see the Kanamori-McAloon Theorem.)
Scientists have long been involved in the design of new weapons systems: Archimedes used parabolic mirrors to create a laser-like beam of light that set the sails of Roman ships on fire in Syracuse harbor; Leonardo supplemented his income by sketching visionary weapons for Ludovico Sforza, the Duke of Milan. But WW II was a watershed, when the military and governments realized that new modern weapons systems required scientists and mathematicians in addition to military engineers.
The most spectacular wartime weapons effort was the Manhattan Project for constructing atomic weapons. John von Neumann worked on the Manhattan Project, as did the logician Stanislaw Ulam. Ulam started his academic career in Lviv, working in Set Theory on very large infinite cardinal numbers – yes, in that same city in western Ukraine that today is subject to constant bombardment, and yes, those same monstrous infinities that were the subject of several exciting talks at Cornell.
Apropos, Ulam wrote a breezy autobiography Adventures of a Mathematician (1976). At one point he came to Paris and joined a couple of us logicians for dinner at a Basque restaurant near the Panthéon. We tried to get him to tell us whether the Monte Carlo algorithms he had invented were done in connection with his work on the hydrogen bomb – he was charming but evasive. However, he did write down our names most carefully; presumably, were we to become famous, we would get a mention in his next book!
BTW The Soviet Union followed suit in its post-War support for Mathematics and the Soviet School (already strong before the War) became second to none. Mathematics and Theoretical Physics were very attractive areas for young researchers in the USSR since these were the only areas where spying government apparatchiks would never be able to understand what you were actually doing and therefore would have to leave you alone.
In the early days of modern computing machinery, one had to rewire the machine, replace external tapes, swap plugboards or reset switches for the next application. This would change. Toward the end of the War, von Neumann joined the team at the University of Pennsylvania under John Mauchly and J. Presper Eckert, the team that built the pioneering ENIAC (1945). For the next government-funded project, von Neumann wrote up a report on the design of the next digital computer, “First Draft of a Report on the EDVAC”; inspired by the Universal Turing Machine, in this report von Neumann introduced “stored programming,” where you just input the algorithm along with the data into the memory of the machine – the algorithm and the data live in the same universe after all. This was a crucial step forward. Today, the role of the Universal Turing Machine is played by the operating system of the computer or phone: MS-DOS, Windows, macOS, Unix, Linux, Android, iOS.
BTW In the post-War period, US courts were revealed to have a Platonistic philosophy of mathematics – who knew? It was ruled that an algorithm could not be patented because the mathematical theorems underlying the algorithm were already true before their proofs came to light – mathematicians were thus discoverers and not inventors. Later the courts patched things up with industry by declaring that one could patent the implementation of an algorithm!
After the war, US government and military financing of university research continued to pay off in spectacular fashion: e.g. the Internet and Artificial Intelligence (AI). AI itself had its roots in Mathematical Logic and the first to warn the world that machine intelligence was destined to outstrip human intelligence was Alan Turing. In his 1951 talk at the University of Manchester entitled Intelligent Machinery: A Heretical Theory, Turing spoke of machines that will eventually surpass human intelligence: “once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” This eerie event is now called the Singularity and “experts” predict it will come soon after 2030.
Set Theory, Model Theory, Proof Theory and other areas of Logic also prospered in the post-War era; interest in the field spread and new centers of Logic emerged in the US and as far abroad as Novosibirsk in Siberia. In 1966, Paul Cohen received the Fields Medal (the mathematicians’ “Nobel Prize”) for his elegant work on the Continuum Hypothesis  – this was the subject of the very first in a list of 23 important open problems drawn up by that same David Hilbert in 1900, problems whose solutions would determine the directions Mathematics would take.
Apropos, this writer used Cohen’s techniques to settle a form of the Continuum Hypothesis problem that had been raised by work of Gödel. This earned him an audience with Gödel where they discussed set-theoretic axioms to extend the power of mathematics; when one of this writer’s suggestions was proving too convoluted, Gödel simply said “That won’t work; it has to be beautiful to be true.”
Today, AI and other fields that originated in Mathematical Logic have merged with Computer Science, and new fields have been created – such as Complexity Theory, which analyzes the run-time of algorithms; this links in turn to modern cryptography such as that behind the omnipresent https://. Also in this intersection of Logic and Computer Science there is ongoing work on automated proof checking: this involves new logics, new constraints on the structure of proofs and the conversion of proofs into programs – right back where this all started in Proof Theory.
But can one say that the kind of work presented at the Cornell ASL meeting will have such pervasive consequences as that from years past? We do not know, of course, but mathematics is the best tool humans have for understanding the physical universe both in the large and in the small. Indeed, people always marvel at how the Mathematics fits the Physics so perfectly. Some skeptics claim that human intelligence is limited to the point that mathematical models used for Physics are simply the only ones that we ourselves can understand. Others, more traditional in their philosophy, hold that Mathematics is just the best way for us to touch the mind of God.

2 thoughts on “A Logician’s Tale”

  1. Fascinating discussion. What is your view as to whether AI can successfully navigate the issues Gödel raised with regard to completeness and consistency? Is it possible there are certain problems which AI algorithms cannot successfully “solve” due to their inability to recognize their own mathematical limitations? (See, for example: https://www.sciencedaily.com/releases/2022/03/220317120356.htm)

    1. Excellent point. The reference you give deals with worries that AI systems can’t be trusted. To make them trustworthy, an AI system would IMHO have to be able to provide an explanation for its decisions – but that is one of those exponential explosion problems for logic-based systems, so this is non-trivial. That exponential explosion phenomenon (P ?= NP and all that) is a version of the Gödel-Turing paradoxes where “exponential-time” replaces “unsolvable.” But I like your formulation – in my version, this becomes something like, “just as Set Theory cannot prove its own consistency, a neural net might not be able to provide a good solution for an application because that would require a net that ‘understood’ neural nets.”
