AI I: Pre-History — 500 BC to 1950 AD

Artificial Intelligence (AI) is the technology that is critical to bringing humanity to the Promised Land of the Singularity, the point at which machines will be as intelligent as human beings.
The roots of modern AI can be traced to attempts by classical philosophers to describe human thinking in systematic terms.
Aristotle’s syllogism exploited the idea that there is structure to logical reasoning: “All men are mortal; Socrates is a man; therefore Socrates is mortal.”
The ancient world was puzzled by the way syntax could create semantic confusion; for example, The Liar’s Paradox: “This statement is false” is false if it is true and true if it is false.
Also, in the classical world, there was what became the best-selling textbook of all time: Euclid’s Elements, in which all of plane geometry flows logically from axioms and postulates.
The greatest single advance in computational science took place in northern India in the 6th or 7th century AD – the invention of the Hindu-Arabic numerals. The Hindu sages encountered the need for truly large numbers for Vedic cosmology – e.g. a day for Brahma, the creator, endures for about 4,320,000,000 solar years (in Roman numerals that would take over 4 million M’s). This advance made its way west across the Islamic world to North Africa. The father of the teenage Leonardo of Pisa (aka Fibonacci) was posted to Bejaia (in modern Algeria) as commercial ambassador of the Republic of Pisa. Fibonacci brought this number system back to Europe and in 1202 published his Book of Calculation (Liber Abaci), which introduced Europe to its marvels. With these numerals, all the computation was done by manipulating the symbols themselves – no need for an external device like an abacus or the calculi (pebbles) of the ancient Romans. What is more, with these 10 magic digits, as Fibonacci demonstrates in his book, one could compute compound interest, and the Florentine bankers further up the River Arno from Pisa soon took notice.
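For a sense of the kind of computation this made routine, here is the compound interest formula in modern notation (a modern restatement for illustration, not Fibonacci’s own presentation): with principal P, interest rate r per period and n periods, a hypothetical 100 lire at 4% for 3 years grows as

```latex
A = P\,(1 + r)^{n},
\qquad\text{e.g.}\quad
100 \times (1.04)^{3} \approx 112.49
```

a calculation carried out entirely by manipulating digits on paper, with no abacus required.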
In the Middle Ages in Western Europe, Aristotle’s Logic was studied intently as some theologians debated the number of angels that could fit on the head of a pin while others formulated proofs of the existence of God, notably St. Anselm and St. Thomas Aquinas. A paradoxical proof of God’s existence was given by Jean Buridan: consider the following pair of sentences:
    God exists.
    Neither of the sentences in this pair is true.
Since the second sentence cannot be true (if it were, it would refute itself), at least one of the pair must be true, and that can only be the first: the existence of God follows. Buridan was a true polymath, making significant contributions to multiple fields in the arts and sciences – a glamorous and mysterious figure in Paris life, though an ordained priest. His work on Physics was the first serious break with Aristotle’s cosmology; he introduced the concept of “inertia” and influenced Copernicus and Galileo. The leading Paris philosopher of the 14th Century, he is known for his work on the doctrine of Free Will, a cornerstone of Christianity. However, the name “Buridan” itself is actually better known for “Buridan’s Ass,” the donkey who could never choose which of two equally tempting piles of hay to eat from and died of starvation as a result of this “embarrassment of choice.” The attribution is apparently specious, contrived by his opponents, since the tale does not appear in any of Buridan’s writings; it seems to mock Buridan’s teaching on free will, namely that simply realizing which of two choices was evil and which was moral was not enough: an actual decision still required an act of will.
Doubtless equally unfounded is the tale that Buridan was stuffed in a sack and drowned in the Seine by order of King Louis X because of his affair with the Queen, Marguerite of Burgundy – although this story was immortalized by the immortal poet François Villon in his Ballade des Dames du Temps Jadis, the poem whose refrain is “Where are the snows of yester-year” (Mais où sont les neiges d’antan); in the poem Villon compares the story of Marguerite and Buridan to that of Héloïse and Abélard!
In the late Middle Ages, Ramon Llull, the Catalan polymath (father of Catalan literature, mathematician, artist whose constructions inspired the work of superstar architect Daniel Libeskind), published his Ars Magna (1305), which described a mechanical method to help in arguments, especially those aimed at winning Muslims over to Christianity.
François Viète (aka Vieta in Latin) was another real polymath (lawyer, mathematician, Huguenot, privy councilor to kings). At the end of the 16th Century, he revolutionized Algebra, replacing the awkward Arab system with a purely symbolic one; Viète was the first to say “Let x be the unknown,” and he made Algebra a game of manipulating symbols. Before that, in working out an Algebra problem, one actually thought of “10 squared” as a 10-by-10 square and “10 cubed” as a 10-by-10-by-10 cube.
Llull’s work is referenced by Gottfried Leibniz, the German polymath (great mathematician, philosopher, diplomat), who in the 1670s proposed a calculus for philosophical reasoning based on his idea of a Characteristica Universalis, a perfect language which would provide for a direct representation of ideas.
Leibniz also references Thomas Hobbes, the English polymath (philosopher, mathematician, very theoretical physicist). In 1655, Hobbes wrote: “By reasoning, I understand computation.” This assertion of Hobbes is the cornerstone of AI today; cast in modern terms: intelligence is an algorithm.
Blaise Pascal, the French polymath (mathematics, philosophy, theology), devised a mechanical calculation engine in 1645; in the 1800s, Charles Babbage and Ada Lovelace worked on a more ambitious project, the Analytical Engine, a proposed general computing machine.
Also in the early 1800s, there was the extraordinarily original work of Évariste Galois. He boldly applied one field of Mathematics to another, the Theory of Groups to the Theory of Equations. Of greatest interest here is that his work showed that there were problems for which no appropriate algorithm existed. With his techniques, one can show, for example, that there is no general method to trisect an angle using a ruler and compass – Euclid’s Elements presents an algorithm of this type for bisecting an angle. Tragically, Galois was embroiled in the violent politics surrounding the overthrow of Charles X and was killed in a duel at the age of twenty in 1832. He is considered to be the inspiration for the young hero of Stendhal’s novel Lucien Leuwen.
Later in the 19th Century, we have George Boole, whose calculus of Propositional Logic is the basis on which computer chips are built, and Gottlob Frege, who dramatically extended Boole’s Logic to First Order Logic, which allowed for the development of systems such as Alfred North Whitehead and Bertrand Russell’s Principia Mathematica and other Set Theories; these systems provide a framework for axiomatic mathematics. Russell was particularly excited about Frege’s new logic, which is a notable advance over Aristotle’s: while Aristotle could prove that Socrates was mortal, the syllogism cannot deal with binary relations, as in
    “All lions are animals; therefore the tail of a lion is a tail of an animal.”
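In first-order notation (a modern rendering chosen here for illustration, not Frege’s own Begriffsschrift symbolism), the inference can be written with a binary predicate TailOf(y, x) for “y is a tail of x”:

```latex
\forall x\,\bigl(\mathrm{Lion}(x) \rightarrow \mathrm{Animal}(x)\bigr)
\;\vdash\;
\forall x\,\forall y\,\Bigl(\mathrm{Lion}(x) \wedge \mathrm{TailOf}(y,x)
  \rightarrow \exists z\,\bigl(\mathrm{Animal}(z) \wedge \mathrm{TailOf}(y,z)\bigr)\Bigr)
```

It is precisely this ability to express two-place relations and to nest quantifiers that the syllogism lacks.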
Aristotle’s syllogistic Logic is still de rigueur, though, in the Vatican, where some tribunals still require that arguments be presented in syllogistic format!
Things took another leap forward with Kurt Gödel’s landmark On Formally Undecidable Propositions of Principia Mathematica and Related Systems, published in 1931 (in German). In this paper, Gödel builds a programming language and gives a data structuring course where everything is coded as a number (formulas, the axioms of number theory, proofs from the axioms, properties like “provable formula”, …). Armed with the power of recursive self-reference, Gödel ingeniously constructed a statement about numbers that asserts its own unprovability. Paradox enters the picture in that “This statement is not provable” is akin to “This sentence is false” as in the Liar’s Paradox. All this self-reference is possible because with Gödel’s encoding scheme everything is a number – formulas, proofs, etc. all live in the same universe, so to speak. First Order Logic and systems like Principia Mathematica make it possible to apply Mathematics to Mathematics itself (aka Metamathematics) which can turn a paradox into a theorem.
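To give a flavor of the encoding, here is a toy sketch in Python (the symbol set, the codes and the prime-power packing are illustrative choices for this post, not Gödel’s actual numbering):

```python
# Toy Gödel numbering: give each symbol a code, then pack a formula
# (a sequence of symbols) into one natural number as a product of prime powers.
SYMBOL_CODES = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6, 'x': 7}

def primes():
    """Yield 2, 3, 5, 7, 11, ... by trial division (plenty fast for short formulas)."""
    found = []
    k = 2
    while True:
        if all(k % p for p in found):
            found.append(k)
            yield k
        k += 1

def godel_number(formula: str) -> int:
    """Encode a formula as 2**c1 * 3**c2 * 5**c3 * ... (one prime per symbol position)."""
    n = 1
    for p, symbol in zip(primes(), formula):
        n *= p ** SYMBOL_CODES[symbol]
    return n

# The formula "S0=S0" ("1 = 1") becomes a single ordinary number, so statements
# about formulas and proofs become statements about numbers:
print(godel_number("S0=S0"))  # 2**2 * 3**1 * 5**3 * 7**2 * 11**1 = 808500
```

Since the factorization is unique, the formula can be recovered from its number, which is what lets properties like “provable formula” be expressed in arithmetic.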
For a theorem of ordinary mathematics that is not provable from the axioms of number theory but is not self-referential, Google “The Kanamori-McAloon Theorem.”
The Incompleteness Theorem rattled the foundations of mathematics but also sowed the seeds of the computer software revolution. Gödel’s paper was quickly followed by several different formulations of a mathematical model of computability, that is, a mathematical definition of the concept of algorithm: the Herbrand-Gödel Recursive Functions, Church’s lambda-calculus, Kleene’s μ-recursive functions – all were quickly shown to be equivalent, in the sense that any algorithm in one model has an equivalent algorithm in each of the others. That these models capture the notion of algorithm in its entirety is known as Church’s Thesis or the Church-Turing Thesis.
In 1936 Alan Turing published a paper “On Computable Numbers, with an Application to the Entscheidungsproblem” – German was still the principal language for science! Here Turing presented a new mathematical model for computation – the “automatic machine” in the paper, the “Turing Machine” today. Turing proved that this much simpler model of computability, couched in terms of a device manipulating 0s and 1s, is equivalent to the other schemes. Furthermore, Turing demonstrated the existence of a Universal Turing Machine which can emulate the operation of any Turing machine M given the description of M in 0s and 1s along with the intended input: the Universal Turing Machine will decipher the description of M and perform the same operations on the input as M would, thus yielding the same output. This is the inspiration for stored programming – in the early days of computing machinery one had to rewire the machine, change external tapes, swap plugboards or reset switches if the problem was changed; with Turing’s setup, you just load the algorithm along with the data into the memory of the machine – the algorithm and the data live in the same universe.

John von Neumann joined the team under engineers John Mauchly and J. Presper Eckert and in 1945 wrote up a report on the design of a new digital computer, “First Draft of a Report on the EDVAC.” Stored programming was the crucial innovation of the Von Neumann Architecture. Because of the equivalence of the mathematical models of computability and Church’s Thesis, von Neumann also knew that this architecture captured all possible algorithms that could be programmed by machine – subject only to limitations of speed and size of memory. (In the future, though, quantum computing could challenge the extended, complexity-theoretic version of Church’s Thesis.)
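As a rough illustration of the stored-program idea in modern code (a minimal sketch, not Turing’s 1936 construction), the interpreter below plays the role of the universal machine, and the particular machine M is just a table of data handed to it along with its input tape:

```python
def run(machine, tape, state='start', blank='_', max_steps=10_000):
    """Interpret any machine given as data: (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))   # the tape, stored sparsely
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = machine[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# A tiny machine M, expressed purely as data: flip every bit, halt at the first blank.
M = {
    ('start', '0'): ('1', 'R', 'start'),
    ('start', '1'): ('0', 'R', 'start'),
    ('start', '_'): ('_', 'R', 'halt'),
}

print(run(M, '1011'))  # -> '0100'
```

Swapping in a different table changes what is computed without touching the interpreter at all; the program and the data live in the same universe, just as described above.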
With today’s computers, it is the operating system (Windows, Unix, Android, macOS, …) that plays the role of the Universal Turing Machine. Interestingly, the inspiration for the Turing Machine was not a mechanical computing engine but rather the way a pupil in an English school of Turing’s time used a European style notebook with graph paper pages to do math homework.
BTW, Gödel and Turing have both made it into motion pictures. Gödel is played by Lou Jacobi in the rom-com I.Q. and Turing is played by Benedict Cumberbatch in The Imitation Game, a movie of a more serious kind.
Computer pioneers were excited by the possibility of Artificial Intelligence from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test in which, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” (whence the movie title). Futurologist Ray Kurzweil (now at Google) has predicted that a machine will pass the Turing Test by the year 2029. But gurus have been wrong in the past. Nobel Prize-winning economist and AI pioneer Herbert Simon boldly predicted in 1957 that computer chess programs would outperform humans within “ten years,” but that was wrong by some thirty years!
In his 1951 talk at the University of Manchester entitled Intelligent Machinery: A Heretical Theory, Turing spoke of machines that will eventually surpass human intelligence: “once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” From the beginning, the Singularity was viewed with a mixture of wonder and dread.
But the field of AI wasn’t formally founded until 1956; it was at a summer research conference at Dartmouth College, in Hanover, New Hampshire, that the term Artificial Intelligence was coined. Principal participants at the conference included Herbert Simon as well as fellow scientific luminaries Claude Shannon, John McCarthy and Marvin Minsky.
Today, Artificial Intelligence is bringing progress and miracles to humankind on the one hand and posing an existential threat to humanity on the other. Investment in AI research is significant and proceeding apace in industry and at universities; the latest White House budget includes $1.1B for AI research (NY Times, Feb. 17, 2020), reflecting in part the military’s interest in all this.
The principal military funding agency for AI has been the Defense Advanced Research Projects Agency (DARPA). According to a schema devised by DARPA people, AI has already gone through two phases and is at the beginning of the 3rd Phase now. The Singularity is expected in a 4th Phase, which will begin around 2030, according to those who know.
More to come.
N.B. For the next post in this series, click HERE.

3 thoughts on “AI I: Pre-History — 500 BC to 1950 AD”

  1. Nice summary of the history – clear even to a non-mathematician like me. 🙂

    I have considered this coming “singularity” for some time, have even written about AI in general (Master’s Thesis, Theology: “Just What Makes Us Human?”) and would make only these simple statements below regarding it. And, before I do, I would suggest any reader become comfortable with the subtle differences in the meaning of these words: Knowledge, Intelligence, Judgment, Wisdom… and Intuition.

    If one looks at the definitions it is easy to see how, by definition, mechanical devices – AI, if you will – can eventually, and indeed will, surpass man in the first four categories, where information alone – and the logic to process it – are the fundamental determinants: more info, processed faster, probabilities calculated, judgments made by the AI rather than by man… and we already see them beating some specialists at stock market and other predictions.

    But we are not only physical beings, filled with data and processing it, for parallel to our “mechanics” we have “feelings” about what we know, judgments based not on facts but, say, on how the sky looked – a rainbow, perhaps – when we thought about a topic; or when we had that strawberry-rhubarb pie as we thought about an issue; or a chemist – in a bit of a drugged haze – got the structure of benzene solved while riding a bus; or when our grandchild called in the middle of our pondering whatever… that changes how we see that data trail.

    THAT AI will never do, never can do, and it – for sci-fi lovers – is the quintessential Kirk/Spock, Picard/Data difference.

    1. Along with “feelings,” “intuitions,” etc., our intelligence is much richer than what we are consciously thinking. Perhaps work with AI systems will expose processes of intellection that we are unconsciously using, which would make us know ourselves all the better.

      1. Surely to some extent, we can bring what is unconscious into consciousness as you say, but I would see that as being subject to a limit, as an asymptote approaches a limit: Close but no cigar.

        Crossing over consciously, willing it, is like not-thinking about a blue wolf. A parallel is the leap of faith some of us take re how we approach supernatural truth. AI, as I understand it, is inherently incapable of that leap… a different way to say what I said above.
