AI V: The Random and the Quantum

As children, we first encounter randomness in flipping a coin to see who goes first in a game or in shuffling cards for Solitaire – nothing terribly dramatic. Biological evolution, on the other hand, uses randomness in some of the most important processes of life, such as the selection of genes for transfer from parent to child. Physical processes also exhibit randomness, as in the collisions of gas molecules or the “noise” in communication signals. And randomness is a key concept in Quantum Mechanics, the physics of the subatomic realm: the deterministic laws of Newton are replaced by assertions about likely outcomes, formulated using mathematical statistics and probability theory – worse, one of its fundamental principles is Heisenberg’s Uncertainty Principle.
But randomness has given Artificial Intelligence (AI) researchers and others a rich set of new algorithmic tools and some real help in dealing with the issues raised by exponential growth and combinatorial explosion.
Indeed, with the advent of modern computing machinery, mathematicians and programmers quickly introduced randomization into algorithms; the practice is coeval with the Computer Age itself – mathematicians working laboriously by hand simply cannot weave randomness into an algorithmic process. Randomized algorithms require long sequences of random numbers (bits, actually). Nature manages this brilliantly, but it is a challenge for programmers: computing pioneer John von Neumann pontifically declared, “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.” But sin they did, and techniques using random numbers, felicitously named “Monte Carlo algorithms,” were developed early on and quickly proved critical in the Manhattan Project and in the design of the hydrogen bomb. (“Las Vegas algorithms” came later!)
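The flavor of a Monte Carlo algorithm can be conveyed in a few lines. The sketch below (the function name, sample count and seed are illustrative choices of mine) estimates π by throwing random points at the unit square and counting how many land inside the quarter circle:

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: sample random points in the unit
    square and count the fraction falling inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # (area of quarter circle) / (area of square) = pi / 4
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))  # prints a value close to 3.14
```

More samples buy more accuracy, but slowly: the error shrinks only like one over the square root of the sample count – good enough, famously, for the Manhattan Project.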
Monte Carlo methods are now in widespread use in fields such as physics, engineering, climate science and (wouldn’t you know it!) finance. In the Artificial Intelligence world, Monte Carlo methods are used in systems that play Go, Tantrix, Battleship and other games. Randomized algorithms are employed in machine learning, robotics and Bayesian networks. AI researchers have also reverse-engineered some of Nature’s methods and applied them to problems subject to combinatorial explosion. For example, the process of evolution itself uses randomness for mutations and gene crossover to drive natural selection; genetic algorithms, introduced by John Holland in the 1960s, use these stratagems to evolve solutions to all sorts of practical challenges – e.g. scheduling, routing, machine learning, etc.
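A genetic algorithm in miniature fits on a page. This toy version of the classic “OneMax” exercise (the names and parameters are my own, purely for illustration) evolves random bit strings toward all-ones using exactly the stratagems just mentioned: random crossover, random mutation, and fitness-based selection:

```python
import random

def genetic_onemax(length=20, pop_size=30, generations=100, seed=1):
    """Tiny genetic algorithm: evolve bit strings toward all-ones."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of 1 bits in the string

    for _ in range(generations):
        def pick():
            # Tournament selection: keep the fitter of two random picks
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)        # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):               # random mutation
                if rng.random() < 0.01:
                    child[i] ^= 1
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = genetic_onemax()
print(sum(best), "ones out of 20")
```

The same loop – select, cross over, mutate, repeat – drives real applications in scheduling and routing; only the fitness function changes.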
The phenomenon of exponential growth continues to impede progress in AI, in Operations Research and other fields. But hope springs eternal and perhaps the physical world can provide a way around these problems. And, in fact, quantum computing is a technology that has the potential to overcome the combinatorial explosion that limits the practical range of mathematical algorithms. The role of randomness in Quantum Mechanics is the key to the way Quantum Mechanics itself can deal with the problem of an exponential number of possibilities.
Just as the Theory of Relativity is modern physics at a cosmic scale, Quantum Mechanics is modern physics at the atomic and sub-atomic scale. A rich, complex, mind-bending theory, it started with the work of Max Planck in 1900 to account for the mysterious phenomenon of “black-body radiation” – mysterious in that the laws of physics could not account for it properly. Planck solved this complicated problem by postulating that energy levels could only increase in whole number multiples of a minimal increment, now called a quantum. Albert Einstein was awarded his Nobel Prize for work in 1905 analyzing an equally mysterious phenomenon, the “photo-electric effect,” in terms of light quanta (aka photons); he showed that light itself is absorbed and emitted in these discrete quanta rather than continuously.
The first pervasive practical application of quantum mechanics was the television set – in itself a confounding tale of corporate gamesmanship and human tragedy; for the blog post on this, click HERE. Then came the laser, the transistor, the computer chip, etc.
Digital computers cannot simulate quantum systems of any complexity, since they are confronted with an exponential number of possibilities to consider – another example of combinatorial explosion. In a paper published in 1982, Simulating Physics with Computers, Nobel laureate Richard Feynman looked at this differently: if simulating a quantum experiment is a computational task too difficult for a digital computer, then the quantum experiment itself must be performing a prodigious act of computation. He concluded that to simulate quantum systems one should build a new kind of computational engine, one based on Quantum Mechanics itself – a quantum computer! (Moreover, for the experts, the existing field of quantum interferometry with its multiparticle interference experiments would provide a starting point – in other words, they knew where to begin.)
In Quantum Mechanics, there is no “half a quantum” and energy jumps from 1 quantum to 2 quanta without passing through “1.5 quanta.” But there are more strange phenomena to deal with such as entanglement and superposition.
The phenomenon of entanglement refers to the fact that two particles prepared in a joint quantum state can be separated in space and yet remain correlated: a measurement on one instantly determines the outcome of the corresponding measurement on the other. Although there is no physical link between them, the particles still influence each other. Here we have a return to the physics of the magical “action at a distance,” a concept that goes back to the Middle Ages and William of Ockham (of “razor” renown); “action at a distance” was something thought banished from serious physics by Maxwell’s Equations in the 19th century. However, entanglement has been verified experimentally with photons, neutrinos, and other sub-atomic particles.
The phenomenon of superposition also applies to sub-atomic particles, but it is traditionally illustrated by the fable of Schrödinger’s Cat. Take a small amount of radioactive material with a half-life of one hour, meaning that it will decay within the hour with 50% probability – the decay is a random subatomic event that can happen at any moment during that hour. A cat is placed in a box, and an apparatus is set up so that the cat will be poisoned if and only if the radioactive material actually decays; an outside observer cannot know whether any of this has happened. At the end of the hour, the apparatus that would poison the cat is turned off. Naively, when the hour is up, one would think that the cat in the box is either alive or dead. However, if the cat is treated as a quantum system like an electron or photon, then while it waits in the box it is in two superposed states simultaneously – 50% alive and 50% dead; only when the diligent outside observer opens the box and sees the cat does the superposition collapse into one state (alive) or the other (dead).
Ironically, this fable was intended to poke fun at the interpretation of Quantum Mechanics put forth by the Copenhagen School led by Niels Bohr but it has become the classic story for illustrating superposition.
Confusing? Well, Niels Bohr famously said “And anyone who thinks they can talk about quantum theory without feeling dizzy hasn’t yet understood the first word about it.” But physicists are proud of the “quantum weirdness” of entanglement and superposition and even have Bell’s Theorem, a fundamental result that proves that quantum weirdness unavoidably comes with the territory.
All this made Albert Einstein himself uncomfortable with Quantum Mechanics; he described entanglement as “spooky action at a distance.” He made light of the field’s reliance on probability and randomness, saying “God does not play dice with the universe,” to which Bohr responded, “Einstein, don’t tell God what to do.”
Quantum computing is based on the qubit, a quantum system that emulates the {0,1} of the traditional binary bit of digital computers by means of two distinguishable quantum states. But, because qubits behave quantumly, one can capitalize on “quantum weirdness.”
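A qubit can be sketched classically as a pair of complex amplitudes for the states |0⟩ and |1⟩. The toy model below is a simulation only – real qubits are physical hardware, and none of these names come from any vendor’s API. It shows a Hadamard gate putting a qubit into an equal superposition, and a measurement collapsing it:

```python
import math
import random

# A qubit is modeled as a pair of amplitudes (a, b) for |0> and |1>,
# with |a|^2 + |b|^2 = 1.  (Toy model for illustration only.)

def hadamard(q):
    """Send a basis state into an equal superposition (and back again)."""
    a, b = q
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(q, rng=random.Random(0)):
    """Measurement collapses the state: 0 with probability |a|^2, else 1."""
    a, _ = q
    return 0 if rng.random() < abs(a) ** 2 else 1

q = hadamard((1, 0))                             # |0> -> equal superposition
print([round(abs(amp) ** 2, 2) for amp in q])    # [0.5, 0.5]
print(hadamard(q))                               # ~ (1.0, 0.0): back to |0>
print(measure(q))                                # randomly 0 or 1
```

Note the quantum weirdness in miniature: applying the gate twice returns the qubit to |0⟩ exactly – the two superposed paths interfere – whereas measuring in between would destroy that interference.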
A quantum computer solves a problem by – loosely speaking – exploring all the possibilities in the search space at once, without committing to any one of them until the end, when its state is measured (or observed); at that point, as with Schrödinger’s Cat, the superposition collapses and the qubits settle into the desired value. (In reality the trick is subtler: quantum algorithms choreograph interference so that wrong answers cancel out and right answers reinforce each other.)
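Grover’s search algorithm, a real quantum algorithm, is the textbook example of this style of computation, and its amplitude bookkeeping can be simulated with plain arithmetic (this is a classical simulation with illustrative names, not quantum code). After roughly (π/4)·√N rounds of “flip the marked item’s sign, then reflect everything about the mean,” nearly all the probability piles up on the marked item:

```python
import math

def grover(n_items: int, marked: int) -> list:
    """Classically simulate Grover's search; return the probability
    of measuring each item at the end."""
    # Start in a uniform superposition over all items
    amps = [1 / math.sqrt(n_items)] * n_items
    iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amps[marked] = -amps[marked]          # oracle: flip the marked sign
        mean = sum(amps) / n_items            # diffusion: invert about mean
        amps = [2 * mean - a for a in amps]
    return [a * a for a in amps]

probs = grover(16, marked=3)
print(round(probs[3], 3))   # the marked item dominates the distribution
```

The payoff: a classical search over N items needs about N looks, Grover needs about √N – a genuine quantum speedup, though not the exponential one that combinatorial explosion would demand.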
Recently, Google announced a milestone in quantum computing: in 200 seconds their quantum system solved a problem that would take a traditional digital supercomputer about 10,000 years to complete! (The problem was to sample the output of a random quantum circuit – in effect, to generate certifiably random numbers.) And they claimed Quantum Supremacy in the race with IBM. The latter contested this claim, of course, saying (among other things) that their digital computers could solve that problem in only 2.5 days! So the jury is still out on all this – but something is happening, and both companies are investing most seriously in the field.
With quantum computing, Mathematical Logic might still play an important role in reaching the Technological Singularity (the point where machine intelligence will surpass human intelligence) if the impasse presented by combinatorial explosion can indeed be broken through. One thing is for sure: quantum computing would signal the end of the encryption technique which is the basis of the s (for secure) in “https://”. This technique (known by its authors’ initials as RSA) relies on the computational difficulty of factoring large numbers – a task that overwhelms digital algorithms but that a quantum computer running Shor’s algorithm could handle. But have no fear: Microsoft and others are already working on encryption that will be immune to quantum computing.
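RSA’s dependence on factoring can be seen in a toy example (tiny illustrative primes; real keys use numbers thousands of bits long):

```python
# Toy RSA with tiny primes -- illustrative only, utterly insecure.
p, q = 61, 53                # the secret primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: modular inverse of e

msg = 1234
cipher = pow(msg, e, n)      # anyone can encrypt with (e, n)
plain = pow(cipher, d, n)    # only the holder of d can decrypt
assert plain == msg

# An eavesdropper who can factor n = 3233 back into 61 * 53 can
# recompute phi and d, and read everything.  Shor's quantum algorithm
# factors n efficiently -- which is exactly the threat to RSA.
print("decrypted:", plain)
```

Security rests entirely on the fact that, for a 2048-bit n, no known digital algorithm can recover p and q in any reasonable time; a large quantum computer would change that.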
Scientists have long marveled at the extraordinary fit between mathematics and physics, from the differential equation to the geometry of space-time – to the extent that today theoretical physicists start by looking at the mathematics to point them in new directions. Quantum computing would be an elegant “thank you” on the part of physics; it would be a revolution in our understanding of algorithms and a challenge to the (extended) Church-Turing Thesis that the Turing Machine model – and, therefore, the digital computer – represents the limit of what can ever be computed efficiently.
So after a bumpy start in the 1950s and ‘60s and a history of over-promising, AI appears to be well poised for the race to the Technological Singularity: the field has had several decades of accelerating progress; its effects can be seen everywhere, often anthropomorphized as with Siri and Alexa. So what can be expected in the next decade, in the 3rd Wave of AI? More to come. Affaire à suivre.

2 thoughts on “AI V: The Random and the Quantum”

  1. Lot of interesting points here, which I am surely math-incompetent to address, but ones of which I do, nonetheless, have some related knowledge and/or can intuit analogies, any one of which parallel thoughts below might spark a discussion, any one of which is also too big to develop here. ☺

    Some thoughts:

    1) “Random” may actually be in the eye of a beholder who may/does/can not grasp an inherent underlying and even self-correcting order, one that cannot be measured physically. To the same person who tells God he can now make man from dirt by himself and therefore no longer needs Him, God gently reminds Him, “Uh…that’s MY dirt.”

    2) Action-at-a-distance entanglement, even if only limited to the subatomic world, may yet prove to be in the blind eye of the experimenter who believes his equations have real-world, macro-relevance. One must distinguish between the quite predictable Gaussian behavior of large groups of actors of whatever kind, a predictability that turns to probabilities for one or a few actors, as stated here and also by me a post or two back. Schrödinger’s cat is for Physics 101 to spark interest among freshmen, not for helping the world to be free from cats or to have a good alibi for not being where some say we have been seen.

    3) Saying something takes 10,000 years to calculate – vs. 200 seconds or 2.5 days – but not how that (esp. the first) measurement was made or estimated, is a bit like trying to grasp space travel even to the nearest star, 4 light-years away; most star groups of interest are much farther. Even at the speed of light, about 186,000 miles per second, the closest star is a 4-year journey; we cannot, by currently known means, go faster. At 1/100th of that speed, though – a still dazzling 1,860 miles per second – the journey would take 400 years… we think, by extrapolating the rules we know here, today. We can still talk about that 400-year journey, but we have no idea what it means in terms of the many generations of humans required to make it, or perhaps of one group of well-frozen folks.

    4) Relevant to #3: There have to be “other ways” to get to such a destination that bypass those so-far-unsurpassable limits… and there are, for we have the familiar worm-hole travel models, which include Scotty’s Warp-Drive engine and other probable fictions. Such succeed by bending the rules/altering the premises of travel in space by… bending, warping, space. But maybe there are other ways….

    5) David Bohm, famously, and others—I am pretty much a fan—believed that time was a fractal… which, of course, it is, at least for God for Whom time is an everlasting NOW, which includes our non-infinite series of now’s; it is also for us in some ways—e.g. our sense of unchanging I which merely, over our years, acquires changing attributes. Travel in THAT dimension—ultimately doable, if the paradox of change-possibility once out-of-our-time remains unresolved—might also save us worm-hole special-effects… except that may also be part of what time travel is—a different kind of space travel measured by the mile not only hour.

    6) I still would not want to be on any ship that used a quantum drive engine as in The Hitchhiker’s Guide to the Galaxy.
