AI IV: Exponential Growth

The phenomenon of exponential growth shapes the way Artificial Intelligence (AI) is bringing us toward the Technological Singularity, the point at which machine intelligence will catch up to human intelligence – and surpass it quickly thereafter.
The phenomenon of exponential growth is also much with us today because of the coronavirus, whose spread gives a simple example: 1 person infects 2, 2 infect 4, 4 infect 8, and so on; after 20 iterations of this, over a million people are newly infected in that round alone – over two million counting every round. Each step doubles the number of new patients, and by the time you get to step 20 the number of patients is huge – and all that from one initial case.
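The doubling arithmetic above can be checked in a few lines (a minimal sketch of the idealized model, not an epidemiological simulation):

```python
# Doubling step by step: each iteration doubles the number of new cases.
new_cases = 1   # the single initial case
total = 1
for step in range(20):
    new_cases *= 2
    total += new_cases

print(new_cases)  # 1048576 new cases after 20 doublings (2**20)
print(total)      # 2097151 infected in all (2**21 - 1)
```

Twenty doublings turn one case into a million; two more doublings would pass four million.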
A recent New York Times article (April 25) uses the example of a pond being overrun by lily pads to illustrate the exponential spread of the virus. At first there is only one lily pad but once the pond is half-covered with lily pads, the next day the entire pond is covered.
For another example, in a classic tale from medieval India, the King wants to reward the man who has just invented Chess; the man requests 1 grain of wheat on a corner square of the chessboard and then twice the previous amount on each successive square until all squares are accounted for. Naively, the King agrees, thinking he is getting a bargain of sorts; but after a few squares, they realize that soon there would be no wheat left in the kingdom! According to some accounts, the inventor does not survive the King’s wrath; according to others he becomes a high-ranking minister. BTW, in the end the total number of grains of wheat on the chessboard would come to 18,446,744,073,709,551,615 – over 18 quintillion, way more than current annual world wheat production.
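The chessboard total is easy to verify – it is the sum of a geometric series, one grain doubling across all 64 squares:

```python
# 1 + 2 + 4 + ... + 2**63, one term per square of the chessboard.
total = sum(2**square for square in range(64))
print(total)  # 18446744073709551615, i.e. 2**64 - 1
```

The closed form 2**64 - 1 follows because each new square holds one more grain than all the previous squares combined.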
Such huge numbers were not new to the Hindu sages of the middle ages; in fact it was they who invented the 10 digits {0,1,2,3,4,5,6,7,8,9} that are universally used today. Their motivation came from Hindu Vedic cosmology, where very, very large numbers are required; for example, a day for Brahma, the creator, endures for about 4,320,000,000 solar years – in Roman numerals that would take over 4 million M’s. The trick in the base 10 Hindu-Arabic numerals is that the numbers represented can grow exponentially while the lengths of their written representations grow only one digit at a time; thus, 10 to the power n is written down using n+1 digits: 100 is 10 squared, 1000 is 10 cubed etc. The same kind of thing applies to the numbers in base 2 used in digital computers: 2 to the power n, 2^n, is written down using n+1 bits: 100 is 4, 1000 is 8, 10000 is 16 etc. Roman numerals (and the equivalent systems that were used throughout the medieval world) are mathematically in base 1, so there is no real compression – even with the abbreviation M for 1000, the number represented is only 1000 times the length of the Roman numeral needed to write it down, and so 1 quintillion still needs 1 quadrillion M’s. For yet another historical note, the word algorithm is derived from the name of the medieval Persian mathematician Al-Khwarizmi, who wrote a treatise on the Hindu-Arabic numerals in the early 9th Century.
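The compression offered by positional notation can be seen directly – the value grows exponentially while the written representation grows only linearly (a minimal sketch):

```python
# 10**n takes n+1 digits in base 10; 2**n takes n+1 bits in base 2.
for n in range(1, 7):
    digits_base10 = len(str(10 ** n))      # n + 1
    bits_base2 = len(bin(2 ** n)) - 2      # n + 1 (strip the '0b' prefix)
    print(n, digits_base10, bits_base2)
```

A base-1 tally system, by contrast, needs roughly as many marks as the value itself – hence the quadrillion M's.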
The introduction of the Hindu-Arabic numerals is arguably the most important event in the history of computation – critical to AI, where the fundamental idea is that reasoning is a form of computation, a point made as early as 1655 by the dystopian British philosopher Thomas Hobbes, who wrote “By reasoning, I understand computation.”
Compound interest is another example of exponential growth. The principal on a 30-year balloon mortgage for $100,000 at 7.25 percent interest compounded annually would double in 10 years, double again in 20 years and come to over $800,000 at maturity! The interest feeds on itself. Interestingly, when Fibonacci popularized the magical Hindu-Arabic numerals in Europe with his Book of Calculation (Liber Abaci, 1202), he included the example of compound interest (nigh impossible using Roman numerals – in any case, charging compound interest was outlawed in the Roman Empire). Eventually, the Florentine bankers further up the River Arno from Fibonacci’s home town of Pisa took notice and the Renaissance was born – the rest is history. BTW, the storied sequence of Fibonacci numbers also grows exponentially. For the whole story, consult the catchily titled Finding Fibonacci: The Quest to Rediscover the Forgotten Mathematical Genius Who Changed the World by Keith Devlin.
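The mortgage figures can be reproduced directly (a sketch using the rate and term quoted above; annual compounding, no payments, as in a balloon arrangement):

```python
# Balance of $100,000 compounding at 7.25% per year for 30 years.
principal = 100_000.0
rate = 0.0725
balance = principal
for year in range(1, 31):
    balance *= 1 + rate
    if year in (10, 20, 30):
        print(year, round(balance))  # roughly doubles every 10 years
```

At 7.25 percent, the doubling time matches the "rule of 72": 72 / 7.25 is just under 10 years.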
Yet another example is due to the English country parson Thomas Malthus. In An Essay on the Principle of Population (1798), writing in opposition to the feel-good optimism of the Enlightenment, he argued that the food supply will only grow at a slow pace but that the population will increase exponentially, leading to an eventual catastrophe. (Malthus employed the terminology of infinite series, describing the growth of the food supply as arithmetic and that of the population as geometric.) However, even as the earth’s population has increased apace since WW II, the food supply has kept up – thanks on the one hand to things like the Green Revolution and on the other hand to things like animal cruelty, antibiotic-resistant bacteria, methane pollution, overzealous insecticides, the crash of the bee population, overfishing, GMO frankenfoods, the food divide, deforestation and widespread spoliation of the environment. Clearly, we have to do a better job; however, despite the fact that all this craziness is well known, protest has been ineffective; “hats off” though to the B-film industry, which spoke truth to power with the surprise hit horror movie Frankenfish (2004).
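Malthus's contrast between arithmetic and geometric growth is stark even over a handful of periods (a toy sketch with made-up unit quantities, just to show the divergence):

```python
# Arithmetic growth adds a constant each period; geometric growth
# multiplies by a constant each period (Malthus's two ratios).
food = 1.0
population = 1.0
for period in range(10):
    food += 1.0          # 2, 3, 4, ... (arithmetic)
    population *= 2.0    # 2, 4, 8, ... (geometric)

print(food, population)  # 11.0 vs 1024.0 after 10 periods
```

Whatever the constants, the geometric series eventually overtakes any arithmetic one – that is the whole of Malthus's mathematical point.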
Especially appalling from an historical viewpoint are the packed feed lots where beef cattle are fed corn (which makes them sick) to fatten them for market. The glory of the Indo-European tribes was their cattle and their husbandry – a family’s worth was the size of their herd! From the Caspian Steppe, some went East to India, where today the cow is iconic; others went to Europe and then as far as the North American West to find grazing land where the cattle could eat grass and ruminate. The root of the Latin word for money, pecunia, is the word for cow: pecus; the connection persists to this day in English words like pecuniary and impecunious. So deep is the connection that feeding corn to beef cattle would surely bring tears to the pagan gods of Mt. Olympus and Valhalla.
A most famous example of exponential growth in technology is Moore’s Law: in 1975, Gordon E. Moore, a co-founder of Intel, noted that the number of transistors on a microchip was doubling every two years even as the cost was being halved – adding that this was likely to continue. Amazingly, this prediction has held true into the 21st Century, and the number of transistors on an integrated circuit has gone from 5 thousand to over 1 billion. Moore’s Law appears to be leveling off now, but it might well take off again with new insights. Indeed, exponential bursts are part of the evolutionary process that is technology. Like compound interest, progress feeds on itself. The future is getting closer all the time – advances that once would have been the stuff of science fiction can now be expected in a decade or two. This is an important element in the march toward the Technological Singularity.
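Taking the figures quoted above at face value – 5 thousand transistors doubling every two years – one can ask how long the climb to a billion takes (a back-of-the-envelope sketch, not a history of actual chip generations):

```python
# Doubling every two years, starting from 5,000 transistors.
transistors = 5_000
years = 0
while transistors < 1_000_000_000:
    transistors *= 2
    years += 2

print(years, transistors)  # 36 years of doublings crosses a billion
```

Thirty-six years from 1975 lands around 2011 – consistent with the claim that the prediction held into the 21st Century.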
However, exponential growth of a different kind can be a stumbling block for AI and other areas of Computer Science because it leads to Combinatorial Explosion: situations where the time for an algorithm to return a solution would surpass the life expectancy of our solar system, if not of the galaxy. This can happen if the size of the “search space” grows exponentially: there will simply be too many combinations for the algorithm to account for. For example, consider the archetypical Traveling Salesman Problem (TSP): given a list of cities, the distances between them and the city to start from, find the shortest route that visits all the cities and returns to the start city. For n cities, the number of possible routes that any computer algorithm has to reckon with in order to return the optimal solution is on the order of the product of all the numbers from 1 through n, aka n factorial (n!), a quantity that grows much faster than the 2^n of the examples above.
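Brute-force TSP makes the explosion concrete: with the start city fixed, there are (n-1)! orderings of the remaining cities to try. A minimal sketch with a hypothetical 4-city distance matrix (the matrix values are made up for illustration):

```python
from itertools import permutations

# Hypothetical symmetric distance matrix; city 0 is the start city.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
n = len(dist)

def route_length(order):
    route = (0,) + order + (0,)  # leave from and return to city 0
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Try all (n-1)! orderings of the remaining cities – fine for 4 cities,
# hopeless for 30: 29! is about 8.8 * 10**30 routes.
best = min(permutations(range(1, n)), key=route_length)
print(best, route_length(best))
```

For four cities that is 6 routes; for 30 cities it is more routes than any computer could ever enumerate.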
One reason that we resort often to the TSP as an example is that it is representative of a large class of important combinatorial problems – a good algorithm for one can translate into a good algorithm for all of them. (These challenging problems are known in the trade as NP-Complete.) Current analysis, based on the universal role of problems like the TSP, makes a strong case that fundamentally better algorithms cannot be found for any of them – this is not unrelated to the limits on provability and computability uncovered by Gödel and Turing. Another representative of this class is the problem of provability and unprovability in propositional logic: p → q, truth-tables and all that. As a result, Mathematical Logic as such does not play the role in AI that was once predicted for it; the stunning progress in machine learning and other areas has relied on Connectionism, Bayesian Networks, emulating biological and physical processes etc. rather than on Logic itself.
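The exponential blow-up in propositional logic is visible in the simplest method: checking a formula by truth table takes 2^n rows for n variables. A minimal sketch (the formula chosen here is just an illustrative tautology, modus ponens):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Modus ponens as a formula: ((p -> q) and p) -> q
def formula(p, q):
    return implies(implies(p, q) and p, q)

# Exhaustive truth table: 2**n rows for n variables – the same kind of
# exponential search space as the TSP's factorial one.
tautology = all(formula(p, q) for p, q in product([False, True], repeat=2))
print(tautology)  # True
```

Two variables need only 4 rows; a formula with 100 variables would need 2^100 – which is why exhaustive logical search does not scale.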
One side effect of this humbling of Logic is that we are beginning to look with greater respect at models of intelligence different from our own (conscious) way of thinking. Up till now, our intelligence has been the gold standard – for philosophers the definition itself of intelligence, for theologians even the model for the mind of God.
While a direct attack on the phenomenon of Combinatorial Explosion seems unlikely to yield results, researchers and developers have turned to techniques that use randomness, Statistics and Probability to help in decision making in applications. Introducing uncertainty into algorithms might well make Al-Khwarizmi turn in his grave, but it has worked for Quantum Mechanics – the physics that brought us television and the transistor. And it has also worked in AI systems for decision making that deal with uncertainty, notably with Bayesian Networks. So perhaps randomized algorithms and even Quantum Mechanics can open the way to some further progress in AI on this front. Then too, the creativity of researchers in exploiting the genius of nature knows no limits and can boggle the imagination: on the horizon are virus-built lithium batteries and amoeba-inspired algorithms. More to come – a story to be continued.
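The flavor of a randomized algorithm can be conveyed by a classic toy example, Monte Carlo estimation of pi: accept a probably-approximately-correct answer in exchange for simplicity (a minimal sketch):

```python
import random

# Sample random points in the unit square and count how many fall
# inside the quarter circle of radius 1; that fraction approaches pi/4.
random.seed(0)  # fixed seed so the sketch is reproducible
samples = 100_000
inside = sum(
    1 for _ in range(samples)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
estimate = 4 * inside / samples
print(estimate)
```

The answer is only approximate, but the error shrinks as samples grow – and no combinatorial enumeration is ever attempted.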

One thought on “AI IV: Exponential Growth”

  1. A few thoughts:

    1) The apparent eco-extreme perspective of the author seems loud and clear in the reasons given for our planet’s population not starving to death. Debating those partial-truths and misdirecting thoughts is beyond the scope of a short, polite reply – more the stuff of a chat or two or ten over a couple of beers at the very least. I will note, though, that the “Green Revolution” – a term lately coined – had little to nothing to do with the food bounty we have achieved; AND, I also add, that there is much bad – in the sense of ultimate-purpose-problematic – about how we did get there. It’s not just as presented here.

    2) Moore’s law – plateauing as it must – is but the reality of an asymptotic approach to a limit that would require a step change for further improvement. An analogy: In chemical or atomic reactions, a “reaction rate” is predictable when there are large numbers of molecules/atoms present. When they get to be too few, the Gaussian predictability of any rate constant shifts to being one of probability. E.g., the half-life of a sample of radioactive C14 (the time by which half of it has decomposed) is almost 6,000 years. With one atom of C14, however, that becomes not a rate but the probability that it will decompose in that same period: right now… or in 6,000 years, or anywhere in between. In between – between one and many – the number of decomposing atoms becomes decreasingly predictable. Moore’s law is approaching that limit as the sizes (number of atoms) of the devices shrink.

    “New Insights” therefore, as noted correctly, will indeed be needed, but they will not be along the existing silicon path toward that limit to which the Law approaches asymptotically; it cannot cross the line. Other entirely new ways will be needed. I have long considered light-based computing a method of promise, and some today tout quantum computing… about which I have nothing to add, knowing virtually nothing about it, not to mention my only seeing silly uses for it such as those found in the Hitchhiker’s Guide to the Galaxy.

    3) There is no doubt that there has been and will continue to be “exponential growth” of computer knowledge, and in many ways it already surpasses human abilities, in complex-processing speed at least. BUT… as I think I have opined in all previous posts on this: It will never be (for it can never be) the equivalent of human thought… at least not for those of us who believe man is more than a material processor; computers, which man created, are only that – material and material process, mimicking human-like thought. Man too maybe once was only that, but unpacking that aside is a long post unto itself. 🙂
