After the Chinese invasion and takeover of Tibet in the 1950s, China became a practitioner of panda diplomacy, sending those cuddly bears to zoos around the world to improve relations with various countries. But back then, China was still something of a sleepy economic backwater. Napoleon once opined “Let China sleep, for when she wakes she will shake the world” – an admonition that has become a prediction. In recent times, China has emerged as the most dynamic country on the planet: Shanghai has long since replaced New York and Chicago as the place for daring skyscraper architecture, the Chinese economy is the second largest in the world, and the New Silk Road project extends from Beijing across Asia and into Europe itself. Indeed, considerable Silk Road investment in Northern Italy in industries like leather goods and fashion has brought tens of thousands of Chinese workers (many illegally) to the Bel Paese to make luxury goods with the label “Made in Italy”; this led to scheduled direct flights from Milan Bergamo Airport (BGY) to Wuhan Airport (WUH) – a root cause of the virulence of the coronavirus outbreak in Northern Italy.
Indeed, China is the center of a new kind of capitalist system, state-controlled capitalism, where the government is the principal actor. But the government of China is run by the Chinese Communist Party, the organization founded by Mao Zedong and others in the 1920s to combat the gangster capitalism of the era – for deep background, there is the campy movie classic Shanghai Express (Marlene Dietrich: “It took more than one man to change my name to Shanghai Lily”). So has the party of the people lost its bearings, or is something else going on? Mystère.
The Maoist era in China saw the economy mismanaged, saw educated cadres and scientists exiled to rural areas to learn the joys of farming, saw the Great Leap Forward lead to the deaths of millions. Following Mao’s own death in 1976, Deng Xiaoping emerged as “paramount leader” of the Communist Party and began the dramatic transformation of the country into the economic behemoth it is today.
Deng’s modernization program took on new urgency in the early 1990s when the fall of the Soviet Union made Western capitalism a yet more formidable opponent. However, the idea that capitalist economic practices were going to prove necessary on the road to Communism was not new, far from it. Marx himself wrote that further scientific and industrial progress under capitalism was going to be necessary to have the tools in place for the transition to communism. Then too there was the example of Lenin who thought the Russia inherited from the Czars was too backward for socialism; in 1918 he wrote
“Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.”
Accordingly, Lenin resorted to market-based incentives with the New Economic Policy in the USSR in the 1920s. So there is nothing new here: in China, Communism is alive and well – just taking a deep breath, getting its bearings.
Normally we associate capitalism with agile democracies like the US and the UK rather than autocratic monoliths like China. But capitalism has worked its wonders before in autocratic societies: prior to the World Wars of the 20th century, there was a thriving capitalist system in Europe in the imperial countries of Germany and Austria-Hungary which created the society that brought us the internal combustion engine and the Diesel engine, Quantum Mechanics and the Theory of Relativity, Wagner and Mahler, Freud and Nietzsche. All of which bodes well for the new China – to some libertarian thinkers, democracy just inhibits capitalism.
The US formally recognized the People’s Republic of China in 1979, a few years after Nixon’s legendary visit and the gift of pandas Ling-Ling and Hsing-Hsing to the Washington DC zoo. Deng’s policies were bolstered too by events in the capitalist world itself. There was the return of Japan and Germany to dominant positions in high-end manufacturing by the 1970s: machine tools, automobiles, microwave ovens, cameras, and so on – with Korea, Taiwan and Singapore following close behind. Concomitantly, the UK and the US turned from industrial capitalism to financial capitalism in the era of Margaret Thatcher and Ronald Reagan. Industrial production was de-emphasized as more money could be made in financing things than in making them. This created a vacuum and China was poised to fill the void – rural populations were uprooted to work in manufacturing plants in often brutal conditions, ironically creating in China itself the kind of social upheaval and exploitation of labor that Marx and Engels denounced in the 19th century. But the resulting boom in the Chinese economy led to membership in the World Trade Organization in 2001!
The displaced rural populations were crammed into ever more crowded cities. This exacerbated the serious problems China has long had with transmission of animal viruses to humans – Asian Flu, Hong Kong Flu, Bird Flu, SARS and now COVID-19. The fact that China is no longer isolated as it once was but a huge exporter and importer of goods and services from all over the world has made these virus transmissions a frightening global menace.
The corona pandemic is raging as the world finds itself in a situation eerily like that of August 1914 in Europe: two powerful opposing capitalist systems – one led by democracies, the other by an autocratic central government. The idea of a full-scale war between nuclear powers is unthinkable. Instead, there is hope that this latest virus crisis will be for China and the West what William James called “the moral equivalent of war,” leading to joint mobilization and cooperation to make the world a safer place; hopefully, in the process, military posturing will be transformed into healthy economic competition so that the interests of humanity as a whole are served in a kinder, gentler world. Hope springs eternal, perhaps naively – but consider the alternative.
Alan Turing was a Computer Science pioneer whose brilliant and tragic life has been the subject of books, plays and films – most recently The Imitation Game with Benedict Cumberbatch. Turing and others were excited by the possibility of Artificial Intelligence (AI) from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test where, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” – whence the movie title. (Futurologist Ray Kurzweil has predicted that a machine will pass the Turing Test by the year 2029.)
Historically, the practice of war has hewn closely to developments in technology. And warfare, in turn, has made demands on technology. Indeed, even men of genius like Archimedes and Leonardo da Vinci developed weapons systems. However, the relationship between matters military and matters technological became almost symbiotic with WWII. Technological feats such as radar, nuclear power, rockets, missiles, jet planes and the digital computer are all associated with the war efforts of the different powers of that conflict. Certainly, the fundamental research behind these achievements was well underway by the 1930s but the war determined which areas of technology should be prioritized, thereby creating special concentrations of brilliant scientific talent. The Manhattan Project itself is studied as a model of large scale R&D; furthermore, the industrial organization of the war period and military operations such as countering submarine warfare gave rise to a new mathematical discipline, aptly called Operations Research, which is now taught in Business Schools under the name Management Science.
In his masterful treatise War in the Age of Intelligent Machines (1991), Manuel DeLanda summarizes it thusly: “The war … forged new bonds between the military and scientific communities. Never before had science been applied at so grand a scale to such a variety of warfare problems.”
However, the reliance on military funding might now be skewing technological progress, leading it in less fruitful directions than capitalism or science-as-usual would take it. Perhaps this is why, instead of addressing the environmental crisis, post-WWII technological progress has perfected drones and fueled the growth of organizations such as the NSA with its surveillance prowess. In the process, it has created surveillance capitalism: our personal behavioral data are amassed by AI-enhanced software – Fitbit, Alexa, Siri, Google, FaceBook, … ; the data are analyzed and sold for targeted advertising and other feeds to guide us in our lives; this is only going to get worse as the internet of things puts sensors and listening devices throughout the home and machines start to shepherd us through our day – a GPS for everything.
All that said, since WWII the US Military has been a very strong supporter of research into AI; in particular funding has come from the Defense Advanced Research Projects Agency (DARPA). It is worth noting that one of their other projects was the ARPANET which was once the sole domain of the military and research universities; this became the internet of today when liberated for general use and eCommerce by the High Performance Computing and Communications Act (“Gore Act”) of 1991.
The field of AI was only founded formally in 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term Artificial Intelligence itself was coined.
Following a scheme put forth by John Launchbury of DARPA, the timeline of AI can be broken into three parts. The 1st Wave (1950-2000) saw the development of three fundamental approaches to AI – one based on powerful Search Algorithms, one on Mathematical Logic and one on Connectionism, imitating the structure of neurons in the human brain. Connectionism developed slowly in the 1st Wave but exploded in the 2nd Wave (2000-2020). We are now entering the 3rd Wave.
Claude Shannon, a scientist at the legendary Bell Labs, was a participant at the Dartmouth conference. His earlier work on implementing Boolean Logic with electromagnetic switches is the basis of computer circuit design – this was done in his Master’s Thesis at MIT, making it probably the most important Master’s Thesis ever written. In 1950, Shannon published a beautiful paper Programming a Computer for Playing Chess, which laid the groundwork for game-playing algorithms based on searching ahead and evaluating the quality of possible moves.
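Shannon’s look-ahead idea can be sketched in a few lines of Python. The minimax routine below searches a game tree and backs up an evaluation for each position; the two-ply tree is a made-up toy example, not a position from Shannon’s paper:

```python
# Minimal minimax in the spirit of Shannon's 1950 proposal: search ahead,
# evaluate the leaves, and back the values up the tree.

def minimax(node, maximizing):
    # Leaves carry a numeric evaluation; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply toy tree: the maximizer picks the branch whose worst case is best.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # the minimizer would answer 3 or 2; max picks 3
```

Real chess programs add an evaluation function for unfinished positions and cut the search off at a fixed depth, exactly as Shannon proposed.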
Fast Forward: Shannon’s approach led to the triumph in 1997 of IBM’s Deep Blue computer, which defeated reigning chess champion Garry Kasparov in a match. And things have accelerated since – one can now run even more powerful chess programs on a laptop.
Known as the “first AI program”, Logic Theorist was developed in 1956 by Allen Newell, Herbert A. Simon and Cliff Shaw – Simon and Newell were also at the Dartmouth Conference (Shaw wasn’t). The system was able to prove 38 of the first 52 theorems from Russell and Whitehead’s Principia Mathematica and in some cases to find more elegant proofs! Logic Theorist established that digital computers could do more than crunch numbers, that programs could deal with symbols and reasoning.
With characteristic boldness, Simon (who was also a Nobel prize winner in Economics) wrote
[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.
Again with his characteristic boldness, Simon predicted in 1957 that computer chess programs would outperform humans within “ten years” but that was wrong by some thirty years! In fact, “over-promising” has plagued AI over the years – but presumably all that is behind us now.
AI has also proved all too seductive for researchers and companies. For example, at Xerox PARC in the 1970s, the computer mouse, the Ethernet and WYSIWYG editors (What you see is what you get) were invented. However, rather than commercializing these advances for a large market as Apple would do with the Macintosh, Xerox produced the Dandelion, a $50,000 workstation designed for work on AI by elite programmers.
The Liar’s Paradox (“This statement is false”) was magically transformed into the Incompleteness Theorem by Kurt Gödel in 1931 by exploiting self-reference in systems of mathematical axioms. With Turing Machines, an algorithm can be the input to an algorithm (even to itself). And indeed, the power of self-reference gives rise to variants of the Liar’s Paradox that become theorems about Turing machines and algorithms. Thus, there is no general method for telling how long an algorithm or program will run short of actually running it; and, be warned, it might run forever – and there is no sure way to tell that in advance.
In a similar vein, it turns out that the approach through Logic soon ran into the formidable barrier called Combinatorial Explosion: for a large family of mathematical problems, every known algorithm takes impossibly long to reach a conclusion as the problem size grows – for example, there is the Traveling Salesman Problem:
Given a set of cities and distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.
This math problem is not only important to salesmen but also to the design of circuit boards, to DNA sequencing, etc. Again, the impasse created by Combinatorial Explosion is not unrelated to the issues of limitation in Mathematics and Computer Science uncovered by Gödel and Turing.
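A brute-force solution makes the explosion concrete. The 4-city distance matrix below is invented for illustration; with n cities the loop runs through (n-1)! tours – already 362,880 at n = 10, and utterly hopeless at n = 20:

```python
from itertools import permutations

# Exact brute-force Traveling Salesman on a tiny, made-up distance matrix.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def shortest_tour(dist):
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

print(shortest_tour(dist))  # shortest round trip has length 18
```

The exact answer is found, but only because the instance is tiny – the point of Combinatorial Explosion is that this strategy collapses as n grows.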
Expert Systems are an important technology of the 1st Wave; they are based on the simplified logic of if-then rules:
If it’s Tuesday, this must be Belgium.
As the rules are “fired” (applied), a database of information called a “knowledge base” is updated, making it possible to fire more rules. Major steps in this area include the DENDRAL and MYCIN expert systems developed at Stanford University in the 1960s and 1970s.
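The fire-and-update cycle can be sketched in a toy forward-chaining engine – the rules and facts here are invented for illustration (with a nod to the Belgium joke):

```python
# Toy forward chaining: a rule fires when all its premises are in the
# knowledge base; its conclusion is added, which may let further rules fire.

rules = [
    ({"it_is_tuesday"}, "this_is_belgium"),
    ({"this_is_belgium"}, "order_moules_frites"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # "fire" the rule
                changed = True
    return facts

print(forward_chain({"it_is_tuesday"}, rules))
```

Starting from the single fact that it is Tuesday, the first rule fires, and its conclusion then triggers the second – the chaining that makes expert systems more than a lookup table.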
A problem for MYCIN, which assisted doctors in the identification of bacteria causing infections, was that it had to deal with uncertainty and work with chains of propositions such as:
“Presence of A implies Condition B with 50% certainty”
“Condition B implies Condition C with 50% certainty”
One is tempted to say that presence of A implies C with 25% certainty, but (1) that is not mathematically correct in general and (2) if applied to a few more rules in the chain, that 25% will soon be down to an unworkable 1.5%.
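The collapse is easy to see numerically – multiplying 50% factors along a chain drives the combined certainty toward zero (six links already give about 1.5%):

```python
# Naive certainty chaining: multiply the certainty factors along a rule chain.
# This is the tempting-but-wrong calculation described above.

def chain_certainty(factors):
    c = 1.0
    for f in factors:
        c *= f
    return c

print(chain_certainty([0.5, 0.5]))          # two links: 0.25
print(round(chain_certainty([0.5] * 6), 4)) # six links: 0.0156, i.e. ~1.5%
```

MYCIN had to adopt its own ad hoc “certainty factor” calculus precisely to avoid this degeneration.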
Still, MYCIN was right about 65% of the time, meaning it performed as well as the expert MDs of the time. Another problem came up, though, when a system derived from MYCIN was being deployed in the 1970s: back then MDs did not type! Nevertheless, this area of research led to the development of Knowledge Engineering Environments, which built rules derived from the knowledge of experts in different fields – here one problem was that the experts (stock brokers, for example) often did not have enough expertise to encode to make the enterprise worthwhile, although they could type!
For all that, Rule Based Systems are widespread today. For example, IBM has a software product marketed as a “Business Rules Management System.” A sample application of this software is that it enables an eCommerce firm to update features of the customer interaction with its web page – such as changing the way to compute the discount on a product – on the fly without degrading performance and without calling IBM or having to recompile the system.
To better deal with reasoning and uncertainty, Bayesian Networks were introduced by UCLA Professor Judea Pearl in 1985 to address the problem of updating probabilities when new information becomes available. The term Bayesian comes from a theorem of the 18th century Presbyterian minister Thomas Bayes on what is called “conditional probability” – here is an example of how Bayes’ Theorem works:
In a footrace, Jim has beaten Bob only 25% of the time but of the 4 days they’ve done this, it was raining twice and Jim was victorious on one of those days. They are racing again tomorrow. What is the likelihood that Jim will win? Oh, one more thing, the forecast is that it will certainly be raining tomorrow.
At first, one would say 25% but given the new information that rain is forecast, a Bayesian Network would update the probability to 50%.
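The update is just conditional probability worked out on the race data – of the 4 race days, Jim won 1, it rained on 2, and Jim won 1 of those rainy days:

```python
# Conditional probability on the footrace example:
# P(win | rain) = P(win and rain) / P(rain).

p_win          = 1 / 4   # Jim won 1 of the 4 races
p_rain         = 2 / 4   # it rained on 2 of the 4 days
p_win_and_rain = 1 / 4   # Jim won on 1 of the 4 days, and it was a rainy one

p_win_given_rain = p_win_and_rain / p_rain
print(p_win, "->", p_win_given_rain)  # 0.25 -> 0.5 once rain is forecast
```

A Bayesian Network automates exactly this kind of update across many interrelated variables, where doing it by hand quickly becomes intractable.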
Reasoning under uncertainty is a real challenge. A Nobel Prize in economics was recently awarded to Daniel Kahneman based on his work with the late Amos Tversky on just how ill-equipped humans are to deal with it. (For more on their work, there is Michael Lewis’s best-selling book The Undoing Project.) As with MYCIN, where the human experts themselves were only right 65% of the time, the work of Kahneman and Tversky illustrates that medical people can have a lot of trouble sorting through the likely and unlikely causes of a patient’s condition – these mental gymnastics are just very challenging for humans and we have to hope that AI can come to the rescue.
Bayesian Networks are impressive constructions and play an important role in multiple AI techniques including Machine Learning. Indeed Machine Learning has become an ever more impressive technology and underlies many of the success stories of Connectionism and the 2nd Wave of AI. More to come.
Artificial Intelligence (AI) is the technology that is critical to getting humanity to the Promised Land of the Singularity, where machines will be as intelligent as human beings.
The roots of modern AI can be traced to attempts by classical philosophers to describe human thinking in systematic terms.
Aristotle’s syllogism exploited the idea that there is structure to logical reasoning: “All men are mortal; Socrates is a man; therefore Socrates is mortal.”
The ancient world was puzzled by the way syntax could create semantic confusion; for example, The Liar’s Paradox: “This statement is false” is false if it is true and true if it is false.
Also, in the classical world, there was what became the best-selling textbook of all time: Euclid’s Elements where all plane geometry flows logically from axioms and postulates.
In the Middle Ages in Western Europe, Aristotle’s Logic was studied intently as some theologians debated the number of angels that could fit on the head of a pin while others formulated proofs of the existence of God, notably St. Anselm and St. Thomas Aquinas. A paradoxical proof of God’s existence was given by Jean Buridan: consider the following pair of sentences:
God exists. Neither of the sentences in this pair is true.
Since the second one cannot be true, the existence of God follows. Buridan was a true polymath, making significant contributions to multiple fields in the arts and sciences – a glamorous and mysterious figure in Paris life, though an ordained priest. His work on Physics was the first serious break with Aristotle’s cosmology; he introduced the concept of “inertia” and influenced Copernicus and Galileo. The leading Paris philosopher of the 14th Century, he is known for his work on the doctrine of Free Will, a cornerstone of Christianity. However, the name “Buridan” itself is actually better known for “Buridan’s Ass,” the donkey who could never choose which of two equally tempting piles of hay to eat from and died of starvation as a result – apparently a specious attribution contrived by his opponents, as this tale does not appear in any of Buridan’s writings. It appears to mock Buridan’s work on free will: Buridan taught that simply realizing which choice was evil and which was moral was not enough and that an actual decision still required an act of will.
Doubtlessly equally unfounded is the tale that Buridan was stuffed in a sack and drowned in the Seine by order of King Louis X because of his affair with the Queen, Marguerite of Burgundy – although this story was immortalized by the immortal poet François Villon in his Ballade des Dames du Temps Jadis, the poem in which the refrain is “Where are the snows of yester-year” (Mais où sont les neiges d’antan); in the poem Villon compares the story of Marguerite and Buridan to that of Héloïse and Abélard!
In the late Middle Ages, Ramon Llull, the Catalan polymath (father of Catalan literature, mathematician, artist whose constructions inspired work of superstar architect Daniel Libeskind), published his Ars Magna (1305), which described a mechanical method to help in arguments, especially in ones to win Muslims over to Christianity.
François Viète (aka Vieta in Latin) was another real polymath (lawyer, mathematician, Huguenot, privy councilor to kings). At the end of the 16th Century, he revolutionized Algebra, replacing the awkward Arab system with a purely symbolic one; Viète was the first to say “Let x be the unknown,” and he made Algebra a game of manipulating symbols. Before that, in working out an Algebra problem, one actually thought of 10² as a 10-by-10 square and 10³ as a 10-by-10-by-10 cube.
Llull’s work is referenced by Gottfried Leibniz, the German polymath (great mathematician, philosopher, diplomat) who in the 1670s proposed a calculus for philosophical reasoning based on his idea of a Characteristica Universalis, a perfect language which would provide a direct representation of ideas.
Leibniz also references Thomas Hobbes, the English polymath (philosopher, mathematician, very theoretical physicist). In 1655, Hobbes wrote : “By reasoning, I understand computation.” This assertion of Hobbes is the cornerstone of AI today; cast in modern terms: intelligence is an algorithm.
Blaise Pascal, the French polymath (mathematics, philosophy, theology), devised a mechanical calculation engine in 1645; in the 1800s, Charles Babbage and Ada Lovelace worked on a more ambitious project, the Analytical Engine, a proposed general computing machine.
Also in the early 1800s, there was the extraordinarily original work of Évariste Galois. He boldly applied one field of Mathematics to another, the Theory of Groups to the Theory of Equations. Of greatest interest here is that he showed that there were problems for which no appropriate algorithm existed. With his techniques, one can show, for example, that there is no general method to trisect an angle using a ruler and compass – Euclid’s Elements presents an algorithm of this type for bisecting an angle. Tragically, Galois was embroiled in the violent politics surrounding the overthrow of Charles X and was killed in a duel at the age of twenty in 1832. He is considered to be the inspiration for the young hero of Stendhal’s novel Lucien Leuwen.
Later in the 19th Century, we have George Boole, whose calculus of Propositional Logic is the basis on which computer chips are built, and Gottlob Frege, who dramatically extended Boole’s Logic to First Order Logic, which allowed for the development of systems such as Alfred North Whitehead and Bertrand Russell’s Principia Mathematica and other Set Theories; these systems provide a framework for axiomatic mathematics. Russell was particularly excited about Frege’s new logic, which is a notable advance over Aristotle: while Aristotle could prove that Socrates was mortal, the syllogism cannot deal with binary relations as in
“All lions are animals; therefore the tail of a lion is a tail of an animal.”
Aristotle’s syllogistic Logic is still de rigueur, though, in the Vatican where some tribunals yet require that arguments be presented in syllogistic format!
Things took another leap forward with Kurt Gödel’s landmark On Formally Undecidable Propositions of Principia Mathematica and Related Systems, published in 1931 (in German). In this paper, Gödel builds a programming language and gives a data structuring course where everything is coded as a number (formulas, the axioms of number theory, proofs from the axioms, properties like “provable formula”, …). Armed with the power of recursive self-reference, Gödel ingeniously constructed a statement about numbers that asserts its own unprovability. Paradox enters the picture in that “This statement is not provable” is akin to “This sentence is false” as in the Liar’s Paradox. All this self-reference is possible because with Gödel’s encoding scheme everything is a number – formulas, proofs, etc. all live in the same universe, so to speak. First Order Logic and systems like Principia Mathematica make it possible to apply Mathematics to Mathematics itself (aka Metamathematics) which can turn a paradox into a theorem.
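The flavor of coding everything as a number can be sketched with prime powers – this miniature scheme is a simplification in the spirit of Gödel’s construction, not his exact one:

```python
# Encode a sequence of symbol codes as one number via prime powers,
# and recover the sequence by factoring. E.g. [1, 0, 2] -> 2^2 * 3^1 * 5^3.

def primes(n):
    # First n primes by trial division (fine for tiny examples).
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(seq):
    number = 1
    for p, s in zip(primes(len(seq)), seq):
        number *= p ** (s + 1)   # +1 so a code of 0 still leaves a factor
    return number

def decode(number):
    # Every position's prime divides the number (guaranteed by the +1 above).
    seq, p = [], 2
    while number > 1:
        exp = 0
        while number % p == 0:
            number //= p
            exp += 1
        seq.append(exp - 1)
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1   # advance to the next prime
    return seq

print(encode([1, 0, 2]))  # 2**2 * 3**1 * 5**3 = 1500
print(decode(1500))       # recovers [1, 0, 2]
```

Because encoding and decoding are mechanical, statements *about* formulas and proofs become ordinary statements about numbers – the door through which self-reference enters.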
For a theorem of ordinary mathematics that is not provable from the axioms of number theory but is not self-referential, Google “The Kanamori-McAloon Theorem.”
The Incompleteness Theorem rattled the foundations of mathematics but also sowed the seeds of the computer software revolution. Gödel’s paper was quickly followed by several different formulations of a mathematical model for computability, a mathematical definition of the concept of algorithm: the Herbrand-Gödel Recursive Functions, Church’s lambda-calculus, Kleene’s μ-recursive functions – all were quickly shown to be equivalent, that any algorithm in one model had an equivalent algorithm in each of the others. That these models must capture the notion of algorithm in its entirety is known as Church’s Thesis or the Church-Turing Thesis.
In 1936 Alan Turing published a paper “On Computable Numbers with an Application to the Entscheidungsproblem” – German was still the principal language for science! Here Turing presented a new mathematical model for computation – the “automatic machine” in the paper, the “Turing Machine” today. Turing proved that this much simpler model of computability, couched in terms of a device manipulating 0s and 1s, is equivalent to the other schemes. Furthermore, Turing demonstrated the existence of a Universal Turing Machine which can emulate the operation of any Turing machine M given the description of M in 0s and 1s along with the intended input: the Universal Turing Machine deciphers the description of M and performs the same operations on the input as M would, yielding the same output. This is the inspiration for stored programming – in the early days of computing machinery one had to rewire the machine, change external tapes, swap plugboards or reset switches if the problem was changed; with Turing’s setup, you just input the algorithm along with the data into the memory of the machine – the algorithm and the data live in the same universe. In 1945, John von Neumann joined the team under engineers John Mauchly and J. Presper Eckert and wrote up a report on the design of a new digital computer, “First Draft of a Report on the EDVAC.” Stored programming was the crucial innovation of the Von Neumann Architecture. Because of the equivalence of the mathematical models of computability and Church’s Thesis, von Neumann also knew that this architecture captured all possible algorithms that could be programmed by machine – subject only to limitations of speed and size of memory. (In the future, though, quantum computing could challenge Church’s Thesis.)
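The universal-machine idea – a program that takes another machine’s description as mere data and executes it – can be sketched in a few lines of Python; the bit-flipping machine below is a made-up example:

```python
# A tiny Turing-machine interpreter. The "machine" is just data (a transition
# table); the interpreter reads it and acts on the tape -- the same idea the
# Universal Turing Machine and stored programming rest on.

def run(table, tape, state="start"):
    tape = dict(enumerate(tape))   # sparse tape; missing cells read as blank
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# A machine that flips every bit and halts at the first blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "1011"))  # -> 0100
```

Note that `run` never needs to know what `flipper` computes – swap in a different transition table and the same interpreter executes a different algorithm, which is stored programming in miniature.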
With today’s computers, it is the operating system (Windows, Unix, Android, macOS, …) that plays the role of the Universal Turing Machine. Interestingly, the inspiration for the Turing Machine was not a mechanical computing engine but rather the way a pupil in an English school of Turing’s time used a European-style notebook with graph paper pages to do math homework.
BTW, Gödel and Turing have both made it into motion pictures. Gödel is played by Lou Jacobi in the rom-com I.Q. and Turing is played by Benedict Cumberbatch in The Imitation Game, a movie of a more serious kind.
Computer pioneers were excited by the possibility of Artificial Intelligence from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test where, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” (whence the movie title). Futurologist Ray Kurzweil (now at Google) has predicted that a machine will pass the Turing Test by the year 2029. But gurus have been wrong in the past. Nobel Prize winning economist and AI pioneer, Herbert Simon, boldly predicted in 1957 that computer chess programs would outperform humans within “ten years” but that was wrong by some thirty years!
In his 1951 talk at the University of Manchester entitled Intelligent Machinery: A Heretical Theory, Turing spoke of machines that will eventually surpass human intelligence: “once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” From the beginning, the Singularity was viewed with a mixture of wonder and dread.
But the field of AI wasn’t formally founded until 1956; it was at a summer research conference at Dartmouth College, in Hanover, New Hampshire, that the term Artificial Intelligence was coined. Principal participants at the conference included Herbert Simon as well as fellow scientific luminaries Claude Shannon, John McCarthy and Marvin Minsky.
Today, Artificial Intelligence is simultaneously bringing progress and miracles to humankind on the one hand and representing an existential threat to humanity on the other. Investment in AI research is significant and it is proceeding apace in industry and at universities; the latest White House budget includes $1.1B for AI research (NY Times, Feb. 17, 2020) reflecting in part the interest of the military in all this.
The principal military funding agency for AI has been the Defense Advanced Research Projects Agency (DARPA). According to a schema devised by DARPA people, AI has already gone through two phases and is at the beginning of the 3rd Phase now. The Singularity is expected in a 4th Phase which will begin around 2030 according to those who know.
Futurology is the art of predicting the technology of the future.
N.B. We say “futurology” because the term “futurism” denotes the Italian aesthetic movement “Il Futurismo”: it began with manifestos – Manifesto del Futurismo (1909), which glorified the technology of the automobile and its speed and power, followed by two manifestos on technology and music, Musica Futurista (1912) and L’arte dei Rumori (1913). The movement’s architectural aesthetic can be appreciated at Rockefeller Center; its members also included celebrated artists like Umberto Boccioni whose paintings are part of the permanent collection of the MoMA in New York.
We live in an age of accelerating technological forward motion and this juggernaut is hailed as “bearer of the future.” However, deep down, genuine distrust of science and “progress” has always been there. Going back in history, profound discomfort with technology is expressed in the Greek and Roman myths. The Titan Prometheus brings fire to mankind but he is condemned by the gods to spend eternity with an eagle picking at his liver. Vulcan, the god of fire, is a master craftsman who manufactures marvels at his forge under the Mt. Etna volcano in Sicily. But Vulcan is a figure of scorn: he is homely with a permanent limp for which he is mocked by the other gods; though married to Venus, he is outrageously cuckolded by his own brother Mars; for Botticelli’s interpretation of Olympian adultery, click HERE.
More recently, there is the myth of Frankenstein and its terrors. Then there is the character of the Mad Scientist in movies, magazines and comic books whose depiction mirrors public distrust of what technology is all about.
For all that, in today’s world, even the environmentalists do not call for a return to more idyllic times; rather they want a technological solution to the current crisis – for example, The Green New Deal. Future oriented movements like Accelerationism also call upon free-market capitalism to push change harder and harder rather than wanting to retreat to an earlier bucolic time.
The only vocal animosity towards science and technology comes from Donald Trump and his Republican spear carriers, but theirs is opportunistic and dishonest, not something they actually believe in.
Futurology goes back at least to the 19th century with Jules Verne and his marvelous tales of submarines and trips to the moon. H.G. Wells too left an impressive body of work dealing with challenges that might be in the offing. In a different vein, there are the writings of Teilhard de Chardin, whose noosphere is a predictor of where the world wide web and social media might be taking us – one unified super-mind. In yet another style, there are the books of the Tofflers from the 1970s such as Future Shock, which among other things dealt with humanity’s struggle to cope with the endless change to daily life fueled by technology, change at such a speed as to make the present never quite real.
For leading technologist and futurologist Ray Kurzweil, for the Accelerationists and for most others, the vector of technological change has been free-market capitalism. Another vehicle of technological progress, to some a most important one, is warfare. Violence between groups is not new to our species. Indeed, anthropologists point out that inter-group aggression is also characteristic of our closest relatives, the chimpanzees – so all this likely goes way back to our common ancestor. The evolutionary benefit of such violence is a topic of debate and research among social scientists. The simplest and most simple-minded explanation is that the more fit, surviving males had access to more females and so more offspring. One measure of the evolutionary importance of fighting among males for reproductive success is the relative size of males and females. In elephant seals, where the males stage mammoth fights for the right to mate, the ratio is 3.33 to 1.0; in humans it is roughly 1.15 to 1.0 – this modest ratio implies that the simple-minded link between warfare and reproductive success cannot be the whole story.
Historically, the practice of war has hewn closely to developments in technology. And warfare, in turn, has made demands on technology. Indeed, even men of genius like Archimedes and Leonardo da Vinci developed weapons systems. However, the relationship between matters military and technology became almost symbiotic with WWII. Technological feats such as nuclear power, rockets, missiles, jet planes and the digital computer are all associated with the war efforts of the different powers of that conflict. Certainly, the fundamental research and engineering behind these achievements was well underway in the 1930s, but the war efforts determined priorities and thus which areas of technology received resources and funding, thereby creating remarkable concentrations of brilliant scientific talent. The Manhattan Project itself is studied as a model of large scale R&D; furthermore, the industrial organization of the war period and military operations such as countering submarine warfare gave rise to a new mathematical discipline, aptly called Operations Research, which is now taught in Business Schools under the name Management Science.
In his masterful treatise War in the Age of Intelligent Machines (1991), Manuel DeLanda summarizes it thus: “The war … forged new bonds between the military and scientific communities. Never before had science been applied at so grand a scale to such a variety of warfare problems.”
Since WWII we have been in a “relatively” peaceful period. But the technological surge continues. Perhaps we are just coasting on the momentum of the military R&D that followed WWII – the internet, GPS systems, Artificial Intelligence, etc. However, military funding might be skewing technological progress today in less fruitful directions than capitalism or science-as-usual itself would. Perhaps this is why post WWII technological progress has fueled the growth of paramilitary surveillance organizations such as the CIA and NSA and perfected drones rather than addressing the environmental crisis.
Moreover, these new technologies are transforming capitalism itself: the internet and social media and big data have given rise to surveillance capitalism, the subject of a recent book by Harvard Professor Emerita Shoshana Zuboff, The Age of Surveillance Capitalism: our personal behavioral data are amassed by Alexa, Siri, Google, Facebook et al., analyzed and sold for targeted advertising and other feeds to guide us in our lives; this is only going to get worse as the internet of things puts sensing and listening devices throughout the home. The 18th Century Utilitarian philosopher Jeremy Bentham promoted the idea of the panopticon, a prison structured so that the inmates would be under constant surveillance by unseen guards – click HERE. To update a metaphor from French post-modernist philosopher Michel Foucault, with surveillance technology we have created our own panopticon, one in which we dwell quietly and willingly as our every keystroke, every move is observed. An example: as one researches work on machine intelligence on the internet, Amazon drops ads for books on the topic (e.g. The Sentient Machine) onto one’s Facebook page!
The futurologists and Accelerationists, like many fundamentalist Christians, await the coming of the new human condition – for fundamentalists this will happen at the Second Coming; for the others the analog of the Second Coming is the singularity – the moment in time when machine intelligence surpasses human intelligence.
In Mathematics, a singularity occurs at a point that is dramatically different from those around it. John von Neumann, a mathematician and computer science pioneer (who worked on the Manhattan Project), used this mathematical term metaphorically: “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” For von Neumann, the singularity will be the moment when “technological progress will become incomprehensibly rapid and complicated.” Like von Neumann, Alan Turing was a mathematician and a computer science pioneer; famous for his work on breaking the German Enigma Code during WWII, he is the subject of plays, books and movies. In 1951, Turing wrote “once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control … .” The term singularity was then used by Vernor Vinge in an article in Omni Magazine in 1983, a piece that develops von Neumann’s and Turing’s remarks further: “We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.” The concept of the singularity was brought into the mainstream by the work of Ray Kurzweil with his book The Singularity Is Near (2005).
Kurzweil emphasizes that technological development grows exponentially. The most famous example of exponential technological growth is Moore’s Law: in 1975, Gordon E. Moore, a founder of INTEL, noted that the number of transistors on a microchip was doubling every two years even as the cost was being halved – and that this was likely to continue. Amazingly this prediction has held true into the 21st Century and the number of transistors on an integrated circuit has gone from 5 thousand to 1 billion: for a graph, click HERE. Another example of exponential growth is given by compound interest: at 10% compounded annually, your money doubles in a little over 7 years, quadruples in about 15 and so on.
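The arithmetic of exponential growth can be made concrete with a short sketch; the starting transistor count of 5,000 and the two-year doubling period are the figures cited above, used purely for illustration:

```python
import math

def doubling_time(rate: float) -> float:
    """Years for a quantity growing at `rate` per year to double."""
    return math.log(2) / math.log(1 + rate)

def moore_projection(start: int, years: float, period: float = 2.0) -> int:
    """Moore's-law style projection: the count doubles every `period` years."""
    return int(start * 2 ** (years / period))

# Money at 10% compounded annually doubles in a bit over 7 years.
print(round(doubling_time(0.10), 1))   # 7.3

# 5,000 transistors doubling every two years for 36 years: ~1.3 billion.
print(moore_projection(5_000, 36))     # 1310720000
```

The banker’s “rule of 72” gives the same answer quickly: 72 divided by the interest rate in percent (72/10 ≈ 7.2 years) approximates the doubling time.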
Kurzweil argues that exponential growth also applies to many other areas, indeed to technology as a whole. Here his thinking is reminiscent of that of the French post-structuralist accelerationists Deleuze and Guattari who also view humanity-cum-technology as a grand bio-physical evolutionary process. To make his point, Kurzweil employs compelling charts and graphs to illustrate that growth is indeed exponential (click HERE); because of this the future is getting closer all the time – advances that once would have been the stuff of science fiction can now be expected in a decade or two. So when the first French post-structuralist, post-modern philosophers began calling for an increase in the speed of technological change to right society’s ills in the early 1970s, the acceleration had already begun!
But what will happen as we go past the technological singularity? Mystère. More to come.
Accelerationism is a philosophical movement that emerged from the work of late 20th century disillusioned Marxist-oriented French philosophers confronted with the realization that capitalism cannot be controlled by current political institutions nor supplanted by the long-awaited revolution. For centuries now, the driving force of modernity has been capitalism, the take-no-prisoners social-economic system that produces ever faster technological progress with dramatic physical and social side-effects – the individual is disoriented; social structure is weakened; the present yields constantly to the onrushing future; “the center cannot hold.” However, for the accelerationists, the response is not to slow things down and return to a pre-capitalist past but rather to push capitalism to quicken the pace of progress so that a technological singularity can be reached – one where machine intelligence surpasses human intelligence and begins to spark its own development and that of everything else, at machine speed as opposed to the clumsy pace of development today. This goal will not be reached if human events or natural disasters dictate otherwise; speed is of the essence.
Nick Land, then a lecturer in Continental Philosophy at the University of Warwick in the UK, picked up on the work in France and published an accelerationist landmark in 1992, The Thirst for Annihilation: Georges Bataille and Virulent Nihilism. Land builds on the work of the French eroticist writer Georges Bataille and emphasizes that Accelerationism does not necessarily predict a “happy ending” for humanity: all is proceeding nihilistically, without direction or value, and humanity may be but a cog in the planetary process of Spaceship Earth. Accelerationism is thus different from Marxism, Adventism, Mormonism, Futurism – all optimistic forward-looking world views.
Land pushed beyond the boundaries of academic life and methodology. In 1995, he and colleague Sadie Plant founded the Cybernetic Culture Research Unit (CCRU), which became an intellectual warren of forward thinking young people – exploring themes such as “cyberfeminism” and “libidinal-materialist Deleuzian thinking.” Though disbanded by 2003, alums of the group have stayed the course and publish regularly in the present-day accelerationist literature. In fact, a collection of writings of Land himself has been published under the title Fanged Noumena and today, from his aerie in Shanghai, he comments on things via Twitter. (In Kant’s philosophy, noumena, as opposed to phenomena, are the underlying essences of things to which the human mind does not have direct access.)
Accelerationists have much in common with the Futurist movement: they expect the convergence of computer technology and medicine to bring us into the “bionic age” where a physical merge of man and robot can begin with chip-implantation, gene manipulation and much more. Their literature of choice is dystopian science-fiction, particularly the cyberpunk subgenre: William Gibson’s pioneering Neuromancer has the status of scripture; Rudy Rucker’s thoughtful The Ware Tetralogy is required reading and Richard Morgan’s ferocious Market Forces is considered a minor masterpiece.
Accelerationism is composed today of multiple branches.
Unconditional Accelerationism (aka U/Acc) is the most free-form, the most indifferent to politics. It celebrates modernity and the wild ride we are on. It tempers its nihilism with a certain philosophical playfulness and its mantra, if it had one, would be “do your own thing”!
Left Accelerationism (aka L/Acc) harkens back to Marx as precursor: indeed, Marx did not call for a return to the past but rather claimed that capitalism had to move society further along until it had created the tools – scientific, industrial, organizational – needed for the new centralized communist economy. Even Lenin wrote (in his 1918 text “Left Wing” Childishness)
Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.
So Lenin certainly realized that Holy Russia was nowhere near the level of industrialization and organization necessary for a Marxist revolution in 1917 but plunge ahead he did. Maybe that venerable conspiracy theory where Lenin was transported back to Russia from Switzerland by the Germans in order to get the Russians out of WW I has some truth to it! Indeed, Lenin was calling for an end to the war even before returning; with the October Revolution and still in the month of October, Lenin proposed an immediate withdrawal of Russia from the war which was followed by an armistice the next month between Soviet Russia and the Central Powers. All this freed up German and Austrian men and resources for the Western Front.
An important contribution to L/Acc is the paper of Alex Williams and Nick Srnicek (Manifesto for an Accelerationist Politics, 2013) in which they argue that “accelerationist politics seeks to preserve the gains of late capitalism while going further than its value system, governance structures, and mass pathologies will allow.” Challenging the conceit that capitalism is the only system able to generate technological change at a fast enough speed, they write “Our technological development is being suppressed by capitalism, as much as it has been unleashed. Accelerationism is the basic belief that these capacities can and should be let loose by moving beyond the limitations imposed by capitalist society.” They dare to boldly go beyond earthbound considerations asserting that capitalism is not able to realize the opening provided by space travel nor can it pursue “the quest of Homo Sapiens towards expansion beyond the limitations of the earth and our immediate bodily forms.” The Left accelerationists want politics and the acceleration, both, to be liberated from capitalism.
Right Accelerationism (aka R/Acc) can claim Nick Land as one of its own – he dismisses L/Acc as warmed over socialism. In his frank, libertarian essay, The Dark Enlightenment (click HERE), Land broaches the difficult subject of Human Bio-Diversity (HBD) with its grim interest in biological differences among human population groups and potential eugenic implications. But Land’s interest is not frivolous and he is dealing with issues that will have to be encountered as biology, medicine and technology continue to merge and as the cost of bionic enhancements drives a wedge between social classes and racial groups.
This interest of the accelerationists in capitalism brings up a “chicken or egg” problem: Which comes first – democratic political institutions or free market capitalism?
People (among them the L/Acc) would likely say that democracy has been necessary for capitalism to develop, having in mind the Holland of the Dutch Republic with its Tulip Bubble, the England of the Glorious Revolution of 1688 which established the power of parliament over the purse, the US of the Founding Fathers. However, 20th Century conservative thinkers such as Friedrich Hayek and Milton Friedman argued that free markets are a necessary precondition for democracy. Indeed, the case can be made that even the democracy of Athens and other Greek city states was made possible by the invention of coinage by the neighboring Lydians of Midas and Croesus fame: currency led to a democratic society built around the agora/marketplace and commerce rather than the palace and tribute.
In The Dark Enlightenment, Land also pushes the thinking of Hayek and Friedman further and argues that democracy is a parasite on capitalism: with time, democratic government contributes to an ever growing and ever more corrupt state apparatus which is inimical to capitalism and its accelerationist mission. In fact, Land and other accelerationists put forth the thesis that societies like China and Singapore provide a better platform for the acceleration required of late capitalism: getting politics out of everyday life is liberating – if the state is well run and essential services are provided efficiently, citizens are free to go about the important business of life.
An historical example of capitalism in autocratic societies is provided by the German and Austro-Hungarian empires of the half century leading up to WWI: it was in this world that the link was made between basic scientific research (notably at universities) and industrial development that continues to be a critical source of new technologies (the internet is an example). In this period, the modern chemical and pharmaceutical industries were created (Bayer aspirin and all that); the automobile was pioneered by Karl Benz’ internal combustion engine and steam power was challenged by Rudolf Diesel’s compression-ignition engine. Add the mathematics (Cantor and new infinities, Riemann and new geometries), physics (Hertz and radio waves, Planck and quantum mechanics, Einstein and relativity), the early Nobel prizes in medicine garnered by Koch and Ehrlich (two heroes of Paul De Kruif’s classic book Microbe Hunters), the triumphant music (Brahms, Wagner, Bruckner, Mahler). Certainly this was a golden age for progress, an example of how capitalism and technology can thrive in autocratic societies.
Starkly, we are now in a situation reminiscent of the first quarter of the 20th Century – two branches of capitalism in conflict, the one led by liberal democracies, the other by autocratic states (this time China and Singapore instead of Germany and Austria). For Land and his school, the question is which model of capitalism is better positioned to further the acceleration; for them and the rest of us, the question is how to avoid a replay of the Guns of August 1914, all the pieces being ominously in place.
In the US, the Constitution plays the role of sacred scripture and the word unconstitutional has the force of a curse. The origin story of this document begins in Philadelphia in 1787 with the Constitutional Convention. Jefferson and Adams, ambassadors to France and England, did not attend; Hamilton and Franklin did; Washington presided. It was James Madison who took the lead and addressed the problem of creating a strong central government that would not turn autocratic. Indeed, Madison was a keen reader of the Roman historian Tacitus, who pitilessly described how Roman Senators became sniveling courtiers as the Roman Republic gave way to the Roman Empire. Madison also drew on ideas of the Enlightenment philosopher Montesquieu and, in the Federalist Papers, he refined Montesquieu’s “separation of powers” and enunciated the principle of “checks and balances.”
A balance between large and small states was achieved by means of the Connecticut Compromise: a bicameral legislature composed of the Senate and the House of Representatives. As a buffer against “mob rule,” the Senators would be appointed by the state legislatures. However, the House created the problem of computing each state’s population for the purpose of determining representation. The resulting Three-Fifths Compromise stipulated that 3/5ths of the slave population in a state would count toward the state’s total population. This created the need for an electoral college to elect the president, since enslaved African-Americans would not each have three-fifths of a vote!
In September 1787, a modest four page document (without mention of the word Democracy, without a Bill of Rights, without provision for judicial review but with guidelines for impeachment) was submitted to the states; upon ratification the new Congress was seated and George Washington became President in the spring of 1789.
While the Constitution is revered today, it is not without its critics – it makes it too hard to represent the will of the people to the point where the American electorate is one of the most indifferent in the developed world (26th out of 32 in the OECD, the bottom 20%). Simply put, Americans don’t vote!!
For example, the Constitution provides for an Amendment process that requires ratification by 3/4ths of the states. Today the vestigial Electoral College makes a vote for president in Wyoming worth over one and a half times one in Delaware: both states have 3 electors and Delaware’s population is roughly 60% larger than Wyoming’s. If you do more math, you’ll find that a presidential vote in Wyoming is worth 3.5 times one in Brooklyn and nearly 4 times one in California. Change would require an amendment; however any 13 states can block one and the 13 smallest states, with barely 4% of the population, would not find it in their interest to alter the current system.
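The per-vote arithmetic behind such comparisons can be sketched as follows; the elector counts and populations are rough 2010-era approximations chosen for illustration, not official census figures:

```python
# Weight of one presidential vote = electors per resident,
# normalized against California. Populations are rough
# illustrative figures (an assumption, not census data).
states = {
    "Wyoming":    {"electors": 3,  "population": 568_000},
    "California": {"electors": 55, "population": 37_300_000},
}

weight = {name: s["electors"] / s["population"] for name, s in states.items()}
ratio = weight["Wyoming"] / weight["California"]
print(f"A Wyoming vote is worth about {ratio:.1f}x a California vote")
```

With these figures the ratio comes out to roughly 3.6, in line with the “nearly 4 times” estimate; swapping in other states’ elector counts and populations reproduces the rest of the comparison.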
Another issue is term limits for members of Congress, something supported by the voters. It can be in a party’s interest to have senators and representatives with seniority so they can accede to powerful committee chairmanships; this is the old Dixiecrat strategy that kept Strom Thurmond in the Senate until he was over 100 years old – but then the root of the word “senator” is the Latin “senex” which does mean “old man.” The Constitution, however, does provide for a second way to pass an amendment: 34 state legislatures would have to vote to hold a constitutional convention; this method has never been used successfully, but a feisty group “U.S. Term Limits” is trying just that.
The Constitution leaves running elections to the states and today we see widespread voter suppression, gerrymandering, etc. The lack of federal technical standards gave us the spectacle of “hanging chads” in Florida in the 2000 presidential election and has people rightly concerned about foreign interference in the 2020 election.
Judicial review came about by fiat in 1803 when John Marshall’s Supreme Court ruled a section of an act of Congress to be unconstitutional: an action itself rather extra-constitutional given that no such authority was set down in the Constitution! Today, any law passed has to go through an interminable legal process. With the Supreme Court politicized the way it is, the most crucial decisions are thus regularly made by five unelected, high church (four Catholics, one Catholic become Episcopalian), male, ideologically conservative, elitist, lifetime appointees of Republican presidents.
The founding fathers did not imagine how powerful the judicial branch of government would become; in fact, Hamilton himself provided assurances that the judiciary would always be the weakest partner in his influential tract Federalist 78. However, a recent (2008) malign example of how the Constitution does not provide protection against usurpation of power by the Supreme Court came in District of Columbia v. Heller, where over two hundred years of common understanding were jettisoned when the reference to “militia” in the 2nd amendment was declared irrelevant: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” What makes it particularly outrageous is that this interpretation was put forth as an example of “originalism,” where the semantics of the late 18th Century are to be applied to the text of the amendment; quite the opposite is true: Madison’s first draft made it clear that the military connection was the motivating one, to the point where he added an exclusion for pacifist Quakers:
“The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.”
Note too that Madison implies in the original text and in the shorter final text as well that “the right to bear arms” is a collective military “right of the people” rather than an individual right to own firearms – one doesn’t “bear arms” to go duck hunting, not even in the 18th Century. As a result of the Court’s wordplay, today American children go to school in fear; the repeated calls for “thoughts and prayers” have become a national ritual – a sick form of human sacrifice, a reenactment of King Herod’s Massacre of the Innocents.
Furthermore, we now have an imperial presidency; the Legislative Branch is still separate but no longer equal: the Constitution gives only Congress the right to levy tariffs or declare war but, for some administrations now, the president imposes tariffs, sends troops off to endless wars, and governs largely by executive order. All “justified” by the need for efficient decision-making – but, as Tacitus warned, this is what led to the end of the Roman Republic.
The discipline of Philosophy has been part of Western Culture for two and a half millennia now, from the time of the rise of the Greek city states to the present day. Interestingly, a new philosophical system often arises in anticipation of new directions for society and for history. Thus the Stoicism of Zeno and Epictetus prepared the elite of the Mediterranean world for the emerging Roman imperium with its wealth and with its centralization of political and military power. The philosophy of St. Augustine locked Western Christianity into a stern theology which served as an anchor throughout the Middle Ages and then as a guide for reformers Wycliffe, Luther and Calvin. The philosopher Descartes defined the scientific method and the scientific revolution followed in Europe. Hegel and Marx applied dialectical thinking to human history and economics as the industrial revolution created class warfare between labor and capital. The logical philosophy of Gottlob Frege and Bertrand Russell set the stage for the work of Alan Turing and the computer software revolution that followed.
Existentialism (with its rich literary culture of novels and plays, its cafés, its subterranean jazz clubs, its Gauloise cigarettes) steeled people for life in a Europe made absurd by two world wars and it paved the way for second wave feminism: Simone de Beauvoir’s magisterial work of 1949 The Second Sex (Le Deuxième Sexe) provided that existentialist rallying cry for women to take charge of their own lives: “One is not born a woman; one becomes a woman.” (On ne naît pas femme, on le devient.)
By the 1960s, however, French intellectual life was dominated by structuralism, a social science methodology which looks at society as very much a static field that is built on the persistent forms that characterize it. Even Marxist philosophers like Louis Althusser were now labeled structuralists. To some extent, structuralism’s influence was due to the brilliant writing of its practitioners, e.g. semiologist Roland Barthes and anthropologist Claude Lévi-Strauss: brilliance was certainly required to interest readers in the mathematical structure of kinship systems such as matrilateral cross-cousin marriage – an algorithm to maximize genetic diversity employed by small population groups.
Today the intellectual movement which most resembles past philosophical beacons of the future is known as Accelerationism. As a philosophy, Accelerationism has its roots in France in the period after the May ’68 student and worker uprising. The uprising led to barricades and fighting in the streets of Paris and to the largest general strike in the history of Europe. All of this brought the government to the bargaining table; the students and workers counted on the left-wing leadership of labor unions and Marxist oriented political parties to strike a deal for freedom and radical social progress to lead to a post-capitalist world. Instead this “leadership” was interested in more seats in parliament and incremental improvements – not any truly revolutionary change in society.
The take-away from May ’68 for Gilles Deleuze, Félix Guattari, Jean-François Lyotard and other post-structuralist French intellectuals was the realization that capitalism proved itself once again too powerful, too flexible, too unstoppable; its dominance could not be challenged by society in its present form.
The paradoxical response in the 1970s then was to call for an acceleration of the development of technologies and other forces of capitalist progress to bring society as rapidly as possible to a new place. In their 1972 work Anti-Oedipus, Deleuze and Guattari put it this way: “Not to withdraw from the process, but to go further, to ‘accelerate the process’, as Nietzsche put it: in this matter, the truth is that we haven’t seen anything yet.” This then is the fundamental tenet of Accelerationism – push technology to get us to the point where it enables us to get out from under current society’s Iron Heel, something we cannot do now. What kind of technologies will be required for this or best suited for this and how this new world will emerge from them are, naturally, core topics of debate. One much discussed and promising (also menacing) technology is Artificial Intelligence.
Deleuze and Guattari extend the notion of the Oedipus complex beyond the nuclear family and develop schizoanalysis to account for the way modern society induces a form of schizophrenia which helps the power structure maintain the steady biological/sociological/psychological march of modern capitalism. Their Anti-Oedipus presents a truly imaginative and innovative way of looking at the world, a poetic mixture of insights fueled by ideas from myriad diverse sources; as an example, they even turn to Americans Ray Bradbury, Jack Kerouac, Allen Ginsberg, Nicholas Ray and Henry Miller and to immigrants to America Marshall McLuhan, Charles Chaplin, Wilhelm Reich and Herbert Marcuse.
In Libidinal Economy (1974), Lyotard describes events as primary processes of the human libido – again “Freud on steroids.” It is Lyotard who popularized the term post-modern, which has since been applied to other post-structuralists such as Michel Foucault and Jacques Derrida.
Though boldly original, Accelerationism is very much a child of continental thinking in the great European philosophical tradition, a complex modern line of thought with its own themes and conflicts: what makes it most conflicted is its schizophrenic love-hate relation to capitalism; what makes it most contemporary is its attention to the role played by new technologies; what makes it most unsettling is its nihilism, its position that there is no meaning or purpose to human life; what makes it most radical is its displacement of humanity from center-stage and its abandonment of that ancient cornerstone of Greek philosophy: “Man is the measure of all things.”
By the 1980s, the post-structuralist vision of a society in thrall to capitalism was proving prophetic. What with Thatcher, Reagan, supply-side economics, the surge of the income gap, dramatic reductions in taxes (income, corporate and estate), the twilight of the labor unions and the fall of the Berlin Wall: a stronger, more flexible, neo-liberal capitalism was emerging – a globalized post-industrial capitalism, a financial capitalism, deregulated, risk welcoming, tax avoiding, off-shoring, outsourcing, … . In a victory lap in 1989, political scientist Francis Fukuyama published The End of History?; in this widely acclaimed article, Fukuyama announced that the end-point of history had been reached: market-based Western liberal democracy was the final form of human government – thus turning Marx over on his head, much the way Marx had turned Hegel over on his head! So “over” was Marxism by the 1980s that Marxist stalwart André Gorz (friend of Sartre, co-founder of Le Nouvel Observateur) declared in his Adieux au prolétariat that the proletariat was no longer the vanguard revolutionary class.
With the end of the Soviet Union in 1991, in Western intellectual circles, Karl Marx and his theory of the “dictatorship of the proletariat” gave way to the Austrian-American economist Joseph Schumpeter and his theory of capitalism’s “creative destruction”; this formula captures the churning of capitalism which systematically creates new industries and new social institutions that replace the old – e.g. Sears by Amazon, an America of farmers by an America of city dwellers. Marx argued that capitalism’s contradictions and failures would lead to its demise; Schumpeter, closer to the Accelerationists, argued that capitalism has more to fear from its triumphs: ineluctably the colossal success of capitalism hollows out the social institutions and mores which historically nurtured capitalism such as the nuclear family, church-going and the Protestant Ethic itself. Look at Western Europe today with its precipitously low birth-rate where capitalism is triumphant but where church attendance is reduced to three events: “hatch, match and dispatch,” to put it the playful way Anglicans do. But all this is not all bad from the point of view of Accelerationism – capitalism triumphant should better serve to “accelerate the process.”
At this point entering the 1990s, we have a post-Marxist, post-structuralist school of Parisian philosophical thought that is the preserve of professors, researchers, cultural critics and writers. In fact at that point in time, the movement (such as it was) was simply considered part of post-modernism and was not yet known as Accelerationism.
However, in its current form, Accelerationism has moved much closer to the futurist mainstream. Science fiction is taken very seriously as a source for insights into where things might be headed. In fact, the term Accelerationist itself originated in a 1967 sci-fi novel Lord of Light by Roger Zelazny where a group of revolutionaries wanted to take their society “to a higher level” through technology: Zelazny called them the “accelerationists.” But the name was not applied to the movement until much more recently, when it was so christened by Benjamin Noys, who went on to develop his critique in Malign Velocities: Accelerationism and Capitalism (2014).
In today’s world, the work of futurist writer Ray Kurzweil and the predictions of visionary Yuval Harari intersect the Accelerationist literature in the discussion of the transformation of human life that is coming at us. So how did Accelerationism get out of the salons of Paris and become part of the futurist avant-garde of the English-speaking world and even a darling of the Twitterati? Affaire à suivre, more to come.
In 2016, the State of Maine voted to apply ranked choice voting in congressional and gubernatorial elections and then in 2018 voted to extend this voting process to the allocation of its electoral college votes. Recently, the New York Times ran an editorial calling for the Empire State to consider ranked choice voting; in Massachusetts, there is a drive to collect signatures to have a referendum on this on the 2020 ballot. Ranked choice voting is used effectively in American cities such as Minneapolis and Cambridge and in countries such as Australia and Ireland. So what is it exactly? Mystère.
First let us discuss what it is not. In the UK and the US, elections are decided (with some exceptions) by plurality: the candidate who polls the largest number of votes is the winner even if this is not a majority. Although simple to administer, this can lead to unusual results. By way of example, in Maine in 2010, Republican Paul LePage was elected governor with 38% of the vote. He beat out the Independent candidate who won 36% and the Democratic candidate who won 19%.
One solution to the problems posed by plurality voting is to hold the vote in multiple rounds: if no one wins an absolute majority on the first ballot, then there must be more than two candidates and the candidate with the fewest votes, say Z, is eliminated and everybody votes again; this time Z’s voters will shift their votes to their second choice among the candidates. If no one gets a majority this time, repeat the process. Eventually, someone has to get a true majority.
Ranked choice voting is also known as instant-runoff voting: it emulates runoff elections but in a single round of balloting. First, if there are only two candidates to begin with, nothing changes – somebody will get a majority. Suppose there are 3 candidates – A, B and Z; then, on the ballot, each voter lists the 3 candidates in the order of that voter’s preference. First, the count is made of the number of first place votes each candidate received; if for one candidate that number is a majority, that candidate wins outright. Otherwise, the candidate with the fewest first place votes, say Z, is eliminated; now we add to A’s first place total the number of ballots that ranked Z first but that listed A as second choice and similarly for B. Now, except in the case of a tie, either A or B will have a clear majority and will be declared the winner. This will give the same result that staging a runoff between A and B would have yielded but in one trip to the voting booth where the voter has to rank the candidates A, B and Z on the ballot rather than choosing only one.
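The elimination-and-transfer procedure described above is mechanical enough to sketch in a few lines of Python. The candidate names and vote counts below are hypothetical, chosen only to illustrate how Z’s ballots transfer to their second choices; this is a minimal sketch, not a production vote-counting system (real elections must also handle ties and exhausted ballots per statute).

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting: each ballot ranks candidates from most to
    least preferred. Repeatedly eliminate the candidate with the fewest
    first-place votes until one candidate holds an absolute majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:          # strict majority: we have a winner
            return leader
        # Otherwise eliminate the weakest candidate and recount.
        candidates.discard(min(tally, key=tally.get))

# Hypothetical A/B/Z election: nobody has a majority on the first count,
# so Z (fewest first-place votes) is eliminated and Z's ballots
# transfer to their second choice, B.
ballots = (
    [("A", "B", "Z")] * 40 +   # 40 voters rank A first
    [("B", "A", "Z")] * 35 +   # 35 voters rank B first
    [("Z", "B", "A")] * 25     # 25 voters rank Z first, B second
)
print(instant_runoff(ballots))  # prints "B": 35 + 25 = 60 beats 40
```

Note that A leads the first count 40–35–25 yet loses the runoff, exactly the dynamic in the Maine congressional race discussed below: simulating the runoff in one ballot can produce a different winner than plurality would.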
There are other positive side-effects to ranked choice voting. For one thing, voter turnout goes up; another thing is that campaigns are less nasty and partisan – you want your opponents’ supporters to list you second on their ballots! One can also see how this voting system makes good sense for primaries where there are often multiple candidates; for example, with the current Democratic field of presidential candidates, ranked choice voting would give the voter a chance to express his or her opinion and rank a marginal candidate with good ideas first without throwing that vote away.
After the 2010 debacle in Maine (LePage proved a most divisive and most unpopular governor), the Downeasters switched to ranked choice voting. In 2018 in one congressional district, no candidate for the House of Representatives gathered an absolute majority on the first round but a different candidate who received fewer first place votes on that first round won on the second round when he caught up and surged ahead because of the number of voters who made him their second choice. Naturally, all this was challenged by the losing side but they lost in court. For elections, the U.S. Constitution leaves implementation to the states for them to carry out in the manner they deem fit – subject to Congressional oversight but not to judicial oversight. Per Section 4 of Article 1: “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, …”
Ranked voting systems are not new and have been a serious topic of interest to social scientists and mathematicians for a long time now – there is something mathematically elegant about the way you can simulate a sequence of runoffs in one ballot. Among them, there are the 18th Century French Enlightenment thinker, the Marquis de Condorcet, and the 19th Century English mathematician, Charles Lutwidge Dodgson, author of Dodgson’s Method for analyzing election results. More recently, there was the work of 20th Century mathematical economist Kenneth Arrow. For this and other efforts, Arrow was awarded a Nobel Prize; Condorcet had a street named for him in Paris; however, Dodgson had to take the pen name Lewis Carroll and then proceed to write Alice in Wonderland to rescue himself from the obscurity that usually awaits mathematicians.
St Augustine of Hippo (354-430) was the last great intellectual figure of Western Christianity in the Roman Empire. His writings on election and predestination, on original sin, on the theory of a just war and on the Trinity had a great influence on the medieval Church, in particular on St Thomas; he also had a great influence on Protestant reformers such as John Wycliffe and John Calvin. Augustine himself was influenced by the 3rd century Greek philosopher Plotinus and the Neoplatonists, influenced to the point where he ascribed to them some awareness of the persons of the Trinity (Confessions VIII.3; City of God X.23).
After the sack of Rome in 410, Augustine wrote his Sermons on the Fall of Rome. In these episcopal lectures, he absolves Christians of any role in bringing about the event that plunged the Western branch of the Empire into the Dark Ages, laying all the blame on the wicked, wicked ways of the pagans. European historians have begged to differ, however. By way of example, in his masterpiece, both of English prose and of scholarship, The History of the Decline and Fall of the Roman Empire, the great 18th century English historian Edward Gibbon does indeed blame Christianity for weakening the fiber of the people, hastening the Fall of Rome.
In his work on the Trinity, Augustine followed the Nicene formulation and fulminated against the heretics known as Arians who denied the divinity of Christ. Paradoxically, it was Arian missionaries who first reached many of the tribes of barbarians invading the Empire, among them the Vandals. The Vandal horde swept from Spain eastward along the North African coast and besieged St. Augustine’s bishopric of Hippo (today Annaba in Algeria). Augustine died during the siege and did not live to see the sack of the city.
By the time of St Augustine, in the Western Church the place of the Holy Spirit in the theology of the Holy Trinity was secured. But in popular culture, the role of the Holy Spirit was minor. Jesus and Mary were always front and center along with God the Father. To complicate matters, there emerged the magnificent doctrine of the Communion of Saints: the belief that all Christians, whether here on Earth, in Purgatory or in Heaven could communicate with one another through prayer. Thus, the faithful could pray to the many saints and martyrs who had already reached Heaven and the latter could intercede with God Himself for those who venerated them.
This is the internet and social media prefigured. The doctrine has other modern echoes in Jung’s Collective Unconscious (an inherited shared store of beliefs and instincts) and in Teilhard de Chardin’s noosphere (a collective organism of mind).
The origins of this doctrine are a problem for scholars. Indeed, even the first reference known in Latin to the “Communio sanctorum” is ascribed to Nicetas of Remesiana (ca. 335–414), a bishop from an outpost of the Empire on the Danube in modern-day Serbia who included it in his Instructions for Candidates for Baptism. Eventually, though, the doctrine made its way into the Greek and Latin versions of the Apostles Creed.
The wording “I believe … in the Communion of Saints” in the Apostles Creed is now a bedrock statement of Christian belief. However, this doctrine was not part of the Old Roman Creed, the earlier and shorter version of the Apostles Creed that dates from the second and third centuries. It also does not appear in the Nicene Creed. The earliest references to the Apostles Creed itself date from 390 and the earliest extant texts referencing the Communion of Saints are later still.
One school of thought is that the doctrine evolved from St Paul’s teaching that Christ and His Christians form a single mystical body (Romans 12.4-13, 1 Corinthians 12). Another candidate is this passage in the Book of Revelation 5.8 where the prayers of the faithful are collected in Heaven:
And when he had taken it, the four living creatures and the twenty-four elders fell down before the Lamb. Each one had a harp and they were holding golden bowls full of incense, which are the prayers of God’s people.
For an illustration from the Book of Hours of the Duc de Berry of St John imagining this scene as he wrote the Book of Revelation on the Isle of Patmos, click HERE.
The naïve view is that the doctrine of the Communion of Saints came from the ground up, from nascent folk Christianity where it helped to wean new converts from their native polytheism. Indeed, as Christianity spread, canonization of saints was a mechanism for absorbing local religious traditions and for making local martyrs and saintly figures recognized members of the Church Triumphant.
With the doctrine of the Communion of Saints, the saints in heaven could intercede for individual Christians with the Godhead; devotion to them developed, complete with hagiographic literature and a rich iconography. Thus, the most celebrated works of Christian art depict saints. Moreover, special devotions have grown up around popular patron saints such as Anthony (patron of lost objects), Jude (patron of hopeless cases), Jean-François Régis (patron of lacemakers), Patrick (patron of an island nation), Joan of Arc (patron of a continental nation), … .
The Holy Spirit, on the other hand, pops up in paintings as a dove here and there, at best taking a minor part next to John the Baptist and Jesus of Nazareth or next to the Virgin Mary and the Angel Gabriel. This stands in contrast with the Shekinah, the Jewish precursor of the Holy Spirit, who plays an important role in the Kabbalah and Jewish mysticism.
There is one area of Christianity today, however, where the Holy Spirit is accorded due importance: Pentecostal and Charismatic churches; indeed, the very word Pentecostal is derived from the feast of Pentecost, when the Holy Spirit descended upon the apostles in tongues of fire and inspired them to speak in tongues. In these churches, direct personal experience of God is reached through the Holy Spirit and His power to inspire prophecy and insight. In the New Testament, it is written that prophecy is in the domain of the Holy Spirit – to cite 2 Peter 1:21:
For no prophecy ever came by the will of man: but men spake from God, being moved by the Holy Spirit.
Speaking of prophecy, the Holy Spirit is not mentioned in the Book of Revelation which is most surprising since one would think that apocalyptic prophecy would naturally be associated with the Holy Spirit. For some, this is one more reason that the Council of Rome (382) under Pope St Damasus I should have thought twice before including the Book of Revelation in the canon. There are other reasons too.
Tertullian, the Father of Western Theology, is not a saint of the Catholic Church – he defended a Charismatic-Pentecostal approach to Christianity, Montanism, which was branded a heresy. This sect had three founders, Montanus and the two sibyls, Priscilla and Maximilla; the sibyls would prophesy when the Holy Spirit entered their bodies. Alas, there are no classical statues or Renaissance paintings honoring Priscilla and Maximilla; instead the Church treated them as “seductresses” who, according to Eusebius’ authoritative, 4th century Church History, “left their husbands the moment they were filled with the spirit.” No wonder then that Eusebius is known as the Father of Church History.
While we have no masterpieces depicting Priscilla or Maximilla, for a painting of the Sibyl at Delphi, click HERE.
For Tertullian and the Montanists, the Holy Spirit was sent by God the Son to continue the revelation through prophecy. Though steeped in Greek rationalism, Tertullian insisted on the distinction between faith and reason, on the fact that faith required an extra magical step: “I believe because it is absurd” – bolder even than Pascal. He broke with the main body of the Church saying that the role given to the Holy Spirit was too narrow – a position shared by Pentecostal Christians today. In fact, the Holy Spirit is key to Pentecostalism where the faithful are inspired with “Holy Ghost fire” and become “drunk on the Holy Spirit.” Given that this is the only branch of Western Christianity that is growing now as the others recede, it looks as though Tertullian was insightful and should have been listened to more carefully. Perhaps now, almost two millennia later, it is a good time for the Church of Rome to take up the subject of his canonization. Doing so today would go some way toward restoring the Holy Spirit to a rightful place in Christianity, aligning the Holy Spirit’s role with that of the continuing importance of the Shekinah in the Jewish tradition and restoring the spirit of early Christianity.
During Augustus’ reign as Emperor of the Roman Empire, the Pax Romana settled over the Mediterranean world – with the notable exception of Judea (Palestine, the Holy Land). After the beheading of John the Baptist and the Crucifixion of Jesus of Nazareth, unrest continued leading to the Jewish-Roman Wars (66-73, 115-117, 132-135), the destruction of the Temple in Jerusalem (70) and the forced exile of many Jews. Little wonder then that the early Gentile Christians disassociated themselves from Judaism and turned to Greek philosophical models to develop their new theology.
And with God and the Logos (the Word) of Platonism and Stoicism, the Greco-Roman intellectual world was in some sense “ready” for God the Father and God the Son. Indeed, the early Christians identified the Logos with the Christ. In the prologue of the Gospel of St. John in the King James Bible, verses 1 and 14 read
In the beginning was the Word, and the Word was with God, and the Word was God.
And the Word was made flesh, and dwelt among us, (and we beheld his glory, the glory as of the only begotten of the Father,) full of grace and truth.
Christians who undertook the task of explaining their new religion to the Greco-Roman world were known as apologists, from the Greek ἀπολογία meaning “speech in defence.” Thus, in following up on the Gospel of John, Justin Martyr (100-165), a most important 2nd century apologist, drew on Stoic doctrine to make Christian doctrine more approachable; in particular, he held that the Logos was present within God from eternity but emerged as a distinct actor only at the Creation – the Creation according to Genesis, that is. But while often referring to the Spirit, the Holy Spirit, the Divine Spirit and the Prophetic Spirit in his writings, Justin apparently never formulated a theory of the Trinity as such.
So from here, how did early Christians reach the elegant formulation of the doctrine of the Holy Trinity that is so much a part of Catholic, Protestant and Orthodox Christianity? Mystère.
The earliest surviving post-New Testament Christian writings that we have that include the Holy Spirit, the Father and the Son together in a trinity identify the Holy Spirit with Wisdom/Sophia. In fact, the first Christian writer known to use the term trinity was Theophilos of Antioch in about the year 170:
the Trinity [Τριάδος], of God, and His Word, and His wisdom.
In his powerful Against Heresies, Irenaeus (130-202) takes the position that God the Son and God the Holy Spirit are co-eternal with God the Father:
I have also largely demonstrated, that the Word, namely the Son, was always with the Father; and that Wisdom also, which is the Spirit, was present with Him, anterior to all creation,
In A Plea for Christians, the Athenian author Athenagoras (c. 133 – c. 190) wrote
For, as we acknowledge a God, and a Son his Logos, and a Holy Spirit, united in essence, the Father, the Son, the Spirit, because the Son is the Intelligence, Reason, Wisdom of the Father, and the Spirit an effluence, as light from fire
Here the “Wisdom of the Father” has devolved onto God the Son and the Holy Spirit is described simply as emanating from the Father. On the one hand, it is tempting to dismiss this theological shift on the part of Athenagoras. After all, he is not considered the most consistent of writers when it comes to sophiology, matters of Wisdom. To quote Prof. Michel René Barnes:
“Athenagoras has, scholars have noted, a confused sophiology: within the course of a few sentences he can apply the Wisdom of Prov. 8:22 to the Word and the Wisdom of Wisdom of Solomon 7:25 to the Holy Spirit.”
For the full text of Prof. Barnes’ interesting article, click HERE .
On the other hand, the view in A Plea for Christians took hold and going forward the Son of God, the Logos, was identified with Holy Wisdom; indeed, the greatest church of antiquity, the Hagia Sophia in Constantinople, was dedicated to God the Son and not to the Holy Spirit.
But Trinitarianism did not have the field to itself. For one thing, there was still Sabellianism where Father, Son and Holy Spirit were just “manners of speaking” about God. The fight against Sabellianism was led by Tertullian – Quintus Septimius Florens Tertullianus to his family and friends. This Doctor of the Church was the first writer to use the term Trinitas in Latin; he is considered the first great Western Christian theologian and he is known as the Father of the Latin Church. For Tertullian, a most egregious aspect of Sabellianism was that it implied that God the Father also suffered the physical torments of the cross, a heretical position known as patripassianism. Tertullian directly confronted this heresy in his work Contra Praxeas where he famously accused the eponymous target of his attack of “driving out the Holy Spirit and crucifying the Father”:
Paracletum fugavit et patrem crucifixit
Tertullian developed a dual view of the Trinity distinguishing between the “ontological Trinity” of one single being with three “persons” (Father, Son, Holy Spirit) and the “economic Trinity” which distinguishes and ranks the three persons according to each One’s role in salvation: the Father sends the Son for our redemption and the Holy Spirit applies that redemption to us. In the ontological Trinity, there is only one divine substance (substantia) which is shared and which means monotheism is maintained. Here Tertullian is using philosophy to underpin theology: his “substantia” is a Latin translation of the term used by Greek philosophers ουσία (ousia). Interestingly, Tertullian himself was very aware of the threat of philosophy infiltrating theology and he famously asked “What has Athens to do with Jerusalem?”
The Roman empire of the early Christian era was a cauldron of competing philosophical and religious ideas. It was also a time of engineering and scientific achievement: the invention of waterproof cement made great aqueducts and great domes possible; the Ptolemaic system of astronomy provided algorithms for computing the movements of the spheres (the advance that Copernicus made didn’t change the results but simplified the computations); Diophantus of Alexandria is known as the Father of Algebra, … . The level of technology developed at Alexandria in the Roman period was not reached again until the late Renaissance (per the great Annales school historian Fernand Braudel).
In the 3rd century, neo-Platonism emerged as an updated form of Greek philosophy – updated in that this development in Greek thought was influenced by relatively recent Greek thinkers such as the neo-Pythagoreanists and Middle Platonists and likely by others such as the Hellenized Jewish writer Philo of Alexandria, the Gnostics and even the Christians.
The principal architect of neo-Platonism, Plotinus (204–270), developed a triad of the One, Intellect, and Soul, in which the latter two “proceed” from the One, and “are the One and not the One; they are the One because they are from it; they are not the One, because it endowed them with what they have while remaining by Itself” (Enneads, 85). All existence comes from the productive unity of these three. Plotinus describes the elements of the triad as three persons (hypostases), and describes their sameness using homoousios, a sharper way of saying “same substance.” From neo-Platonism came the concept of the hypostatic union, a meld of two into one which Trinitarians would employ to explain how Christ could be both God and man at the same time.
So at this point, the Trinitarian position had taken shape: very roughly put, the three Persons are different but they are co-eternal and share the same substance; the Son of God can be both God and man in a hypostatic union.
But Trinitarianism was still far away from a final victory. The issue of the dual nature of God the Son as both God and man continued to divide Christians. The most serious challenge to the Trinitarian view was mounted in Alexandria: the bishop Arius (c. 250-c. 336) maintained that the Son of God had to be created by the Father at some point in time and so was not co-eternal with the Father nor was the Son of the same substance as the Father; a similar logic applied to the Holy Spirit. Arianism became widely followed, especially in the Eastern Greek Orthodox branch of the Church and lingered for centuries; in more recent times, Isaac Newton professed Arianism in some of his religious writings – these heretical documents were kept under wraps by Newton’s heirs for centuries and were only rediscovered in 1936 by John Maynard Keynes, who purchased them at auction!
While the Empire generally enjoyed the Pax Romana, at the highest levels there were constant struggles for supreme power – in the end, who had the loyalty of the Roman army determined who would be the next Emperor. The story that has come down to us is that in 312, as Constantine was on his way to fight his last rival in the Western Empire, Maxentius, he looked up into the sky and saw a cross and the Greek words “Εν Τούτῳ Νίκα” (which becomes “In Hoc Signo Vinces” in Latin and “In this sign, you will conquer” in English). With his ensuing victory at the Battle of the Milvian Bridge, Constantine gained control over the Western Roman Empire. The following year, with the Edict of Milan, Christianity was no longer subject to persecution and would be looked upon benevolently by Constantine. For a painting of the cross in heaven and the sign in Greek by the School of Raphael, click HERE and zoom in to see the writing on the sign.
Consolidating the Eastern and Western branches of the Empire, Constantine became sole emperor in 324. Now that Christianity was an official religion of the Empire, it was important that it be more uniform in dogma and ritual and that highly divisive issues be resolved. To that end, in 325, Constantine convened a Council at Nicea (modern Iznik, Turkey) to sort out all the loose ends of the very diverse systems of belief that comprised Christianity at that time. One of the disagreements to settle was the ongoing conflict between Arianism and Trinitarianism.
Here the council came down on the side of the Trinitarians: God has one substance but three persons (hypostases); though these persons are distinct, they form one God and so are all co-eternal. There is a distinction of rank to be made: God the Son and God the Holy Spirit both proceed from God the Father. This position was formalized by the Council of Nicea and refined at the Council of Constantinople (381). In the meantime, with the Edict of Thessalonica in 380, Theodosius I officially made Christianity the state religion of the Empire.
Still disagreements continued even among the anti-Arians. There is the interesting example of Marcellus of Ancyra (Ankara in modern Turkey), an important participant in the Council of Nicea and a resolute opponent of the Arians; Marcellus developed a bold view wherein the Trinity was necessary for the Creation and for the Redemption but, at the end of days, the three aspects (πρόσωπα prosopa but not ὑποστάσεις hypostases, persons) of the Trinity would merge back together. Marcellus’ position has a scriptural basis in St Paul’s assertion in 1 Corinthians 15:28:
… then the Son himself will be made subject to him [God] who put everything under him [the Son], so that God may be all in all.
This view also harkens back somewhat to Justin Martyr – in fact, writings now attributed to Marcellus were traditionally attributed to Justin Martyr! So, in Marcellus’ view, in the end Christ and the Holy Spirit will return into the Father, restoring the absolute unity of the Godhead. This line of thought opened Marcellus to the charge of Sabellianism; he also had the misfortune of having Eusebius, the Father of Church History, as an opponent and his orthodoxy was placed in doubt. For a tightly argued treatise on this illustrative chapter in Church History and for a tour of the dynamic world of 4th century Christian theologians, there is the in-depth study Contra Marcellum: Marcellus of Ancyra and Fourth-Century Theology by Joseph T. Lienhard S.J.
The original Nicene Creed of 325 as well as the updated version formulated at the Council of Constantinople of 381 had both the Holy Spirit and God the Son proceeding from God the Father. Whether the Holy Spirit proceeds from the Son as well as from the Father is a tough question for Trinitarianism; in Latin filioque means “and from the Son” and this phrase has been a source of great controversy in the Church. A scriptural justification for including the filioque in the Nicene Creed is found in John 20:22:
And with that he [Jesus] breathed on them and said, “Receive the Holy Spirit …”
Is the filioque a demotion for the Holy Spirit vis-à-vis God the Son? Or is it simply a way of organizing the economic Trinity of Tertullian? In the late 6th century, Western churches added the term filioque to the Nicene Creed but the Greek churches did not follow suit; this lingering controversy was an important issue in the Great Schism of 1054 which led to the definitive and hostile breakup of the two major branches of Christendom. This schism created a fault line in Europe separating Orthodox from Roman Christianity that has endured until modern times. Indeed, it was the massive Russian mobilization in July 1914 in support of Orthodox Christian Serbia which led directly to World War I.