AI V: The Random and the Quantum

As children, we first encounter randomness in flipping a coin to see who goes first in a game or in shuffling cards for Solitaire – nothing terribly dramatic. Biological evolution, on the other hand, uses randomness in some of the most important processes of life, such as the selection of genes for transfer from parent to child. Physical processes also exhibit randomness, such as the collisions of gas molecules or the “noise” in communication signals. And randomness is a key concept in Quantum Mechanics, the physics of the subatomic realm: the deterministic laws of Newton are replaced by assertions about likely outcomes formulated using mathematical statistics and probability theory – worse, one of the fundamental principles of Quantum Mechanics is called the Heisenberg Uncertainty Principle.
But randomness has given Artificial Intelligence (AI) researchers and others a rich set of new algorithmic tools and some real help in dealing with the issues raised by exponential growth and combinatorial explosion.
Indeed, with the advent of modern computing machinery, mathematicians and programmers quickly introduced randomization into algorithms; this is coeval with the Computer Age – mathematicians working laboriously by hand just are not physically able to weave randomness into an algorithmic process. Randomized algorithms require long sequences of random numbers (bits actually). But while nature manages to do this brilliantly, it is a challenge for programmers: computing pioneer John von Neumann pontifically said “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.” But sin they did, and techniques using random numbers, felicitously named “Monte Carlo algorithms,” were developed early on and quickly proved critical in the post-war nuclear weapons work at Los Alamos, notably in the design of the Hydrogen Bomb. (“Las Vegas algorithms” came in later!)
Monte Carlo methods are now in widespread use in fields such as physics, engineering, climate science and (wouldn’t you know it!) finance. In the Artificial Intelligence world, Monte Carlo methods are used in systems that play Go, Tantrix, Battleship and other games. Randomized algorithms are employed in machine learning, robotics and Bayesian networks. AI researchers have also reverse-engineered some of Nature’s methods and applied them to problems that are subject to combinatorial explosion. For example, the process of evolution itself uses randomness for mutations and gene crossover to drive natural selection; genetic algorithms, introduced by John Holland in the 1960s, use these stratagems to develop codes for all sorts of practical challenges – e.g. scheduling, routing, machine learning, etc.
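To make the flavor of these methods concrete, here is a minimal sketch (in Python, purely for illustration) of the simplest Monte Carlo computation there is: estimating π by throwing random points at a square and counting how many land inside the inscribed quarter circle.

    import random

    def estimate_pi(n_samples=1_000_000):
        """Monte Carlo estimate of pi: throw random points into the unit square
        and count how many land inside the quarter circle of radius 1."""
        inside = 0
        for _ in range(n_samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4 * inside / n_samples

    print(estimate_pi())   # typically prints something like 3.1423 or 3.1408

The accuracy improves only slowly (like the square root of the number of samples), but the method is simple, parallelizes beautifully and works in settings where exact analysis is hopeless.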
The phenomenon of exponential growth continues to impede progress in AI, in Operations Research and other fields. But hope springs eternal and perhaps the physical world can provide a way around these problems. And, in fact, quantum computing is a technology that has the potential to overcome the combinatorial explosion that limits the practical range of mathematical algorithms. The role of randomness in Quantum Mechanics is the key to the way Quantum Mechanics itself can deal with the problem of an exponential number of possibilities.
Just as the Theory of Relativity is modern physics at a cosmic scale, Quantum Mechanics is modern physics at the atomic and sub-atomic scale. A rich, complex, mind-bending theory, it started with the work of Max Planck in 1900 to account for the mysterious phenomenon of “black-body radiation” – mysterious in that the laws of physics could not account for it properly. Planck solved this complicated problem by postulating that energy levels could only increase in whole number multiples of a minimal increment, now called a quantum. Albert Einstein was awarded his Nobel prize for work in 1905 analyzing an equally mysterious phenomenon, the “photo-electric effect,” in terms of light quanta (aka photons): light is absorbed and emitted not continuously but in discrete packets, whole quanta at a time.
The first pervasive practical application of quantum mechanics was the television set – in itself a confounding tale of corporate gamesmanship and human tragedy; for the blog post on this click HERE. Then there were the laser, the transistor, the computer chip, etc.
Digital computers are not able to simulate quantum systems of any complexity since they are confronted with an exponential number of possibilities to consider, another example of combinatorial explosion. In a paper published in 1982, Simulating Physics with Computers, Nobel laureate Richard Feynman looked at this differently; since simulating a quantum experiment is a computational task too difficult for a digital computer, a quantum experiment itself must be actually performing a prodigious act of computation; he concluded that to simulate quantum systems you should try to build a new kind of computational engine, one based on Quantum Mechanics – quantum computers ! (Moreover, for the experts, the existing field of quantum interferometry with its multiparticle interference experiments would provide a starting point! In other words, they knew where to begin.)
In Quantum Mechanics, there is no “half a quantum” and energy jumps from 1 quantum to 2 quanta without passing through “1.5 quanta.” But there are more strange phenomena to deal with such as entanglement and superposition.
The phenomenon of entanglement refers to the fact that two particles prepared together in the same quantum state can be separated in space, and yet a measurement made on one instantly determines the corresponding outcome for the other; so although there is no physical link between them, the particles still influence each other. Here we have a return to the magical “action at a distance,” a concept that goes back to the middle ages and William of Ockham (of “razor” renown); but “action at a distance” was something thought banished from serious physics by Maxwell’s Equations in the 19th century. However, entanglement has been verified experimentally with photons, neutrinos, and other sub-atomic particles.
The phenomenon of superposition also applies to sub-atomic particles. But it is traditionally illustrated by the fable of Schrödinger’s Cat. There is a radioactive atom with a half-life of one hour, meaning that there is a 50% probability that it will decay within the hour – the decay is a random subatomic event that can happen at any time within that hour. A cat is in a box and an apparatus is set up so that the cat will be poisoned if and only if the atom actually decays; an outside observer will not know if any of this has happened. At the end of the hour, the apparatus that would poison the cat is turned off. Naively, when the hour is up, one would think that the cat in the box is either alive or dead. However, if the cat is thought of as a quantum system like an electron or photon, while waiting in the box the cat is in two superposed states simultaneously because of the 50% probability that it is alive and the 50% probability that it is dead; only when the diligent outside observer opens the box and the cat is seen does the superposition collapse into one state (alive) or the other (dead).
Ironically, this fable was intended to poke fun at the interpretation of Quantum Mechanics put forth by the Copenhagen School led by Niels Bohr but it has become the classic story for illustrating superposition.
Confusing? Well, Niels Bohr famously said “And anyone who thinks they can talk about quantum theory without feeling dizzy hasn’t yet understood the first word about it.” But physicists are proud of the “quantum weirdness” of entanglement and superposition and even have Bell’s Theorem, a fundamental result that proves that quantum weirdness unavoidably comes with the territory.
All this made Albert Einstein himself uncomfortable with Quantum Mechanics; he described entanglement as “spooky action at a distance.” He made light of the field itself for its reliance on probability and randomness, saying “God does not play dice with the universe,” to which Bohr responded “Einstein, don’t tell God what to do.”
Quantum computing is based on the qubit, a quantum system that emulates the {0,1} of the traditional binary bit of digital computers by means of two distinguishable quantum states. But, because qubits behave quantumly, one can capitalize on “quantum weirdness.”
Roughly put, a quantum computer works through all the possibilities in the search space in superposition without committing to any one of them; only at the end, when its state is measured (or observed), does it settle – as with Schrödinger’s Cat – into a definite value for the qubits, and the algorithm is arranged so that, with high probability, that value is the desired answer.
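A classical computer can at least imitate the bookkeeping for a single qubit; here is a toy sketch (in Python, with made-up amplitudes) of a superposed qubit and of the way a measurement collapses it to 0 or 1.

    import random

    # A qubit's state is a pair of amplitudes (a, b) for the basis states |0> and |1>,
    # with |a|**2 + |b|**2 = 1.  Measuring it yields 0 with probability |a|**2 and
    # 1 with probability |b|**2 -- and the superposition "collapses" to that outcome.
    def measure(a, b):
        return 0 if random.random() < abs(a) ** 2 else 1

    a = b = 2 ** -0.5          # an equal superposition, like the half-alive, half-dead cat
    counts = {0: 0, 1: 0}
    for _ in range(10_000):
        counts[measure(a, b)] += 1
    print(counts)              # roughly 5000 zeros and 5000 ones

The real power, of course, comes from many entangled qubits: n of them require 2^n amplitudes to track, which is exactly the exponential burden that swamps a digital simulation.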
In late 2019, Google announced a milestone in quantum computing: in 200 seconds their quantum system solved a problem that would take a traditional digital supercomputer about 10,000 years to complete! (The problem was a contrived one – sampling the outputs of random quantum circuits.) And they claimed Quantum Supremacy in the race with IBM. The latter contested this claim, of course, saying (among other things) that their digital computers could solve that problem in only 2.5 days! So the jury is still out on all this – but something is happening and both companies are investing most seriously in the field.
With quantum computing, Mathematical Logic might still play an important role in reaching the Technological Singularity (the point where machine intelligence will surpass human intelligence) if the impasse presented by combinatorial explosion can indeed be broken through. One thing is for sure: quantum computing would signal the end of the encryption technique which is the basis of the s (for secure) in “https://”; this technique (known by its authors’ initials as RSA) exploits the combinatorial explosion involved in factoring very large numbers and the concomitant clumsiness of today’s algorithms – but have no fear, Microsoft is already working on encryption which will be immune to quantum computing.
Scientists have long marveled at the extraordinary fit between mathematics and physics, from the differential equation to the geometry of space-time – to the extent that today theoretical physicists start by looking at the mathematics to point them in new directions. Quantum computing would be an elegant “thank you” on the part of physics; it would be a revolution in our understanding of algorithms and a challenge to the extended form of the Church-Turing Thesis – the claim that the Turing Machine model and, therefore, digital computers can efficiently carry out any computation that any reasonable machine can perform.
So after a bumpy start in the 1950s and ‘60s and a history of over-promising, AI appears to be well poised for the race to the Technological Singularity: the field has had several decades of accelerating progress; its effects can be seen everywhere, often anthropomorphized as with Siri and Alexa. So what can be expected in the next decade, in the 3rd Wave of AI? More to come. Affaire à suivre.

AI IV: Exponential Growth

The phenomenon of exponential growth is having an impact on the way Artificial Intelligence (AI) is bringing us to the Technological Singularity, the point at which machine intelligence will catch up to human intelligence – and surpass it quickly thereafter.
The phenomenon of exponential growth is also much with us today because of the coronavirus, whose spread gives a simple example: 1 person infects 2, 2 infect 4, 4 infect 8, and so on; after 20 rounds of this, more than 2 million people in all have been infected. Each step doubles the number of new patients, and by step 20 the new cases alone number over a million – and all that from one initial case.
A recent New York Times article (April 25) uses the example of a pond being overrun by lily pads to illustrate the exponential spread of the virus. At first there is only one lily pad but once the pond is half-covered with lily pads, the next day the entire pond is covered.
For another example, in a classic tale from medieval India, the King wants to reward the man who has just invented Chess; the man requests 1 grain of wheat on a corner square of the chess board and then twice the previous amount on each successive square until all squares are accounted for; naively, the King agrees thinking he is getting a bargain of sorts; but after a few squares, they realize that soon there would be no wheat left in the kingdom! According to some accounts, the inventor does not survive the King’s wrath; according to others he becomes a high ranking minister. BTW, in the end the total number of grains of wheat on the chessboard would come to 18,446,744,073,709,551,615 – over 18 quintillion, way more than current annual world wheat production.
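The arithmetic of the legend is easy to check; a couple of lines of Python reproduce that giant total.

    # Doubling across the 64 squares of the chessboard: 1 + 2 + 4 + ... + 2**63
    total_grains = sum(2 ** square for square in range(64))
    print(total_grains)                  # 18446744073709551615
    print(total_grains == 2 ** 64 - 1)   # True: the sum is 2**64 - 1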
Such huge numbers were not new to the Hindu sages of the middle ages; in fact it was they who invented the 10 digits {0,1,2,3,4,5,6,7,8,9} that are universally used today. Their motivation came from Hindu Vedic cosmology where very, very large numbers are required; for example, a day for Brahma, the creator, endures for about 4,320,000,000 solar years – in Roman numerals that would take over 4 million M’s. The trick in the base 10 Hindu-Arabic numerals is that the numbers represented can grow exponentially in size while the lengths of their written representations grow only one symbol at a time; thus, 10 to the power n is written down using n+1 digits: 100 is 10 squared, 1000 is 10 cubed, etc. The same kind of thing applies to the numbers in base 2 used in digital computers: 2 to the power n, 2^n, is written down using n+1 bits: 100 is 4, 1000 is 8, 10000 is 16, etc. Roman numerals (and the equivalent systems that were used throughout the medieval world) are mathematically in base 1 so there is no real compression – even with the abbreviation M for 1000, the length of the representation is still proportional to the number represented, and so 1 quintillion still needs 1 quadrillion M’s. For yet another historical note, the word algorithm is derived from the name of the medieval Persian mathematician Al-Khwarizmi, who wrote a treatise on the Hindu-Arabic numerals in the early 9th Century.
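A few more lines (again just for illustration) make the compression visible: the value of 2^n explodes exponentially while the length of its written form creeps up one symbol at a time.

    # The value 2**n grows exponentially, but its representation grows slowly:
    # n+1 bits in base 2, and roughly one new decimal digit per factor of 10.
    for n in (10, 20, 40, 80):
        value = 2 ** n
        print(n, value, len(bin(value)) - 2, len(str(value)))
    # At n = 80 the value has 25 digits; written in "base 1" Roman-numeral style,
    # even with M standing for 1000, it would need about 1.2 * 10**21 M's.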
The introduction of the Hindu-Arabic numerals is arguably the most important event in the history of computation – critical to AI where the fundamental idea is that reasoning is a form of computation, a point made already in 1655 by dystopian British philosopher Thomas Hobbes who wrote “By reasoning, I understand computation.”
Compound interest is another example of exponential growth. The principal on a 30 year balloon mortgage for $100,000 at 7.25 percent interest compounded annually would double in 10 years, double again in 20 years and come to over $800,000 at maturity! The interest feeds on itself. Interestingly, when Fibonacci popularized the magical Hindu-Arabic numerals in Europe with his Book of Calculation (Liber Abaci, 1202), he included the example of compound interest (nigh impossible using Roman numerals – in any case, charging compound interest was outlawed in the Roman Empire). Eventually, the Florentine bankers further up the River Arno from Fibonacci’s home town of Pisa took notice and the Renaissance was born – the rest is history. BTW, the storied sequence of Fibonacci numbers also grows exponentially; for more on this part of the story, click HERE. For the whole story, consult the catchily titled Finding Fibonacci: The Quest to Rediscover the Forgotten Mathematical Genius Who Changed the World by Keith Devlin.
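Here is the doubling claim spelled out (a quick Python check using the loan terms of the example above).

    # $100,000 at 7.25% compounded annually: the balance roughly doubles
    # every 10 years and passes $800,000 at year 30.
    balance = 100_000.0
    for year in range(1, 31):
        balance *= 1.0725
        if year % 10 == 0:
            print(year, round(balance))
    # roughly: year 10 -> 201,000; year 20 -> 405,000; year 30 -> 816,000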
Yet another example is due to the English country parson Thomas Malthus. In An Essay on the Principle of Population (1798), writing in opposition to the feel-good optimism of the Enlightenment, he argued that the food supply will only grow at a slow pace but that the population will increase exponentially, leading to an eventual catastrophe. (Malthus employed the terminology of infinite series, describing the growth of the food supply as arithmetic and that of the population as geometric.) However, even as the earth’s population has increased apace since WW II, the food supply has kept up – thanks on the one hand to things like the Green Revolution and on the other hand to things like animal cruelty, antibiotic-resistant bacteria, methane pollution, overzealous insecticides, the crash of the bee population, overfishing, GMO frankenfoods, the food divide, deforestation and widespread spoliation of the environment. Clearly, we have to do a better job; however, despite the fact that all this craziness is well known, protest has been ineffective; “hats off” though to the B-film industry, which spoke truth to power with the surprise hit horror movie Frankenfish (2004).
Especially appalling from an historical viewpoint are the packed feed lots where beef cattle are fed corn (which makes them sick) to fatten them for market. The glory of the Indo-European tribes was their cattle and their husbandry – a family’s worth was the size of its herd! From the Caspian Steppe, some went east to India where today the cow is iconic; others went to Europe and then as far as the North American West to find grazing land where the cattle could eat grass and ruminate. The root of the Latin word for money, pecunia, is the word for cattle: pecus; the connection persists to this day in English words like pecuniary and impecunious. So deep is the connection that feeding corn to beef cattle would surely bring tears to the pagan gods of Mt. Olympus and Valhalla.
A most famous example of exponential growth in technology is Moore’s Law: in 1975, Gordon E. Moore, a founder of Intel, observed that the number of transistors on a microchip was doubling every two years even as the cost per transistor kept falling – adding that this was likely to continue. Amazingly this prediction has held true into the 21st Century and the number of transistors on an integrated circuit has gone from a few thousand to billions. The phenomenon of Moore’s Law appears to be leveling off now but it might well take off again with new insights. Indeed, exponential bursts are part of the evolutionary process that is technology. Like compound interest, progress feeds on itself. The future is getting closer all the time – advances that once would have been the stuff of science fiction can now be expected in a decade or two. This is an important element in the march toward the Technological Singularity.
However, exponential growth of a different kind can be a stumbling block for AI and other areas of Computer Science because it leads to Combinatorial Explosion: situations where the time for an algorithm to return a solution would surpass the life-expectancy of our solar system if not of the galaxy. This can happen if the size of the “search space” grows exponentially: there will simply be too many combinations for the algorithm to account for. For example, consider the archetypical Traveling Salesman Problem (TSP): given a list of cities, the distances between them and the city to start from, find the shortest route that visits all the cities and returns to the start city. For n cities, the number of possible routes that any computer algorithm has to reckon with in order to return the optimal solution is on the order of the product of all the numbers from 1 through n, aka n factorial, a quantity that grows much faster than the 2^n of the examples above.
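A quick comparison (illustrative Python, nothing more) shows how much worse factorial growth is than plain doubling.

    import math

    # Plain exponential growth 2**n versus the factorial growth of the
    # Traveling Salesman Problem's search space.
    for n in (10, 20, 30):
        print(n, 2 ** n, math.factorial(n))
    # At n = 30, 2**n is about 10**9 while 30! is about 2.65 * 10**32 --
    # far beyond what any computer could enumerate route by route.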
One reason that we resort often to the TSP as an example is that it is representative of a large class of important combinatorial problems – a good algorithm for one can translate into a good algorithm for all of them. (These challenging problems are known in the trade as NP-Complete.) Current analysis, based on the universal role of problems like the TSP, makes a strong case that fundamentally better algorithms cannot be found for any of them – this is not unrelated to the limits on provability and computability uncovered by Gödel and Turing. Another representative of this class is the problem of provability and unprovability in propositional logic: p → q, truth-tables and all that. As a result, Mathematical Logic as such does not play the role in AI that was once predicted for it; the stunning progress in machine learning and other areas has relied on Connectionism, Bayesian Networks, emulating biological and physical processes etc. rather than on Logic itself.
One side effect of this humbling of Logic is that we are beginning to look with greater respect at models of intelligence different from our own (conscious) way of thinking. Up till now, our intelligence has been the gold standard – for philosophers the definition itself of intelligence, for theologians even the model for the mind of God.
While a direct attack on the phenomenon of Combinatorial Explosion seems unlikely to yield results, researchers and developers have turned to techniques that use randomness, Statistics and Probability to help in decision making in applications. Introducing uncertainty into algorithms might well make Al-Khwarizmi turn in his grave, but it has worked for Quantum Mechanics – the physics that brought us television and the transistor. And it has also worked in AI systems for decision making that deal with uncertainty, notably with Bayesian Networks. So perhaps randomized algorithms and even Quantum Mechanics can open the way to some further progress in AI on this front. Then too the creativity of researchers in exploiting the genius of nature knows no limits and can boggle the imagination: on the horizon are virus built lithium batteries and amoeba inspired algorithms. More to come. Affaire à suivre.

AI III: Connectionism

DARPA stands for Defense Advanced Research Projects Agency, a part of the US Department of Defense that has played a critical role in funding scientific projects since its founding (as ARPA) in 1958, among them the ARPANET which has morphed into the Internet and the World Wide Web. DARPA has also been an important source of funding for research into Artificial Intelligence (AI). Following a scheme put forth by John Launchbury, then director of the DARPA I2O (Information Innovation Office), the timeline of AI can be divided into three parts like Caesar’s Gaul. The 1st Wave of AI went from 1950 to the turn of the millennium. The 2nd Wave of AI went from 2000 to the present. During this period advances continued in fields like expert systems and Bayesian networks; search-based software for games like chess also advanced considerably. However, it is in this period that Connectionism – imitating the way neurons are connected in the brain – came into its own.
The human brain is one of nature’s grandest achievements – a massively parallel, multi-tasking computing device (albeit rather slow by electronic standards) that is the command and control center of the body. Some stats:
It contains about 100 billion nerve cells (neurons) — the “gray matter.”
It contains about 700 billion nerve fibers (1 axon per neuron and 5-7 dendrites per neuron) — the “white matter.”
The neurons are linked by 100 trillion connections (synapses) — structures that permit a neuron to pass an electrical or chemical signal to another neuron or to a target cell.
The Connectionist approach to AI employs networks implemented in software known as “neural networks” or “neural nets” to mimic the way the neurons in the brain function. Connectionism proper begins in 1943 with a paper by Warren McCulloch and Walter Pitts which provided the first mathematical model of an artificial neuron. This inspired the single layer perceptron network of Frank Rosenblatt, whose Perceptron Learning Theorem (1958) showed that machines could learn! However, this cognitive model was soon shown to be very limited in what it could do, which dulled the enthusiasm of AI funding sources – but the idea of machine learning by means of neuron-like networks was established and research went on.
So, by the 1980s, the connectionist model had been expanded to include more complex neural networks, composed of large numbers of units together with weights that measure the strength of the connections between the units – in the brain, if enough input accumulates at a neuron, it then sends a signal along the synapses extending from it. These weights model the effects of the synapses that link one neuron to another. Neural nets learn by adjusting the weights according to a feedback method which reacts to the network’s performance on training data, the more data the better – mathematically speaking this is a kind of non-linear optimization plus the calculation of gradients, statistics and more.
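To give the flavor of weight adjustment, here is a minimal sketch of the oldest scheme of this kind, a Rosenblatt-style perceptron learning the logical OR function; the data, learning rate and number of passes are made up for illustration.

    # A single artificial neuron: its weights are nudged whenever its output
    # disagrees with the desired answer.
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # logical OR
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    def output(x):
        return 1 if x[0] * weights[0] + x[1] * weights[1] + bias > 0 else 0

    for epoch in range(20):                     # a few passes over the data
        for x, target in data:
            error = target - output(x)          # -1, 0 or +1
            weights[0] += rate * error * x[0]   # strengthen or weaken the connections
            weights[1] += rate * error * x[1]
            bias += rate * error

    print([output(x) for x, _ in data])         # [0, 1, 1, 1]: the unit has learned OR

Modern networks differ in scale and in the subtlety of the feedback (gradients propagated backward through many layers), but the principle – adjust the connection strengths in response to errors on the training data – is the same.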
These net architectures have multiplied and there are now not only classical neural nets but also convolutional neural nets, recurrent neural nets, neural Turing Machines, etc. Along with that, there are multiple new machine learning methods such as deep learning, reinforcement learning, competitive learning, etc. These methods are constantly improving and constitute true engineering achievements. Accordingly, there has been progress in the handling of core applications like text comprehension and translation, vision, sensor technology, voice recognition, face recognition, etc.
Popular apps such as eHarmony, Tinder, ancestry.com and 23AndMe all use AI and machine learning in their mix of algorithms. These algorithms are purported to have learned what makes for a happy marriage and how Italian you really can claim to be.
IBM’s Watson proved it had machine-learned just about everything with its victory on Jeopardy; its engine is now being deployed in areas such as cancer detection, finance and eCommerce.
In 2014, Google purchased DeepMind, a British AI company founded in 2010, and soon basked in the success of DeepMind’s Go playing software. First there was AlphaGo which stunned the world by beating champion player Lee Se-dol in a five game match in March, 2016 – something that was thought to be yet years away as the number of possible positions in Go dwarfs that of Chess. But things didn’t stop there: a successor version of AlphaGo defeated world champion Ke Jie in 2017, and AlphaGo has since been followed by AlphaGo Zero and then AlphaZero. In fact, AlphaZero can learn how to play multiple games such as Chess and Shogi (Japanese Chess) as well as Go; what is more, AlphaZero does not learn by playing against human beings or other systems: it learns by playing against itself – playing against humans would just be a waste of precious time!
Applying machine learning to create a computer that can win at Go is a milestone. But applying machine learning so that a robot can enjoy “on the job training” is having more of an impact on the world of work. For example, per a recent NY Times article, an AI trained robot has been deployed in Europe to sort articles for packing and shipping for eCommerce. The robot is trained using reinforcement learning, an engineering extension of the mathematical optimization technique of dynamic programming (the one used by GPS systems to find the best route). This is another example where the system learns pretty much on its own; it is also an example of serious job-killing technology – one of the unsettling things about AI’s potential to force changes in society even beyond the typical “creative destruction” of free-market capitalism.
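To see in miniature what such “on the job” training amounts to, here is a hedged sketch of a reinforcement-learning update (Q-learning) on a made-up five-cell corridor; the world, the rewards and the parameters are all invented for illustration, and real systems are vastly larger.

    import random

    # The agent starts in cell 0 and is rewarded only on reaching cell 4.
    N_STATES, ACTIONS = 5, (-1, +1)              # move left or move right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration

    def greedy(s):
        best = max(Q[(s, a)] for a in ACTIONS)   # best known action, ties broken at random
        return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
            s_next = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s_next == N_STATES - 1 else 0.0
            target = reward + gamma * max(Q[(s_next, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])    # nudge the estimate toward the target
            s = s_next

    print([greedy(s) for s in range(N_STATES - 1)])      # learned policy: move right everywhere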
Another way AI is having an impact on society is through surveillance technology: from NSA eavesdropping to hovering surveillance drones to citywide face recognition cameras. London, once the capital of civil liberties and individual freedom, has become the surveillance capital of the world – but (breaking news) Shanghai has already overtaken London in this dystopian competition. What is more we are now subjecting our own selves to constant monitoring: our movements traced by our cellphones, our keystrokes logged by social media.
In the process, the surveillance state has created its own surveillance capitalism: our personal behavioral data are amassed by AI enhanced software – Fitbit, Alexa, Siri, Google, FaceBook, … ; the data are analyzed and sold for targeted advertising and other feeds to guide us in our lives; an example: as one googles work on machine intelligence, Amazon drops ads for books on the topic (e.g. The Sentient Machine) onto one’s FaceBook page. This is only going to get worse as the internet of things puts sensors and listening devices throughout the home and machines start to shepherd us through our day – a GPS for everything, adieu free will! For the in-depth story of this latest chapter in the history of capitalism, consult Shoshana Zuboff’s The Age of Surveillance Capitalism (2019).
Word to the wise: machine intelligence is one thing but do avoid googling eHarmony or Tinder – the surveillance capitalists do not know that’s part of your innocent research endeavors.
Moreover, there is the emerging field of telehealth: the provision of healthcare remotely by means of telecommunications technology. In addition to office visits via Zoom or Skype or WhatsApp, there are wearable devices that monitor one’s heart’s functions and report via the internet to an algorithm that checks for abnormalities etc. Such devices are typically worn for a week or so and then have to be carefully returned. Recently Apple and Stanford Medical have produced an app where an Apple Watch checks constantly for cardiac issues and, if something is detected, it prompts a call to the wearer’s iPhone from a telehealth doctor. Indeed, in the future we will be permanently connected to the internet for monitoring – the surveillance state on steroids.
In fact, all this information about us lives a life parallel to our own out in the cloud – it has become our avatar, and for many purposes it is more important than we are.
The English philosopher Jeremy Bentham is known for his Utilitarian principle: “it is the greatest happiness of the greatest number of people that is the measure of right and wrong.” From the 1780s on, Bentham also promoted the idea of the panopticon, a prison structured so that the inmates would be under constant surveillance by unseen guards – click  HERE and scroll down for a visualization. To update a metaphor from French post-modernist philosopher Michel Foucault, with surveillance technology we have created our own panopticon – one in which we dwell quietly and willingly as our every keystroke, every move is observed.
Some see an upside to all this connectivity: back in 2004, Google’s young founders told Playboy magazine that one day we would have direct access to the Internet through brain implants, with “the entirety of the world’s information as just one of our thoughts.” This hasn’t happened quite yet but one wouldn’t want to bet against Page and Brin. Indeed, we are now entering the 3rd Wave of AI which the DARPA schedule has lasting until 2030 – the waves get shorter as progress builds on itself. So what can be expected in the next decade, in this 3rd Wave? And then what? More to come. Affaire à suivre.

Pandas and Pandemics

After the Chinese invasion and takeover of Tibet in the 1950s, China became a practitioner of panda diplomacy where it would send those cuddly bears to zoos around the world to improve relations with various countries. But back then, China was still something of a sleepy economic backwater. Napoleon once opined “Let China sleep, for when she wakes she will shake the world” – an admonition that has become a prediction. In recent times, China has emerged as the most dynamic country on the planet: Shanghai has long replaced New York and Chicago as the place for daring sky-scraper architecture, the Chinese economy is the second largest in the world, the Silk Road project extends from Beijing across Asia and into Europe itself. Indeed, considerable Silk Road investment in Northern Italy in industries like leather goods and fashion has brought tens of thousands of Chinese workers (many illegally) to the Bel Paese to make luxury goods with the label “Made In Italy”; this has led to scheduled direct flights from Milan Bergamo Airport (BGY) to Wuhan Airport (WUH) – often cited as a factor in the severity of the coronavirus outbreak in Northern Italy.
Indeed, China is the center of a new kind of capitalist system, state controlled capitalism where the government is the principal actor. But the government of China is run by the Chinese Communist Party, the organization founded by Mao Zedong and others in the 1920s to combat the gangster capitalism of the era – for deep background, there is the campy movie classic Shanghai Express (Marlene Dietrich: “It took more than one man to change my name to Shanghai Lily”). So has the party of the people lost its bearings or is something else going on? Mystère.
The Maoist era in China saw the economy mismanaged, saw educated cadres and scientists exiled to rural areas to learn the joys of farming, saw the Great Leap Forward lead to the deaths of millions. Following Mao’s own death in 1976, Deng Xiaoping emerged as “paramount leader” of the Communist Party and began the dramatic transformation of the country into the economic behemoth it is today.
Deng’s modernization program took on new urgency in the early 1990s when the fall of the Soviet Union made Western capitalism a yet more formidable opponent. However, the idea that capitalist economic practices were going to prove necessary on the road to Communism was not new, far from it. Marx himself wrote that further scientific and industrial progress under capitalism was going to be necessary to have the tools in place for the transition to communism. Then too there was the example of Lenin who thought the Russia inherited from the Czars was too backward for socialism; in 1918 he wrote
    “Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.”
Accordingly Lenin resorted to market based incentives with the New Economic Policy in the 1920s in the USSR. So, there is nothing new here: in China, Communism is alive and well – just taking a deep breath, getting its bearings.
Normally we associate capitalism with agile democracies like the US and the UK rather than autocratic monoliths like China. But capitalism has worked its wonders before in autocratic societies: prior to the World Wars of the 20th century, there was a thriving capitalist system in Europe in the imperial countries of Germany and Austria-Hungary which created the society that brought us the internal combustion engine and the Diesel engine, Quantum Mechanics and the Theory of Relativity, Wagner and Mahler, Freud and Nietzsche. All of which bodes well for the new China – to some libertarian thinkers, democracy just inhibits capitalism.
The US formally recognized the People’s Republic of China in 1979, a few years after Nixon’s legendary visit and the gift of pandas Ling-Ling and Hsing-Hsing to the Washington DC zoo. Deng’s policies were bolstered too by events in the capitalist world itself. There was the return of Japan and Germany to dominant positions in high-end manufacturing by the 1970s: machine tools, automobiles, microwave ovens, cameras, and so on – with Korea, Taiwan and Singapore following close behind. Concomitantly, the UK and the US turned from industrial capitalism to financial capitalism in the era of Margaret Thatcher and Ronald Reagan. Industrial production was de-emphasized as more money could be made in financing things than in making them. This created a vacuum and China was poised to fill the void – rural populations were uprooted to work in manufacturing plants in often brutal conditions, ironically creating in China itself the kind of social upheaval and exploitation of labor that Marx and Engels denounced in the 19th century. But the resulting boom in the Chinese economy led to membership in the World Trade Organization in 2001!
The displaced rural populations were crammed into ever more crowded cities. This exacerbated the serious problems China has long had with transmission of animal viruses to humans – Asian Flu, Hong Kong Flu, Bird Flu, SARS and now COVID-19. The fact that China is no longer isolated as it once was but a huge exporter and importer of goods and services from all over the world has made these virus transmissions a frightening global menace.
The corona pandemic is raging as the world finds itself in a situation eerily like that of August 1914 in Europe: two powerful opposing capitalist systems – one led by democracies, the other by an autocratic central government. The idea of a full-scale war between nuclear powers is unthinkable. Instead, there is hope that this latest virus crisis will be for China and the West what William James called “the moral equivalent of war,” leading to joint mobilization and cooperation to make the world a safer place; hopefully, in the process, military posturing will be transformed into healthy economic competition so that the interests of humanity as a whole are served in a kinder gentler world. Hope springs eternal, perhaps naively – but consider the alternative.

AI II: First Wave — 1950-2000

Alan Turing was a Computer Science pioneer whose brilliant and tragic life has been the subject of books, plays and films – most recently The Imitation Game with Benedict Cumberbatch. Turing and others were excited by the possibility of Artificial Intelligence (AI) from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test where, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” – whence the movie title. (Futurologist Ray Kurzweil has predicted that a machine will pass the Turing Test by the year 2029.)
Historically, the practice of war has hewn closely to developments in technology. And warfare, in turn, has made demands on technology. Indeed, even men of genius like Archimedes and Leonardo da Vinci developed weapons systems. However, the relationship between matters military and matters technological became almost symbiotic with WWII. Technological feats such as radar, nuclear power, rockets, missiles, jet planes and the digital computer are all associated with the war efforts of the different powers of that conflict. Certainly, the fundamental research behind these achievements was well underway by the 1930s but the war determined which areas of technology should be prioritized, thereby creating special concentrations of brilliant scientific talent. The Manhattan Project itself is studied as a model of large scale R&D; furthermore, the industrial organization of the war period and military operations such as countering submarine warfare gave rise to a new mathematical discipline, aptly called Operations Research, which is now taught in Business Schools under the name Management Science.
In his masterful treatise War in the Age of Intelligent Machines (1991), Manuel DeLanda summarizes it thusly: “The war … forged new bonds between the military and scientific communities. Never before had science been applied at so grand a scale to such a variety of warfare problems.”
However, now the reliance on military funding might be skewing technological progress, leading it in less fruitful directions than capitalism or science-as-usual would take it. Perhaps this is why, instead of addressing the environmental crisis, post WWII technological progress has perfected drones and fueled the growth of organizations such as the NSA and its surveillance prowess.
All that said, since WWII the US Military has been a very strong supporter of research into AI; in particular funding has come from the Defense Advanced Research Projects Agency (DARPA). It is worth noting that one of their other projects was the ARPANET which was once the sole domain of the military and research universities; this became the internet of today when liberated for general use and eCommerce by the High Performance Computing and Communications Act (“Gore Act”) of 1991.
The field of AI was only founded formally in 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term Artificial Intelligence itself was coined.
Following a scheme put forth by John Launchbury of DARPA, the timeline of AI can be broken into three parts. The 1st Wave (1950-2000) saw the development of four fundamental approaches to AI – one based on powerful Search Algorithms, one on Mathematical Logic, one on algorithms drawn from the natural world and one on Connectionism, imitating the structure of neurons in the human brain. Connectionism develops slowly in the First Wave but explodes in the 2nd Wave (2000-2020). We are now entering the 3rd Wave.
Claude Shannon, a scientist at the legendary Bell Labs, was a participant at the Dartmouth conference. His earlier work on implementing Boolean Logic with electromagnetic switches is the basis of computer circuit design – this was done in his Master’s Thesis at MIT making it probably the most important Master’s Thesis ever written. In 1950, Shannon published a beautiful paper Programming a Computer for Playing Chess, which laid the groundwork for games playing algorithms based on searching ahead and evaluating the quality of possible moves.
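Shannon’s recipe – look ahead a few moves, score the resulting positions with an evaluation function, and back the scores up assuming both sides play their best – is the minimax procedure. Chess is far too big to show here, so the sketch below (illustrative Python) applies the idea to a toy game it can search to the end: Nim with one pile, where players alternately remove 1 or 2 counters and whoever takes the last one wins.

    # For chess one stops at a fixed depth and scores positions with an evaluation
    # function; this toy game is small enough to search all the way down.
    def minimax(pile, maximizing):
        if pile == 0:                        # the previous player took the last counter
            return -1 if maximizing else +1  # ...so the player now to move has lost
        scores = [minimax(pile - take, not maximizing)
                  for take in (1, 2) if take <= pile]
        return max(scores) if maximizing else min(scores)

    def best_move(pile):
        return max((take for take in (1, 2) if take <= pile),
                   key=lambda take: minimax(pile - take, maximizing=False))

    print(best_move(7))   # 1 -- leaving a multiple of 3 is the winning strategy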
Fast Forward: Shannon’s approach led to the triumph in 1997 of IBM’s Deep Blue computer which defeated reigning chess champion Garry Kasparov in a match. And things have accelerated since – one can now run even more powerful codes on a laptop.
Known as the “first AI program”, Logic Theorist was developed in 1956 by Allen Newell, Herbert A. Simon and Cliff Shaw – Simon and Newell were also at the Dartmouth Conference (Shaw wasn’t). The system was able to prove 38 of the first 52 theorems from Russell and Whitehead’s Principia Mathematica and in some cases to find more elegant proofs! Logic Theorist established that digital computers could do more than crunch numbers, that programs could deal with symbols and reasoning.
With characteristic boldness, Simon (who was also a Nobel prize winner in Economics) wrote
     [We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.
Again with his characteristic boldness, Simon predicted in 1957 that computer chess programs would outperform humans within “ten years” but that was wrong by some thirty years! In fact, “over-promising” has plagued AI over the years – but presumably all that is behind us now.
AI has also at times proved too attractive for its own good to researchers and companies. For example, at Xerox PARC in the 1970s, the computer mouse, the Ethernet and WYSIWYG editors (What You See Is What You Get) were invented. However, rather than commercializing these advances for a large market as Apple would do with the Macintosh, Xerox produced the Dandelion – a $50,000 workstation designed for work on AI by elite programmers.
The Liar’s Paradox (“This statement is false”) was magically transformed into the Incompleteness Theorem by Kurt Gödel in 1931 by exploiting self-reference in systems of mathematical axioms. With Turing Machines, an algorithm can be the input to an algorithm (even to itself). And indeed, the power of self-reference gives rise to variants of the Liar’s Paradox that become theorems about Turing machines and algorithms. Thus, in general, the only way to tell whether and when a program will stop comes down to running the program; and, be warned, it might run forever and there is no general method that can tell you that in advance.
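The argument can even be written out as a few lines of deliberately self-defeating Python; the function halts below is hypothetical, and the whole point is that no correct version of it can ever be written.

    # Suppose, for contradiction, that someone handed us a correct, always-terminating
    # function halts(program, argument) reporting whether program(argument) ever stops.
    def halts(program, argument):
        raise NotImplementedError("no correct, total version of this can exist")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about program(program).
        if halts(program, program):
            while True:          # loop forever if the oracle says "it halts"
                pass
        return                   # halt at once if the oracle says "it loops"

    # Now consider diagonal(diagonal): it halts exactly when halts says it doesn't --
    # a contradiction, so the assumed function halts cannot exist.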
In a similar vein, it turns out that the approach through Logic soon ran into the formidable barrier called Combinatorial Explosion, where every known algorithm takes far too long to reach a conclusion on a large family of mathematical problems – for example, there is the Traveling Salesman Problem:
     Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.
This math problem is not only important to salesmen but is also important for the design of circuit boards, for DNA sequencing, etc. Again the impasse created by Combinatorial Explosion is not unrelated to the issues of limitation in Mathematics and Computer Science uncovered by Gödel and Turing.
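For a sense of scale, a brute-force solver takes only a few lines of Python (the 5-city instance below is made up); the trouble is simply that the number of tours to check grows factorially.

    from itertools import permutations
    import random

    # Brute force on a tiny random instance.  Enumerating every tour is fine for
    # 5 cities (4! = 24 routes), but 20 cities already means 19!, over 10**17 routes.
    random.seed(1)
    n = 5
    dist = [[random.randint(10, 99) for _ in range(n)] for _ in range(n)]

    def tour_length(order):
        route = (0,) + order + (0,)          # start and end at city 0
        return sum(dist[a][b] for a, b in zip(route, route[1:]))

    best = min(permutations(range(1, n)), key=tour_length)
    print(best, tour_length(best))           # the optimal tour and its length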
Nature encounters difficult mathematical problems all the time and responds with devilishly clever algorithms of its own making. For example, protein folding is a natural process that solves an optimization problem that is as challenging as the Traveling Salesman Problem; the algorithm doesn’t guarantee the best possible solution but always yields a very good solution. The process of evolution itself uses randomness, gene crossover and fitness criteria to drive natural selection; genetic algorithms, introduced by John Holland in the 1960s, adopt these ideas to develop codes for all sorts of practical challenges – e.g. scheduling elevator cars. Then there is the technique of simulated annealing – the name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. This technique has been applied to myriad optimization problems including the Traveling Salesman Problem. A common feature of these and many other AI algorithms is the resort to randomness; this is special to the Computer Age – mathematicians working laboriously by hand just are not physically able to weave randomness into an algorithmic process.
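Here is a hedged sketch of one of these nature-inspired methods, simulated annealing applied to the Traveling Salesman Problem; the 20 random cities, the cooling schedule and the other parameters are all made up for illustration. Start from a random tour, repeatedly propose a small change, and accept even a worse tour with a probability that shrinks as the “temperature” cools.

    import math, random

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(20)]

    def length(tour):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    tour = list(range(len(cities)))
    random.shuffle(tour)
    temperature = 1.0
    while temperature > 1e-3:
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
        delta = length(candidate) - length(tour)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            tour = candidate            # accept -- sometimes even a worse tour
        temperature *= 0.999            # cool down slowly

    print(round(length(tour), 3))       # a good (not necessarily optimal) tour length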
Expert Systems are an important technology of the 1st Wave; they are based on the simplified logic of if-then-rules:
    If it’s Tuesday, this must be Belgium.
As the rules are “fired” (applied), a database of information called a “knowledge base” is updated, making it possible to fire more rules. Major steps in this area include the DENDRAL and MYCIN expert systems developed at Stanford University in the 1960s and 1970s.
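In miniature, the mechanism looks something like this (an illustrative Python sketch with invented rules): keep sweeping over the rules, and whenever a rule’s conditions are already in the knowledge base, add its conclusion.

    # Forward chaining: firing rules adds facts, which lets further rules fire.
    rules = [
        ({"it is Tuesday"}, "this must be Belgium"),
        ({"this must be Belgium"}, "order moules-frites"),
        ({"fever", "infection"}, "prescribe antibiotics"),
    ]
    knowledge_base = {"it is Tuesday"}

    fired = True
    while fired:                         # keep sweeping until nothing new fires
        fired = False
        for conditions, conclusion in rules:
            if conditions <= knowledge_base and conclusion not in knowledge_base:
                knowledge_base.add(conclusion)
                fired = True

    print(sorted(knowledge_base))
    # ['it is Tuesday', 'order moules-frites', 'this must be Belgium']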
A problem for MYCIN, which assisted doctors in identifying the bacteria causing infections, was that it had to deal with uncertainty and work with chains of propositions such as:
“Presence of A implies Condition B with 50% certainty”
“Condition B implies Condition C with 50% certainty”
One is tempted to say that presence of A implies C with 25% certainty, but (1) that is not mathematically correct in general and (2) if applied to a few more rules in the chain, that 25% will soon be down to an unworkable 1.5%.
Still, MYCIN was right about 65% of the time, meaning it performed as well as the expert MDs of the time. Another problem came up, though, when a system derived from MYCIN was being deployed in the 1970s: back then MDs did not type! Still, this area of research led to the development of Knowledge Engineering Environments which built rules derived from the knowledge of experts in different fields – here one problem was that the experts (stock brokers, for example) often did not have enough codifiable expertise to make the enterprise worthwhile, although they could type!
For all that, Rule Based Systems are widespread today. For example, IBM has a software product marketed as a “Business Rules Management System.” A sample application of this software is that it enables an eCommerce firm to update features of the customer interaction with its web page – such as changing the way to compute the discount on a product – on the fly without degrading performance and without calling IBM or having to recompile the system.
To better deal with reasoning and uncertainty, Bayesian Networks were introduced by UCLA Professor Judea Pearl in 1985 to address the problem of updating probabilities when new information becomes available. The term Bayesian comes from a theorem of the 18th century Presbyterian minister Thomas Bayes on what is called “conditional probability” – here is an example of how Bayes’ Theorem works:
    In a footrace, Jim has beaten Bob only 25% of the time but of the 4 days they’ve done this, it was raining twice and Jim was victorious on one of those days. They are racing again tomorrow. What is the likelihood that Jim will win? Oh, one more thing, the forecast is that it will certainly be raining tomorrow.
At first, one would say 25% but given the new information that rain is forecast, a Bayesian Network would update the probability to 50%.
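The update is just Bayes’ Theorem at work – P(win given rain) = P(rain given win) × P(win) / P(rain) – with the numbers read off the story above; a few lines of Python spell it out.

    # Bayes' Theorem applied to the footrace example.
    p_win = 1 / 4               # Jim has won 1 of the 4 races
    p_rain = 2 / 4              # it rained on 2 of the 4 days
    p_rain_given_win = 1.0      # the one race Jim won took place on a rainy day
    p_win_given_rain = p_rain_given_win * p_win / p_rain
    print(p_win_given_rain)     # 0.5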
Reasoning under uncertainty is a real challenge. A Nobel Prize in economics was awarded to Daniel Kahneman in 2002 based on his work with the late Amos Tversky on just how ill-equipped humans are to deal with it. (For more on their work, there is Michael Lewis’ best-selling book The Undoing Project.) As with MYCIN where the human experts themselves were only right 65% of the time, the work of Kahneman and Tversky illustrates that medical people can have a lot of trouble sorting through the likely and unlikely causes of a patient’s condition – these mental gymnastics are just very challenging for humans and we have to hope that AI can come to the rescue.
Bayesian Networks are impressive constructions and play an important role in multiple AI techniques including Machine Learning. Indeed Machine Learning has become an ever more impressive technology and underlies many of the success stories of Connectionism and the 2nd Wave of AI. More to come.

AI I: Pre-History — 500 BC to 1950 AD

Artificial Intelligence (AI) is the technology that is critical to getting humanity to the Promised Land of the Singularity, where machines will be as intelligent as human beings.
The roots of modern AI can be traced to attempts by classical philosophers to describe human thinking in systematic terms.
Aristotle’s syllogism exploited the idea that there is structure to logical reasoning: “All men are mortal; Socrates is a man; therefore Socrates is mortal.”
The ancient world was puzzled by the way syntax could create semantic confusion; for example, The Liar’s Paradox: “This statement is false” is false if it is true and true if it is false.
The greatest single advance in computational science took place in northern India in the 6th or 7th century AD – the invention of the Hindu-Arabic numerals. The Hindu sages encountered the need for truly large numbers in Hindu Vedic cosmology – e.g. a day for Brahma, the creator, endures for about 4,320,000,000 solar years (in Roman numerals that would take over 4 million M’s). This advance made its way west across the Islamic world to North Africa. The father of the teenage Leonardo of Pisa (aka Fibonacci) was posted to Bejaia (in modern Algeria) as commercial ambassador of the Republic of Pisa. Fibonacci brought this number system back to Europe and in 1202 published his Book of Calculation (Liber Abaci) which introduced Europe to its marvels. With these numerals, all the computation was done by manipulating the symbols themselves – no need for an external device like an abacus or the calculi (pebbles) of the ancient Romans. What is more, with these 10 magic digits, as Fibonacci demonstrates in his book, one could compute compound interest and the Florentine bankers further up the River Arno from Pisa soon took notice.
Also, in the classical world, there was what became the best selling textbook of all time: Euclid’s Elements where all plane geometry flows logically from axioms and postulates.
In the Middle Ages in Western Europe, Aristotle’s Logic was studied intently as some theologians debated the number of angels that could fit on the head of a pin while others formulated proofs of the existence of God, notably St. Anselm and St. Thomas Aquinas. A paradoxical proof of God’s existence was given by Jean Buridan: consider the following pair of sentences:
    God exists. Neither of the sentences in this pair is true.
Since the second one cannot be true, the existence of God follows. Buridan was a true polymath, one making significant contributions to multiple fields in the arts and sciences – a glamorous and mysterious figure in Paris life, though an ordained priest. His work on Physics was the first serious break with Aristotle’s cosmology; he introduced the concept of “inertia” and influenced Copernicus and Galileo. The leading Paris philosopher of the 14th Century, he is known for his work on the doctrine of Free Will, a cornerstone of Christianity. However, the name “Buridan” itself is actually better known for “Buridan’s Ass,” the donkey who could never choose which of two equally tempting piles of hay to eat from and died of starvation as a result – apparently a specious attribution contrived by his opponents, as this tale does not appear in any of Buridan’s writings; it seems intended to mock Buridan’s work on free will: Buridan taught that simply realizing which choice was evil and which was moral was not enough and an actual decision still required an act of will.
Doubtless equally unfounded is the tale that Buridan was stuffed in a sack and drowned in the Seine by order of King Louis X because of his affair with the Queen, Marguerite of Burgundy – although this story was immortalized by the immortal poet François Villon in his Ballade des Dames du Temps Jadis, the poem in which the refrain is “Where are the snows of yester-year” (Mais où sont les neiges d’antan); in the poem Villon compares the story of Marguerite and Buridan to that of Héloïse and Abélard!
In the late middle ages, Ramon Llull, the Catalan polymath (father of Catalan literature, mathematician, artist whose constructions inspired work of superstar architect Daniel Libeskind), published his Ars Magna (1305) which described a mechanical method to help in arguments, especially in ones to win Muslims over to Christianity.
François Viète (aka Vieta in Latin) was another real polymath (lawyer, mathematician, Huguenot, privy councilor to kings). At the end of the 16th Century, he revolutionized Algebra, replacing the awkward Arab system with a purely symbolic one; Viète was the first to say “Let x be the unknown,” and he made Algebra a game of manipulating symbols. Before that, in working out an Algebra problem, one actually thought of “10 squared” as a 10-by-10 square and “10 cubed” as a 10-by-10-by-10 cube.
Llull’s work is referenced by Gottfried Leibniz, the German polymath (great mathematician, philosopher, diplomat) who in the 1670s proposed a calculus for philosophical reasoning based on his idea of a Characteristica Universalis, a perfect language which would provide for a direct representation of ideas.
Leibniz also references Thomas Hobbes, the English polymath (philosopher, mathematician, very theoretical physicist). In 1655, Hobbes wrote : “By reasoning, I understand computation.” This assertion of Hobbes is the cornerstone of AI today; cast in modern terms: intelligence is an algorithm.
Blaise Pascal, the French polymath (mathematics, philosophy, theology) devised a mechanical calculation engine in 1645; in the 1800s, Charles Babbage and Ada Lovelace worked on a more ambitious project, the Analytical Engine, a proposed general computing machine.
Also in the early 1800s, there was the extraordinarily original work of Évariste Galois. He boldly applied one field of Mathematics to another, the Theory of Groups to the Theory of Equations. Of greatest interest here is that he showed that there were problems for which no appropriate algorithm existed. With his techniques, one can show, for example, that there is no general method to trisect an angle using a ruler and compass – Euclid’s Elements presents an algorithm of this type for bisecting an angle. Tragically, Galois was embroiled in the violent politics surrounding the overthrow of Charles X in 1830 and was killed in a duel at the age of twenty in 1832. He is considered to be the inspiration for the young hero of Stendhal’s novel Lucien Leuwen.
Later in the 19th Century, we have George Boole, whose calculus of Propositional Logic is the basis on which computer chips are built, and Gottlob Frege, who dramatically extended Boole’s Logic to First Order Logic, which allowed for the development of systems such as Alfred North Whitehead and Bertrand Russell’s Principia Mathematica and other Set Theories; these systems provide a framework for axiomatic mathematics. Russell was particularly excited about Frege’s new logic, which is a notable advance over Aristotle: while Aristotle could prove that Socrates was mortal, the syllogism cannot deal with binary relations as in
    “All lions are animals; therefore the tail of a lion is a tail of an animal.”
Aristotle’s syllogistic Logic is still de rigueur, though, in the Vatican where some tribunals yet require that arguments be presented in syllogistic format!
Things took another leap forward with Kurt Gödel’s landmark On Formally Undecidable Propositions of Principia Mathematica and Related Systems, published in 1931 (in German). In this paper, Gödel builds a programming language and gives a data structuring course where everything is coded as a number (formulas, the axioms of number theory, proofs from the axioms, properties like “provable formula”, …). Armed with the power of recursive self-reference, Gödel ingeniously constructed a statement about numbers that asserts its own unprovability. Paradox enters the picture in that “This statement is not provable” is akin to “This sentence is false” as in the Liar’s Paradox. All this self-reference is possible because with Gödel’s encoding scheme everything is a number – formulas, proofs, etc. all live in the same universe, so to speak. First Order Logic and systems like Principia Mathematica make it possible to apply Mathematics to Mathematics itself (aka Metamathematics) which can turn a paradox into a theorem.
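A toy version of the coding trick (not Gödel’s exact scheme, just the idea) can be played with in a few lines of Python: give each symbol a positive number and pack a whole sequence into a single number via the exponents of successive primes.

    def primes(count):
        """First `count` primes, found by trial division (fine for tiny examples)."""
        found, candidate = [], 2
        while len(found) < count:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def encode(codes):
        """[c1, c2, c3, ...]  ->  2**c1 * 3**c2 * 5**c3 * ..."""
        number = 1
        for p, c in zip(primes(len(codes)), codes):
            number *= p ** c
        return number

    def decode(number):
        """Read the sequence back off the prime factorization."""
        codes, d = [], 2
        while number > 1:
            exponent = 0
            while number % d == 0:
                number //= d
                exponent += 1
            if exponent:
                codes.append(exponent)
            d += 1
        return codes

    print(encode([3, 1, 4]))            # 2**3 * 3**1 * 5**4 = 15000
    print(decode(encode([3, 1, 4])))    # [3, 1, 4]

Statements about sequences of symbols (formulas, proofs) then become statements about the divisibility properties of numbers, which is what lets arithmetic talk about itself.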
For a theorem of ordinary mathematics that is not provable from the axioms of number theory but is not self-referential, Google “The Kanamori-McAloon Theorem.”
The Incompleteness Theorem rattled the foundations of mathematics but also sowed the seeds of the computer software revolution. Gödel’s paper was soon followed by several different formulations of a mathematical model for computability, a mathematical definition of the concept of algorithm: the Herbrand-Gödel Recursive Functions, Church’s lambda-calculus, Kleene’s μ-recursive functions – all were quickly shown to be equivalent, in that any algorithm in one model has an equivalent algorithm in each of the others. That these models capture the notion of algorithm in its entirety is known as Church’s Thesis or the Church-Turing Thesis.
In 1936, Alan Turing published the paper “On Computable Numbers, with an Application to the Entscheidungsproblem” – German was still the principal language for science! Here Turing presented a new mathematical model for computation – the “automatic machine” of the paper, the “Turing Machine” of today. Turing proved that this much simpler model of computability, couched in terms of a device manipulating 0s and 1s, is equivalent to the other schemes. Furthermore, Turing demonstrated the existence of a Universal Turing Machine which can emulate the operation of any Turing machine M: given the description of M in 0s and 1s along with the intended input, the Universal Turing Machine deciphers the description of M and performs the same operations on the input as M would, yielding the same output. This is the inspiration for stored programming. In the early days of computing machinery, one had to rewire the machine, change external tapes, swap plugboards or reset switches whenever the problem changed; with Turing’s setup, you just load the algorithm along with the data into the memory of the machine – the algorithm and the data live in the same universe. In 1945, John von Neumann joined the team under engineers John Mauchly and J. Presper Eckert and wrote up a report on the design of a new digital computer, “First Draft of a Report on the EDVAC.” Stored programming was the crucial innovation of the von Neumann Architecture. Because of the equivalence of the mathematical models of computability and Church’s Thesis, von Neumann also knew that this architecture captured all possible algorithms that could be programmed by machine – subject only to limitations of speed and size of memory. (In the future, though, quantum computing could challenge Church’s Thesis.)
With today’s computers, it is the operating system (Windows, Unix, Android, macOS, …) that plays the role of the Universal Turing Machine. Interestingly, the inspiration for the Turing Machine was not a mechanical computing engine but rather the way a pupil in an English school of Turing’s time used a European style notebook with graph paper pages to do math homework.
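To make the stored-program idea concrete, here is a minimal sketch in Python (the toy machine is invented for illustration): a single, general simulator takes a machine description as data, together with the input, much as the Universal Turing Machine takes the description of M along with M’s intended input.
    # One general-purpose simulator: the "machine" is just data passed in alongside the input.
    def run_turing_machine(transitions, tape, state='start', blank='0', max_steps=10_000):
        # transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
        # where move is -1 (left), +1 (right) or 0 (stay); the state 'halt' stops the run.
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == 'halt':
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += move
        return ''.join(cells[i] for i in sorted(cells))
    # An invented example machine: scan right past the 1s, then append one more 1.
    APPEND_ONE = {
        ('start', '1'): ('start', '1', +1),
        ('start', '0'): ('halt', '1', 0),
    }
    print(run_turing_machine(APPEND_ONE, '111'))   # prints '1111'
Nothing in the simulator changes when the machine changes – only the data fed into it, which is the essence of stored programming.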
BTW, Gödel and Turing have both made it into motion pictures. Gödel is played by Lou Jacobi in the rom-com I.Q. and Turing is played by Benedict Cumberbatch in The Imitation Game, a movie of a more serious kind.
Computer pioneers were excited by the possibility of Artificial Intelligence from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test where, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” (whence the movie title). Futurologist Ray Kurzweil (now at Google) has predicted that a machine will pass the Turing Test by the year 2029. But gurus have been wrong in the past. Nobel Prize winning economist and AI pioneer, Herbert Simon, boldly predicted in 1957 that computer chess programs would outperform humans within “ten years” but that was wrong by some thirty years!
In his 1951 talk at the University of Manchester entitled Intelligent Machinery: A Heretical Theory, Turing spoke of machines that will eventually surpass human intelligence: “once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” From the beginning, the Singularity was viewed with a mixture of wonder and dread.
But the field of AI wasn’t formally founded until 1956; it was at a summer research conference at Dartmouth College, in Hanover, New Hampshire, that the term Artificial Intelligence was coined. Principal participants at the conference included Herbert Simon as well as fellow scientific luminaries Claude Shannon, John McCarthy and Marvin Minsky.
Today, Artificial Intelligence brings progress and near-miracles to humankind on the one hand and poses an existential threat to humanity on the other. Investment in AI research is significant and proceeding apace in industry and at universities; the latest White House budget includes $1.1B for AI research (NY Times, Feb. 17, 2020), reflecting in part the interest of the military in all this.
The principal military funding agency for AI has been the Defense Advanced Research Projects Agency (DARPA). According to a schema devised by DARPA people, AI has already gone through two phases and is now at the beginning of the 3rd Phase. The Singularity is expected in a 4th Phase which will begin around 2030, according to those who know.
More to come.

Toward the Singularity

Futurology is the art of predicting the technology of the future.
N.B. We say “futurology” because the term “futurism” denotes the Italian aesthetic movement “Il Futurismo”: it began with manifestos – the Manifesto del Futurismo (1909), which glorified the technology of the automobile and its speed and power, followed by two manifestos on technology and music, Musica Futurista (1912) and L’arte dei Rumori (1913). The movement’s architectural aesthetic can be appreciated at Rockefeller Center; its members also included celebrated artists like Umberto Boccioni, whose paintings are part of the permanent collection of MoMA in New York.
We live in an age of accelerating technological forward motion and this juggernaut is hailed as “bearer of the future.” However, deep down, genuine distrust of science and “progress” has always been there. Going back in history, profound discomfort with technology is expressed in the Greek and Roman myths. The Titan Prometheus brings fire to mankind but he is condemned by the gods to spend eternity with an eagle picking at his liver. Vulcan, the god of fire, is a master craftsman who manufactures marvels at his forge under the Mt. Etna volcano in Sicily. But Vulcan is a figure of scorn: he is homely with a permanent limp for which he is mocked by the other gods; though married to Venus, he is outrageously cuckolded by his own brother Mars; for Botticelli’s interpretation of Olympian adultery, click HERE .
More recently, there is the myth of Frankenstein and its terrors. Then there is the character of the Mad Scientist in movies, magazines and comic books whose depiction mirrors public distrust of what technology is all about.
For all that, in today’s world, even the environmentalists do not call for a return to more idyllic times; rather they want a technological solution to the current crisis – for example, The Green New Deal. Future oriented movements like Accelerationism also call upon free-market capitalism to push change harder and harder rather than wanting to retreat to an earlier bucolic time.
The only voluble animosity towards science and technology comes from Donald Trump and his Republican spear carriers but theirs is opportunistic and dishonest, not something they actually believe in.
Futurology goes back at least to the 19th century with Jules Verne and his marvelous tales of submarines and trips to the moon. H.G. Wells too left an impressive body of work dealing with challenges that might be in the offing. In a different vein, there are the writings of Teilhard de Chardin, whose noosphere is a predictor of where the world wide web and social media might be taking us – one unified super-mind. In yet another style, there are the books of the Tofflers from the 1970s such as Future Shock, which among other things dealt with humanity’s struggle to cope with the endless change to daily life fueled by technology – change at such a speed as to make the present never quite real.
For leading technologist and futurologist Ray Kurzweil, for the Accelerationists and for most others, the vector of technological change has been free-market capitalism. Another vehicle of technological progress, to some a most important one, is warfare. Violence between groups is not new to our species. Indeed, anthropologists point out that inter-group aggression is also characteristic of our closest relatives, the chimpanzees – so all this likely goes way back to our common ancestor. The evolutionary benefit of such violence is a topic of debate and research among social scientists. The simplest and most simple-minded explanation is that the more fit, surviving males had access to more females and so more offspring. One measure of the evolutionary importance of fighting among males for reproductive success is the relative size of males and females. In elephant seals, where the males stage mammoth fights for the right to mate, the ratio is 3.33 to 1.0; in humans it is roughly 1.15 to 1.0 – this modest ratio implies that the simple-minded link between warfare and reproductive success cannot be the whole story.
Historically, the practice of war has hewed closely to developments in technology. And warfare, in turn, has made demands on technology. Indeed, even men of genius like Archimedes and Leonardo da Vinci developed weapons systems. However, the relationship between matters military and technology became almost symbiotic with WWII. Technological feats such as nuclear power, rockets, missiles, jet planes and the digital computer are all associated with the war efforts of the different powers of that conflict. Certainly, the fundamental research and engineering behind these achievements was well underway in the 1930s, but the war efforts determined priorities and thus which areas of technology received resources and funding, thereby creating remarkable concentrations of brilliant scientific talent. The Manhattan Project itself is studied as a model of large scale R&D; furthermore, the industrial organization of the war period and military operations such as countering submarine warfare gave rise to a new mathematical discipline, aptly called Operations Research, which is now taught in Business Schools under the name Management Science.
In his masterful treatise War in the Age of Intelligent Machines (1991), Manuel DeLanda summarizes it thusly: “The war … forged new bonds between the military and scientific communities. Never before had science been applied at so grand a scale to such a variety of warfare problems.”
Since WWII we have been in a “relatively” peaceful period. But the technological surge continues. Perhaps we are just coasting on the momentum of the military R&D that followed WWII – the internet, GPS systems, Artificial Intelligence, etc. However, military funding might be skewing technological progress today in less fruitful directions than capitalism or science-as-usual itself would. Perhaps this is why post WWII technological progress has fueled the growth of paramilitary surveillance organizations such as the CIA and NSA and perfected drones rather than addressing the environmental crisis.
Moreover, these new technologies are transforming capitalism itself: the internet, social media and big data have given rise to surveillance capitalism, the subject of a recent book by Harvard Professor Emerita Shoshana Zuboff, The Age of Surveillance Capitalism: our personal behavioral data are amassed by Alexa, Siri, Google, Facebook et al., analyzed and sold for targeted advertising and other feeds to guide us in our lives; this is only going to get worse as the internet of things puts sensing and listening devices throughout the home. The 18th Century Utilitarian philosopher Jeremy Bentham promoted the idea of the panopticon, a prison structured so that the inmates would be under constant surveillance by unseen guards – click HERE. To update a metaphor from French post-modernist philosopher Michel Foucault, with surveillance technology we have created our own panopticon, one in which we dwell quietly and willingly as our every keystroke, every move is observed. An example: as one researches work on machine intelligence on the internet, Amazon drops ads for books on the topic (e.g. The Sentient Machine) onto one’s Facebook page!
The futurologists and Accelerationists, like many fundamentalist Christians, await the coming of the new human condition – for fundamentalists this will happen at the Second Coming; for the others the analog of the Second Coming is the singularity – the moment in time when machine intelligence surpasses human intelligence.
In Mathematics, a singularity occurs at a point that is dramatically different from those around it. John von Neumann, a mathematician and computer science pioneer (who worked on the Manhattan Project), used this mathematical term metaphorically: “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” For von Neumann, the singularity will be the moment when “technological progress will become incomprehensibly rapid and complicated.” Like von Neumann, Alan Turing was a mathematician and a computer science pioneer; famous for his work on breaking the German Enigma Code during WWII, he is the subject of plays, books and movies. In 1951, Turing wrote “once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control … .” The term singularity was then used by Vernor Vinge in an article in Omni Magazine in 1983, a piece that develops von Neumann’s and Turing’s remarks further: “We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.” The concept of the singularity was brought into the mainstream by the work of Ray Kurzweil with his book entitled The Singularity Is Near (2005).
In his writings, Kurzweil emphasizes that technological development grows exponentially. The most famous example of exponential technological growth is Moore’s Law: in 1975, Gordon E. Moore, a founder of Intel, noted that the number of transistors on a microchip was doubling every two years even as the cost was being halved – and that this was likely to continue. Amazingly this prediction has held true into the 21st Century and the number of transistors on an integrated circuit has gone from 5 thousand to 1 billion: for a graph, click HERE. Another example of exponential growth is given by compound interest: at 10% compounded annually, your money will double in a little over 7 years and more than quadruple in 15 – and so on.
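As a quick numerical check – a sketch that simply plugs in the figures cited above (a chip of 5 thousand transistors in 1975 doubling every two years, money compounding annually at 10%) – one can tabulate the two kinds of exponential growth directly:
    # Two exponential curves, using the figures cited in the text.
    def transistors(year, start_year=1975, start_count=5_000, doubling_years=2):
        # Transistor count under a strict doubling-every-two-years assumption.
        return start_count * 2 ** ((year - start_year) / doubling_years)
    def compound(principal, annual_rate, years):
        # Value of principal compounded annually at the given rate.
        return principal * (1 + annual_rate) ** years
    print(f"{transistors(2010):,.0f} transistors")   # on the order of a billion
    print(round(compound(100, 0.10, 7), 2))          # ~194.87: doubled in a bit over 7 years
    print(round(compound(100, 0.10, 15), 2))         # ~417.72: more than quadrupled in 15
Linear growth adds the same amount each period; exponential growth multiplies, which is why a curve that looks flat for decades suddenly becomes a wall.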
Kurzweil argues that exponential growth also applies to many other areas, indeed to technology as a whole. Here his thinking is reminiscent of that of the French post-structuralist accelerationists Deleuze and Guattari, who also view humanity-cum-technology as a grand bio-physical evolutionary process. To make his point, Kurzweil employs compelling charts and graphs to illustrate that growth is indeed exponential (click HERE); because of this the future is getting closer all the time – advances that once would have been the stuff of science fiction can now be expected in a decade or two. So when the first French post-structuralist, post-modern philosophers began calling for an increase in the speed of technological change to right society’s ills in the early 1970s, the acceleration had already begun!
But what will happen as we go past the technological singularity? Mystère. More to come.

Accelerationism II

Accelerationism is a philosophical movement that emerged from the work of late 20th century disillusioned Marxist-oriented French philosophers who were confronted with the realization that capitalism can be neither controlled by current political institutions nor supplanted by the long-awaited revolution. For centuries now, the driving force of modernity has been capitalism, the take-no-prisoners social-economic system that produces ever faster technological progress with dramatic physical and social side-effects – the individual is disoriented; social structure is weakened; the present yields constantly to the onrushing future; “the center cannot hold.” However, for the accelerationists, the response is not to slow things down and return to a pre-capitalist past but rather to push capitalism to quicken the pace of progress so that a technological singularity can be reached, one where machine intelligence surpasses human intelligence and begins to spark its own development and that of everything else – at machine speed, as opposed to the clumsy pace of development today. This goal will not be reached if human events or natural disasters dictate otherwise; speed is of the essence.

Nick Land, then a lecturer in Continental Philosophy at the University of Warwick in the UK, picked up on the work in France and published an accelerationist landmark in 1992, The Thirst for Annihilation: Georges Bataille and Virulent Nihilism. Land builds on the work of the French eroticist writer Georges Bataille and emphasizes that Accelerationism does not necessarily predict a “happy ending” for humanity: all is proceeding nihilistically, without direction or value, and humanity can be but a cog in a planetary process of Spaceship Earth. Accelerationism is thus different from Marxism, Adventism, Mormonism, Futurism – all optimistic, forward-looking world views.

Land pushed beyond the boundaries of academic life and methodology. In 1995, he and his colleague Sadie Plant founded the Cybernetic Culture Research Unit (CCRU), which became an intellectual warren of forward thinking young people – exploring themes such as “cyberfeminism” and “libidinal-materialist Deleuzian thinking.” Though the CCRU had disbanded by 2003, alums of the group have stayed the course and publish regularly in the present day accelerationist literature. In fact, a collection of writings by Land himself has been published under the title Fanged Noumena, and today, from his aerie in Shanghai, he comments on things via Twitter. (In Kant’s philosophy, noumena, as opposed to phenomena, are the underlying essences of things to which the human mind does not have direct access.)

Accelerationists have much in common with the Futurist movement: they expect the convergence of computer technology and medicine to bring us into the “bionic age” where a physical merge of man and robot can begin with chip-implantation, gene manipulation and much more. Their literature of choice is dystopian science-fiction, particularly the cyberpunk subgenre: William Gibson’s pioneering Neuromancer has the status of scripture; Rudy Rucker’s thoughtful The Ware Tetralogy is required reading and Richard Morgan’s ferocious Market Forces is considered a minor masterpiece.

Accelerationism is composed today of multiple branches.

Unconditional Accelerationism (aka U/Acc) is the most free-form, the most indifferent to politics. It celebrates modernity and the wild ride we are on. It tempers its nihilism with a certain philosophical playfulness and its mantra, if it had one, would be “do your own thing”!

Left Accelerationism (aka L/Acc) harkens back to Marx as precursor: indeed, Marx did not call for a return to the past but rather claimed that capitalism had to move society further along until it had created the tools – scientific, industrial, organizational – needed for the new centralized communist economy. Even Lenin wrote (in his 1918 text “Left Wing” Childishness)

    Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.

So Lenin certainly realized that Holy Russia was nowhere near the level of industrialization and organization necessary for a Marxist revolution in 1917, but plunge ahead he did. Maybe that venerable conspiracy theory according to which Lenin was transported back to Russia from Switzerland by the Germans in order to get the Russians out of WWI has some truth to it! Indeed, Lenin was calling for an end to the war even before returning; with the October Revolution, and still in the month of October, Lenin proposed an immediate withdrawal of Russia from the war, which was followed soon after by an armistice between Soviet Russia and the Central Powers. All this freed up German and Austrian men and resources for the Western Front.

An important contribution to L/Acc is the paper by Alex Williams and Nick Srnicek (Manifesto for an Accelerationist Politics, 2013) in which they argue that “accelerationist politics seeks to preserve the gains of late capitalism while going further than its value system, governance structures, and mass pathologies will allow.” Challenging the conceit that capitalism is the only system able to generate technological change at a fast enough speed, they write: “Our technological development is being suppressed by capitalism, as much as it has been unleashed. Accelerationism is the basic belief that these capacities can and should be let loose by moving beyond the limitations imposed by capitalist society.” They dare to go boldly beyond earthbound considerations, asserting that capitalism is not able to realize the opening provided by space travel nor to pursue “the quest of Homo Sapiens towards expansion beyond the limitations of the earth and our immediate bodily forms.” The Left accelerationists want politics and the acceleration, both, to be liberated from capitalism.

Right Accelerationism (aka R/Acc) can claim Nick Land as one of its own – he dismisses L/Acc as warmed over socialism. In his frank, libertarian essay, The Dark Enlightenment (click HERE), Land broaches the difficult subject of Human Bio-Diversity (HBD) with its grim interest in biological differences among human population groups and potential eugenic implications. But Land’s interest is not frivolous and he is dealing with issues that will have to be encountered as biology, medicine and technology continue to merge and as the cost of bionic enhancements drives a wedge between social classes and racial groups.

This interest of the accelerationists in capitalism brings up a “chicken or egg” problem: Which comes first – democratic political institutions or free market capitalism?

People (among them the L/Acc) would likely say that democracy has been necessary for capitalism to develop, having in mind the Holland of the Dutch Republic with its Tulip Bubble, the England of the Glorious Revolution of 1688–89 which established the power of parliament over the purse, and the US of the Founding Fathers. However, 20th Century conservative thinkers such as Friedrich Hayek and Milton Friedman argued that free markets are a necessary precondition for democracy. Indeed, the case can be made that even the democracy of Athens and the other Greek city states was made possible by the invention of coinage by the neighboring Lydians of Midas and Croesus fame: currency led to a democratic society built around the agora/marketplace and commerce rather than the palace and tribute.

In The Dark Enlightenment, Land also pushes the thinking of Hayek and Friedman further and argues that democracy is a parasite on capitalism: with time, democratic government contributes to an ever growing and ever more corrupt state apparatus which is inimical to capitalism and its accelerationist mission. In fact, Land and other accelerationists put forth the thesis that societies like China and Singapore provide a better platform for the acceleration required of late capitalism: getting politics out of everyday life is liberating – if the state is well run and essential services are provided efficiently, citizens are free to go about the important business of life.

An historical example of capitalism in autocratic societies is provided by the German and Austro-Hungarian empires of the half century leading up to WWI: it was in this world that the link was made between basic scientific research (notably at universities) and industrial development that continues to be a critical source of new technologies (the internet is an example). In this period, the modern chemical and pharmaceutical industries were created (Bayer aspirin and all that); the automobile was pioneered by Karl Benz’ internal combustion engine and steam power was challenged by Rudolf Diesel’s compression-ignition engine. Add the mathematics (Cantor and new infinities, Riemann and new geometries), physics (Hertz and radio waves, Planck and quantum mechanics, Einstein and relativity), the early Nobel prizes in medicine garnered by Koch and Ehrlich (two heroes of Paul De Kruif’s classic book Microbe Hunters), the triumphant music (Brahms, Wagner, Bruckner, Mahler). Certainly this was a golden age for progress, an example of how capitalism and technology can thrive in autocratic societies.

Starkly, we are now in a situation reminiscent of the first quarter of the 20th Century – two branches of capitalism in conflict, the one led by liberal democracies, the other by autocratic states (this time China and Singapore instead of Germany and Austria). For Land and his school, the question is which model of capitalism is better positioned to further the acceleration; for them and the rest of us, the question is how to avoid a replay of the Guns of August 1914, all the pieces being ominously in place.

The Constitution – then and now

In the US, the Constitution plays the role of sacred scripture and the word unconstitutional has the force of a curse. The origin story of this document begins in Philadelphia in 1787 with the Constitutional Convention. Jefferson and Adams, then ambassadors to France and England respectively, did not attend; Hamilton and Franklin did; Washington presided. It was James Madison who took the lead in addressing the problem of creating a strong central government that would not turn autocratic. Indeed, Madison was a keen reader of the Roman historian Tacitus, who pitilessly described the transformation of Roman Senators into sniveling courtiers as the Roman Republic became the Roman Empire. Madison also drew on ideas of the Enlightenment philosopher Montesquieu and, in the Federalist Papers, he refined Montesquieu’s “separation of powers” and enunciated the principle of “checks and balances.”

A balance between large and small states was achieved by means of the Connecticut Compromise: a bicameral legislature composed of the Senate and the House of Representatives. As a buffer against “mob rule,” the Senators would be appointed by the state legislatures. The House, however, created the problem of computing each state’s population for the purpose of determining representation. The resulting Three-Fifths Compromise stipulated that 3/5ths of the slave population in a state would count toward the state’s total population. This in turn created the need for an electoral college to elect the president: in a direct popular vote the slave states would lose their three-fifths advantage, since enslaved African-Americans would not each be casting three-fifths of a vote!

In September 1787, a modest four page document (without mention of the word Democracy, without a Bill of Rights, without provision for judicial review but with guidelines for impeachment) was submitted to the states; upon ratification the new Congress was seated and George Washington became President in the spring of 1789.

While the Constitution is revered today, it is not without its critics – it makes it too hard to represent the will of the people to the point where the American electorate is one of the most indifferent in the developed world (26th out of 32 in the OECD, the bottom 20%). Simply put, Americans don’t vote!!

For example, the Constitution provides for an Amendment process that requires ratification by 3/4ths of the states. Today the vestigial Electoral College makes a vote for president in Wyoming worth roughly 1.7 times one in Delaware: both states have 3 electors, while Delaware’s population is about 1.7 times that of Wyoming. If you do more math, you’ll find that a presidential vote in Wyoming is worth 3.5 times one in Brooklyn and nearly 4 times one in California. Change would require an amendment; however, any 13 states can block one, and the 13 smallest states, with barely 4% of the population, would not find it in their interest to alter the current system.
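
For readers who want to redo the arithmetic, here is a small sketch; the populations are approximate 2020 census figures and the elector counts the post-2020 allocations, so the exact ratios should be taken as rough.

    # Residents per presidential elector, using approximate 2020 census figures
    # and post-2020 elector counts (treat the exact numbers as rough).
    STATES = {                      # state: (population, electoral votes)
        'Wyoming':    (   577_000,  3),
        'Delaware':   (   990_000,  3),
        'California': (39_538_000, 54),
    }
    weight = {name: population / electors for name, (population, electors) in STATES.items()}
    for name, residents_per_elector in weight.items():
        print(f"{name:<10} {residents_per_elector:>10,.0f} residents per elector")
    print(round(weight['Delaware'] / weight['Wyoming'], 1))     # ~1.7: a Wyoming ballot outweighs Delaware's
    print(round(weight['California'] / weight['Wyoming'], 1))   # ~3.8: and nearly 4x California's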

Another issue is term limits for members of Congress, something supported by the voters. It can be in a party’s interest to have senators and representatives with seniority so they can accede to powerful committee chairmanships; this is the old Dixiecrat strategy that kept Strom Thurmond in the Senate until he was over 100 years old – but then the root of the word “senator” is the Latin “senex” which does mean “old man.” The Constitution, however, does provide for a second way to pass an amendment: 34 state legislatures would have to vote to hold a constitutional convention; this method has never been used successfully, but a feisty group “U.S. Term Limits” is trying just that.

The Constitution leaves running elections to the states and today we see widespread voter suppression, gerrymandering, etc. The lack of federal technical standards gave us the spectacle of “hanging chads” in Florida in the 2000 presidential election and has people rightly concerned about foreign interference in the 2020 election.

Judicial review came about by fiat in 1803 when John Marshall’s Supreme Court ruled a section of an act of Congress to be unconstitutional – an action itself rather extra-constitutional, given that no such authority was set down in the Constitution! Today, any major law that passes can expect to face an interminable process of legal challenges. With the Supreme Court politicized the way it is, the most crucial decisions are thus regularly made by five unelected, high-church (four Catholics, one Catholic turned Episcopalian), male, ideologically conservative, elitist, lifetime appointees of Republican presidents.

The founding fathers did not imagine how powerful the judicial branch of government would become; in fact, Hamilton himself provided assurances in his influential tract Federalist 78 that the judiciary would always be the weakest partner. However, a recent (2008) malign example of how the Constitution does not protect against usurpation of power by the Supreme Court came in District of Columbia v. Heller, where over two hundred years of common understanding were jettisoned when the reference to “militia” in the 2nd amendment was declared irrelevant: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” What makes it particularly outrageous is that this interpretation was put forth as an example of “originalism,” where the semantics of the late 18th Century are to be applied to the text of the amendment; quite the opposite is true: Madison’s first draft made it clear that the military connection was the motivating one, to the point where he added an exclusion for pacifist Quakers:

    “The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.”

Note too that Madison implies in the original text and in the shorter final text as well that “the right to bear arms” is a collective military “right of the people” rather than an individual right to own firearms – one doesn’t “bear arms” to go duck hunting, not even in the 18th Century. As a result of the Court’s wordplay, today American children go to school in fear; the repeated calls for “thoughts and prayers” have become a national ritual – a sick form of human sacrifice, a reenactment of King Herod’s Massacre of the Innocents.

Furthermore, we now have an imperial presidency; the Legislative Branch is still separate but no longer equal: the Constitution gives only Congress the right to levy tariffs or declare war but, for some administrations now, the president imposes tariffs, sends troops off to endless wars, and governs largely by executive order. All “justified” by the need for efficient decision-making – but, as Tacitus warned, this is what led to the end of the Roman Republic.

Accelerationism I

The discipline of Philosophy has been part of Western Culture for two and a half millennia now, from the time of the rise of the Greek city states to the present day. Interestingly, a new philosophical system often arises in anticipation of new directions for society and for history. Thus the Stoicism of Zeno and Epictetus prepared the elite of the Mediterranean world for the emerging Roman imperium with its wealth and with its centralization of political and military power. The philosophy of St. Augustine locked Western Christianity into a stern theology which served as an anchor throughout the Middle Ages and then as a guide for reformers Wycliffe, Luther and Calvin. The philosopher Descartes defined the scientific method and the scientific revolution followed in Europe. Hegel and Marx applied dialectical thinking to human history and economics as the industrial revolution created class warfare between labor and capital. The logical philosophy of Gottlob Frege and Bertrand Russell set the stage for the work of Alan Turing and thence the ensuing computer software revolution.

Existentialism (with its rich literary culture of novels and plays, its cafés, its subterranean jazz clubs, its Gauloise cigarettes) steeled people for life in a Europe made absurd by two world wars, and it paved the way for second-wave feminism: Simone de Beauvoir’s magisterial work of 1949, The Second Sex (Le Deuxième Sexe), provided the existentialist rallying cry for women to take charge of their own lives: “One is not born a woman; one becomes a woman.” (On ne naît pas femme, on le devient.)

By the 1960s, however, French intellectual life was dominated by structuralism, a social science methodology which looks at society as very much a static field that is built on the persistent forms that characterize it. Even Marxist philosophers like Louis Althusser were now labeled structuralists. To some extent, structuralism’s influence was due to the brilliant writing of its practitioners, e.g. semiologist Roland Barthes and anthropologist Claude Levi-Strauss: brilliance was certainly required to interest readers in the mathematical structure of kinship systems such as matrilateral cross-cousin marriage – an algorithm to maximize genetic diversity employed by small population groups.

Today the intellectual movement which most resembles past philosophical beacons of the future is known as Accelerationism. As a philosophy, Accelerationism has its roots in France in the period after the May ’68 student and worker uprising. That uprising led to barricades and fighting in the streets of Paris and to the largest general strike in the history of Europe – all of which brought the government to the bargaining table. The students and workers counted on the left-wing leadership of the labor unions and the Marxist-oriented political parties to strike a deal for freedom and radical social progress that would lead to a post-capitalist world. Instead, this “leadership” was interested in more seats in parliament and incremental improvements – not any truly revolutionary change in society.

The take-away from May ’68 for Gilles Deleuze, Félix Guattari, Jean-François Lyotard and other post-structuralist French intellectuals was the realization that capitalism had proved itself once again too powerful, too flexible, too unstoppable; its dominance could not be challenged by society in its present form.

The paradoxical response in the 1970s then was to call for an acceleration of the development of technologies and other forces of capitalist progress to bring society as rapidly as possible to a new place. In their 1972 work Anti-Oedipus, Deleuze and Guattari put it this way: “Not to withdraw from the process, but to go further, to ‘accelerate the process’, as Nietzsche put it: in this matter, the truth is that we haven’t seen anything yet.” This then is the fundamental tenet of Accelerationism – push technology to get us to the point where it enables us to get out from under current society’s Iron Heel, something we cannot do now. What kind of technologies will be required for this or best suited for this and how this new world will emerge from them are, naturally, core topics of debate. One much discussed and promising (also menacing) technology is Artificial Intelligence.

Deleuze and Guattari extend the notion of the Oedipus complex beyond the nuclear family and develop schizoanalysis to account for the way modern society induces a form of schizophrenia that helps the power structure maintain the steady biological/sociological/psychological march of modern capitalism. Their Anti-Oedipus presents a truly imaginative and innovative way of looking at the world, a poetic mixture of insights fueled by ideas from myriad sources; as an example, they even turn to the Americans Ray Bradbury, Jack Kerouac, Allen Ginsberg, Nicholas Ray and Henry Miller and to immigrants to America Marshall McLuhan, Charles Chaplin, Wilhelm Reich and Herbert Marcuse.

In Libidinal Economy (1974), Lyotard describes events as primary processes of the human libido – again, “Freud on steroids.” It is Lyotard who coined the term post-modern, which has since been applied to other post-structuralists such as Michel Foucault and Jacques Derrida.

Though boldly original, Accelerationism is very much a child of continental thinking in the great European philosophical tradition, a complex modern line of thought with its own themes and conflicts: what makes it most conflicted is its schizophrenic love-hate relation to capitalism; what makes it most contemporary is its attention to the role played by new technologies; what makes it most unsettling is its nihilism, its position that there is no meaning or purpose to human life; what makes it most radical is its displacement of humanity from center-stage and its abandonment of that ancient cornerstone of Greek philosophy: “Man is the measure of all things.”

By the 1980s, the post-structuralist vision of a society in thrall to capitalism was proving prophetic. What with Thatcher, Reagan, supply-side economics, the surge of the income gap, dramatic reductions in taxes (income, corporate and estate), the twilight of the labor unions and the fall of the Berlin Wall, a stronger, more flexible, neo-liberal capitalism was emerging – a globalized, post-industrial, financial capitalism: deregulated, risk welcoming, tax avoiding, off-shoring, outsourcing, … . In a victory lap in 1989, political science professor Francis Fukuyama published The End of History?; in this widely acclaimed article, Fukuyama announced that the end-point of history had been reached: market-based Western liberal democracy was the final form of human government – thus turning Marx over on his head, much the way Marx had turned Hegel over on his head! So “over” was Marxism by the 1980s that Marxist stalwart André Gorz (friend of Sartre, co-founder of Le Nouvel Observateur) declared in his Adieux au prolétariat that the proletariat was no longer the vanguard revolutionary class.

With the end of the Soviet Union in 1991, in Western intellectual circles, Karl Marx and his theory of the “dictatorship of the proletariat” gave way to the Austrian-American economist Joseph Schumpeter and his theory of capitalism’s “creative destruction”; this formula captures the churning of capitalism which systematically creates new industries and new social institutions that replace the old – e.g. Sears by Amazon, an America of farmers by an America of city dwellers. Marx argued that capitalism’s contradictions and failures would lead to its demise; Schumpeter, closer to the Accelerationists, argued that capitalism has more to fear from its triumphs: ineluctably the colossal success of capitalism hollows out the social institutions and mores which historically nurtured capitalism such as the nuclear family, church-going and the Protestant Ethic itself. Look at Western Europe today with its precipitously low birth-rate where capitalism is triumphant but where church attendance is reduced to three events: “hatch, match and dispatch,” to put it the playful way Anglicans do. But all this is not all bad from the point of view of Accelerationism – capitalism triumphant should better serve to “accelerate the process.”

At this point entering the 1990s, we have a post-Marxist, post-structuralist school of Parisian philosophical thought that is the preserve of professors, researchers, cultural critics and writers. In fact at that point in time, the movement (such as it was) was simply considered part of post-modernism and was not yet known as Accelerationism.

However, in its current form, Accelerationism has moved much closer to the futurist mainstream. Science fiction is taken very seriously as a source for insights into where things might be headed. In fact, the term Accelerationist itself originated in the 1967 sci-fi novel Lord of Light by Roger Zelazny, where a group of revolutionaries wanted to take their society “to a higher level” through technology: Zelazny called them the “accelerationists.” But the name was not applied to the movement until much more recently, when it was so christened by Benjamin Noys, author of the critique Malign Velocities: Accelerationism and Capitalism (2014).

In today’s world, the work of futurist writer Ray Kurzweil and the predictions of visionary Yuval Harari intersect the Accelerationist literature in the discussion of the transformation of human life that is coming at us. So how did Accelerationism get out of the salons of Paris and become part of the futurist avant-garde of the English-speaking world and even a darling of the Twitterati? Affaire à suivre, more to come.

Ranked Choice Voting

In 2016, the State of Maine voted to apply ranked choice voting in congressional and gubernatorial elections and then in 2018 voted to extend this voting process to the allocation of its electoral college votes. Recently, the New York Times ran an editorial calling for the Empire State to consider ranked choice voting; in Massachusetts, there is a drive to collect signatures to have a referendum on this on the 2020 ballot. Ranked choice voting is used effectively in American cities such as Minneapolis and Cambridge and in countries such as Australia and Ireland. So what is it exactly? Mystère.

First let us discuss what it is not. In the UK and the US, elections are decided (with some exceptions) by plurality: the candidate who polls the largest number of votes is the winner even if this is not a majority. Although simple to administer, this can lead to unusual results. By way of example, in Maine in 2010, Republican Paul LePage was elected governor with 38% of the vote. He beat out the Independent candidate who won 36% and the Democratic candidate who won 19%.

One solution to the problems posed by plurality voting is to hold the vote in multiple rounds: if no one wins an absolute majority on the first ballot, then there must be more than two candidates, and the candidate with the fewest votes, Z say, is eliminated and everybody votes again; this time Z’s voters will shift their votes to their second choice among the remaining candidates. If no one gets a majority this time, repeat the process. Eventually, someone has to get a true majority.

Ranked choice voting is also known as instant-runoff voting: it emulates runoff elections but in a single round of balloting. If there are only two candidates to begin with, nothing changes – somebody will get a majority. Suppose there are 3 candidates – A, B and Z; then, on the ballot, each voter lists the 3 candidates in the order of that voter’s preference. First, a count is made of the number of first place votes each candidate received; if for one candidate that number is a majority, that candidate wins outright. Otherwise, the candidate with the fewest first place votes, say Z, is eliminated; now we add to A’s first place total the number of ballots that ranked Z first but listed A as second choice, and similarly for B. Now, except in the case of a tie, either A or B will have a clear majority and will be declared the winner. This gives the same result that staging a runoff between A and B would have yielded, but in one trip to the voting booth, where the voter ranks the candidates A, B, Z on the ballot rather than choosing only one.
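
Here is a minimal sketch of the tallying procedure just described, written in Python for any number of candidates; the ballots and candidate names are made up for illustration.

    from collections import Counter

    def instant_runoff(ballots):
        # Each ballot lists candidates in the voter's order of preference.
        # Repeatedly eliminate the candidate with the fewest first-place votes
        # until some candidate holds a majority of the continuing ballots.
        candidates = {c for ballot in ballots for c in ballot}
        while True:
            # Count each ballot for its highest-ranked surviving candidate.
            tally = Counter(
                next(c for c in ballot if c in candidates)
                for ballot in ballots
                if any(c in candidates for c in ballot)
            )
            leader, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()):
                return leader
            candidates.remove(min(tally, key=tally.get))   # eliminate the weakest

    # A made-up three-way race with 9 voters: A leads at first, B wins after Z is eliminated.
    ballots = [['A', 'B', 'Z']] * 4 + [['B', 'A', 'Z']] * 3 + [['Z', 'B', 'A']] * 2
    print(instant_runoff(ballots))   # prints 'B'

With the made-up ballots above, A leads on first preferences, but once Z is eliminated and Z’s ballots transfer, B wins – the runoff-in-a-single-trip behavior described above.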

There are other positive side-effects to ranked choice voting. For one thing, voter turnout goes up; for another, campaigns are less nasty and partisan – you want your opponents’ supporters to list you second on their ballots! One can also see how this voting system makes good sense for primaries, where there are often multiple candidates; for example, with the current Democratic field of presidential candidates, ranked choice voting would give voters a chance to express their opinion and rank a marginal candidate with good ideas first without throwing that vote away.

After the 2010 debacle in Maine (LePage proved a most divisive and most unpopular governor), the Downeasters switched to ranked choice voting. In 2018, in one congressional district, no candidate for the House of Representatives gathered an absolute majority on the first round, but a candidate who received fewer first place votes on that round won on the second round, when he caught up and surged ahead on the strength of the voters who made him their second choice. Naturally, all this was challenged by the losing side, but they lost in court. For elections, the U.S. Constitution leaves implementation to the states to carry out in the manner they deem fit – subject to Congressional oversight but not to judicial oversight. Per Section 4 of Article 1: “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, …”

Ranked voting systems are not new and have been a serious topic of interest to social scientists and mathematicians for a long time now – there is something mathematically elegant about the way you can simulate a sequence of runoffs in one ballot. Among them, there are the 18th Century French Enlightenment thinker, the Marquis de Condorcet, and the 19th Century English mathematician, Charles Lutwidge Dodgson, author of Dodgson’s Method for analyzing election results. More recently, there was the work of 20th Century mathematical economist Kenneth Arrow. For this and other efforts, Arrow was awarded a Nobel Prize; Condorcet had a street named for him in Paris; however, Dodgson had to take the pen name Lewis Carroll and then proceed to write Alice in Wonderland to rescue himself from the obscurity that usually awaits mathematicians.

The Third Person VIII: The Fall of Rome

St Augustine of Hippo (354-430) was the last great intellectual figure of Western Christianity in the Roman Empire. His writings on election and predestination, on original sin, on the theory of a just war and on the Trinity had a great influence on the medieval Church, in particular on St Thomas Aquinas; he also greatly influenced Protestant reformers such as John Wycliffe and John Calvin. Augustine himself was influenced by the 3rd century Greek philosopher Plotinus and the Neoplatonists, influenced to the point where he ascribed to them some awareness of the persons of the Trinity (Confessions VIII.3; City of God X.23).

After the sack of Rome in 410, Augustine wrote his Sermons on the Fall of Rome. In these episcopal lectures, he absolves Christians of any role in bringing about the event that plunged the Western branch of the Empire into the Dark Ages, laying all the blame on the wicked, wicked ways of the pagans. European historians have begged to differ, however. By way of example, in his masterpiece, both of English prose and of scholarship, The History of the Decline and Fall of the Roman Empire, the great 18th century English historian Edward Gibbon does indeed blame Christianity for weakening the fiber of the people, hastening the Fall of Rome.

In his work on the Trinity, Augustine followed the Nicene formulation and fulminated against the heretics known as Arians, who denied the divinity of Christ. Paradoxically, it was Arian missionaries who first reached many of the barbarian tribes invading the Empire, among them the Vandals. The Vandal horde swept from Spain eastward along the North African coast and besieged St. Augustine’s bishopric of Hippo (today Annaba in Algeria). Augustine died during the siege and did not live to see the sack of the city.

By the time of St Augustine, the place of the Holy Spirit in the theology of the Holy Trinity was secure in the Western Church. But in popular culture, the role of the Holy Spirit was minor: Jesus and Mary were always front and center along with God the Father. To complicate matters, there emerged the magnificent doctrine of the Communion of Saints: the belief that all Christians, whether here on Earth, in Purgatory or in Heaven, could communicate with one another through prayer. Thus, the faithful could pray to the many saints and martyrs who had already reached Heaven, and the latter could intercede with God Himself for those who venerated them.

This is the internet and social media prefigured. The doctrine has other modern echoes in Jung’s Collective Unconscious (an inherited shared store of beliefs and instincts) and in Teilhard de Chardin’s noosphere (a collective organism of mind).

The origins of this doctrine are a problem for scholars. Indeed, even the first known reference in Latin to the “Communio sanctorum” is ascribed to Nicetas of Remesiana (ca. 335–414), a bishop from an outpost of the Empire on the Danube in modern day Serbia, who included it in his Instructions for Candidates for Baptism. Eventually, though, the doctrine made its way into the Greek and Latin versions of the Apostles Creed.

The wording “I believe … in the Communion of Saints” in the Apostles Creed is now a bedrock statement of Christian belief. However, this doctrine was not part of the Old Roman Creed, the earlier and shorter version of the Apostles Creed that dates from the second and third centuries. It also does not appear in the Nicene Creed. The earliest references to the Apostles Creed itself date from 390 and the earliest extant texts referencing the Communion of Saints are later still.

One school of thought is that the doctrine evolved from St Paul’s teaching that Christ and His Christians form a single mystical body (Romans 12.4-13, 1 Corinthians 12). Another candidate is this passage in the Book of Revelation 5.8 where the prayers of the faithful are collected in Heaven:

      And when he had taken it, the four living creatures and the twenty-four elders fell down before the Lamb. Each one had a harp and they were holding golden bowls full of incense, which are the prayers of God’s people.

For an illustration from the Book of Hours of the Duc de Berry of St John imagining this scene as he wrote the Book of Revelation on the Isle of Patmos, click HERE.

The naïve view is that the doctrine of the Communion of Saints came from the ground up, from nascent folk Christianity where it helped to wean new converts from their native polytheism. Indeed, as Christianity spread, canonization of saints was a mechanism for absorbing local religious traditions and for making local martyrs and saintly figures recognized members of the Church Triumphant.

With the doctrine of the Communion of Saints, the saints in heaven could intercede for individual Christians with the Godhead; devotion to them developed, complete with hagiographic literature and a rich iconography. Thus, many of the most celebrated works of Christian art depict saints. Moreover, special devotions have grown up around popular patron saints such as Anthony (patron of lost objects), Jude (patron of hopeless cases), Jean-François Régis (patron of lacemakers), Patrick (patron of an island nation), Joan of Arc (patron of a continental nation), … .

The Holy Spirit, on the other hand, pops up in paintings as a dove here and there, at best taking a minor part next to John the Baptist and Jesus of Nazareth or next to the Virgin Mary and the Angel Gabriel. This stands in contrast with the Shekinah, the Jewish precursor of the Holy Spirit, who plays an important role in the Kabbalah and Jewish mysticism.

There is one area of Christianity today, however, where the Holy Spirit is accorded due importance: Pentecostal and Charismatic churches; indeed, the very word Pentecostal is derived from the feast of Pentecost where the Holy Spirit and tongues of fire inspired the apostles to speak in tongues. In these churches, direct personal experience of God is reached through the Holy Spirit and His power to inspire prophecy and insight. In the New Testament, it is written that prophecy is in the domain of the Holy Spirit – to cite 2 Peter 1:21:

      For no prophecy ever came by the will of man: but men spake from God, being moved by the Holy Spirit.

Speaking of prophecy, the Holy Spirit is not mentioned in the Book of Revelation, which is most surprising since one would think that apocalyptic prophecy would naturally be associated with the Holy Spirit. For some, this is one more reason that the Council of Rome (382) under Pope St Damasus I should have thought twice before including the Book of Revelation in the canon. There are other reasons too.

Tertullian, the Father of Western Theology, is not a saint of the Catholic Church – he defended a Charismatic-Pentecostal approach to Christianity, Montanism, which was branded a heresy. This sect had three founders, Montanus and the two sibyls, Priscilla and Maximilla; the sibyls would prophesy when the Holy Spirit entered their bodies. Alas, there are no classical statues or Renaissance paintings honoring Priscilla and Maximilla; instead the Church treated them as “seductresses” who, according to Eusebius’ authoritative 4th century Church History, “left their husbands the moment they were filled with the spirit.” No wonder then that Eusebius is known as the Father of Church History.

While we have no masterpieces depicting Priscilla or Maximilla, for a painting of the Sibyl at Delphi, click HERE.

For Tertullian and the Montanists, the Holy Spirit was sent by God the Son to continue the revelation through prophecy. Though steeped in Greek rationalism, Tertullian insisted on the distinction between faith and reason, on the fact that faith required an extra magical step: “I believe because it is absurd” – bolder even than Pascal. He broke with the main body of the Church, saying that the role given to the Holy Spirit was too narrow – a position shared by Pentecostal Christians today. In fact, the Holy Spirit is key to Pentecostalism, where the faithful are inspired with “Holy Ghost fire” and become “drunk on the Holy Spirit.” Given that this is the only branch of Western Christianity that is growing now as the others recede, it looks as though Tertullian was insightful and should have been listened to more carefully. Perhaps now, almost two millennia later, it is a good time for the Church of Rome to bring up the subject of his canonization. Doing so would go some way toward restoring the Holy Spirit to a rightful place in Christianity, aligning the Holy Spirit’s role with the continuing importance of the Shekinah in the Jewish tradition and restoring the spirit of early Christianity.

The Third Person VII: The Established Religion

During Augustus’ reign as Emperor of the Roman Empire, the Pax Romana settled over the Mediterranean world – with the notable exception of Judea (Palestine, the Holy Land). After the beheading of John the Baptist and the Crucifixion of Jesus of Nazareth, unrest continued leading to the Jewish-Roman Wars (66-73, 115-117, 132-135), the destruction of the Temple in Jerusalem (70) and the forced exile of many Jews. Little wonder then that the early Gentile Christians disassociated themselves from Judaism and turned to Greek philosophical models to develop their new theology.

And with God and the Logos (the Word) of Platonism and Stoicism, the Greco-Roman intellectual world was in some sense “ready” for God the Father and God the Son. Indeed, the early Christians identified the Logos with the Christ. In the prologue of the Gospel of St. John in the King James Bible, verses 1 and 14 read

    In the beginning was the Word, and the Word was with God, and the Word was God.

    And the Word was made flesh, and dwelt among us, (and we beheld his glory, the glory as of the only begotten of the Father,) full of grace and truth.

Christians who undertook the task of explaining their new religion to the Greco-Roman world were known as apologists, from the Greek ἀπολογία meaning “speech in defence.” Thus, in following up on the Gospel of John, Justin Martyr (100-165), a most important 2nd century apologist, drew on Stoic teaching to make Christian doctrine more approachable; in particular, he held that the Logos was present within God from eternity but emerged as a distinct actor only at the Creation – the Creation according to Genesis, that is. But while often referring to the Spirit, the Holy Spirit, the Divine Spirit and the Prophetic Spirit in his writings, Justin apparently never formulated a theory of the Trinity as such.

So from here, how did early Christians reach the elegant formulation of the doctrine of the Holy Trinity that is so much a part of Catholic, Protestant and Orthodox Christianity? Mystère.

The earliest surviving post-New Testament Christian writings that include the Holy Spirit, the Father and the Son together in a trinity identify the Holy Spirit with Wisdom/Sophia. In fact, the first Christian writer known to use the term trinity was Theophilos of Antioch, in about the year 170:

    the Trinity [Τριάδος], of God, and His Word, and His wisdom.

In his powerful Against Heresies, Irenaeus (130-202) takes the position that God the Son and God the Holy Spirit are co-eternal with God the Father:

    I have also largely demonstrated, that the Word, namely the Son, was always with the Father; and that Wisdom also, which is the Spirit, was present with Him, anterior to all creation,

In A Plea for Christians, the Athenian author Athenagoras (c. 133 – c. 190) wrote

    For, as we acknowledge a God, and a Son his Logos, and a Holy Spirit, united in essence, the Father, the Son, the Spirit, because the Son is the Intelligence, Reason, Wisdom of the Father, and the Spirit an effluence, as light from fire

Here the “Wisdom of the Father” has devolved onto God the Son and the Holy Spirit is described simply as emanating from the Father. On the one hand, it is tempting to dismiss this theological shift on the part of Athenagoras. After all, he is not considered the most consistent of writers when it comes to sophiology, matters of Wisdom. To quote Prof. Michel René Barnes:

    “Athenagoras has, scholars have noted, a confused sophiology: within the course of a few sentences he can apply the Wisdom of Prov. 8:22 to the Word and the Wisdom of Wisdom of Solomon 7:25 to the Holy Spirit.”

For the full text of Prof. Barnes’ interesting article, click HERE.

On the other hand, the view in A Plea for Christians took hold and going forward the Son of God, the Logos, was identified with Holy Wisdom; indeed, the greatest church of antiquity, the Hagia Sophia in Constantinople, was dedicated to God the Son and not to the Holy Spirit.

But Trinitarianism did not have the field to itself. For one thing, there was still Sabellianism, where Father, Son and Holy Spirit were just “manners of speaking” about God. The fight against Sabellianism was led by Tertullian – Quintus Septimius Florens Tertullianus to his family and friends. He was the first writer to use the term Trinitas in Latin; he is considered the first great Western Christian theologian and is known as the Father of the Latin Church. For Tertullian, a most egregious aspect of Sabellianism was that it implied that God the Father also suffered the physical torments of the cross, a heretical position known as patripassianism. Tertullian directly confronted this heresy in his work Contra Praxeas where he famously accused the eponymous target of his attack of “driving out the Holy Spirit and crucifying the Father”:

    Paracletum fugavit et patrem crucifixit

Tertullian developed a dual view of the Trinity, distinguishing between the “ontological Trinity” of one single being with three “persons” (Father, Son, Holy Spirit) and the “economic Trinity” which distinguishes and ranks the three persons according to each One’s role in salvation: the Father sends the Son for our redemption and the Holy Spirit applies that redemption to us. In the ontological Trinity, there is only one divine substance (substantia), which is shared by the three persons, so monotheism is maintained. Here Tertullian is using philosophy to underpin theology: his “substantia” is a Latin translation of the Greek philosophers’ term ουσία (ousia). Interestingly, Tertullian himself was very aware of the threat of philosophy infiltrating theology and he famously asked “What has Athens to do with Jerusalem?”

The Roman empire of the early Christian era was a cauldron of competing philosophical and religious ideas. It was also a time of engineering and scientific achievement: the invention of waterproof cement made great aqueducts and great domes possible; the Ptolemaic system of astronomy provided algorithms for computing the movements of the spheres (the advance that Copernicus made didn’t change the results but simplified the computations); Diophantus of Alexandria is known as the Father of Algebra; and so on. The level of technology developed at Alexandria in the Roman period was not reached again until the late Renaissance (per Fernand Braudel, the great historian of the Annales school).

In the 3rd century, neo-Platonism emerged as an updated form of Greek philosophy – updated in that this development in Greek thought was influenced by relatively recent Greek thinkers such as the neo-Pythagoreanists and Middle Platonists and likely by others such as the Hellenized Jewish writer Philo of Alexandria, the Gnostics and even the Christians.

The principal architect of neo-Platonism, Plotinus (204–270), developed a triad of the One, Intellect, and Soul, in which the latter two “proceed” from the One, and “are the One and not the One; they are the One because they are from it; they are not the One, because it endowed them with what they have while remaining by Itself” (Enneads, 85). All existence comes from the productive unity of these three. Plotinus describes the elements of the triad as three persons (hypostases), and describes their sameness using homoousios, a sharper way of saying “same substance.” From neo-Platonism came the concept of the hypostatic union, a meld of two into one which Trinitarians would employ to explain how Christ could be both God and man at the same time.

So at this point, the Trinitarian position had taken shape: very roughly put, the three Persons are different but they are co-eternal and share the same substance; the Son of God can be both God and man in a hypostatic union.

But Trinitarianism was still far away from a final victory. The issue of the dual nature of God the Son as both God and man continued to divide Christians. The most serious challenge to the Trinitarian view was mounted in Alexandria: the bishop Arius (c. 250-c. 336) maintained that the Son of God had to be created by the Father at some point in time and so was not co-eternal with the Father nor was the Son of the same substance as the Father; a similar logic applied to the Holy Spirit. Arianism became widely followed, especially in the Eastern Greek Orthodox branch of the Church, and lingered for centuries; in more recent times, Isaac Newton professed Arianism in some of his religious writings – these heretical documents were kept under wraps by Newton’s heirs for centuries and only resurfaced in 1936, when John Maynard Keynes purchased them at auction!

While the Empire generally enjoyed the Pax Romana, at the highest levels there were constant struggles for supreme power – in the end, who had the loyalty of the Roman army determined who would be the next Emperor. The story that has come down to us is that in 312, as Constantine was on his way to fight his last rival in the Western Empire, Maxentius, he looked up into the sky and saw a cross and the Greek words “Εν Τούτῳ Νίκα” (which becomes “In Hoc Signo Vinces” in Latin and “In this sign, you will conquer” in English). With his ensuing victory at the Battle of the Milvian Bridge, Constantine gained control over the Western Roman Empire. The following year, with the Edict of Milan, Christianity was no longer subject to persecution and would be looked upon benevolently by Constantine. For a painting of the cross in heaven and the sign in Greek by the School of Raphael, click HERE and zoom in to see the writing on the sign.

Consolidating the Eastern and Western branches of the Empire, Constantine became sole emperor in 324. Now that Christianity was an official religion of the Empire, it was important that it be more uniform in dogma and ritual and that highly divisive issues be resolved. To that end, in 325, Constantine convened a Council at Nicea (modern Iznik, Turkey) to sort out all the loose ends of the very diverse systems of belief that comprised Christianity at that time. One of the disagreements to settle was the ongoing conflict between Arianism and Trinitarianism.

Here the council came down on the side of the Trinitarians: God has one substance but three persons (hypostases); though these persons are distinct, they form one God and so are all co-eternal. There is a distinction of rank to be made: God the Son and God the Holy Spirit both proceed from God the Father. This position was formalized by the Council of Nicea and refined at the Council of Constantinople (381). In the meantime, with the Edict of Thessalonica in 380, Theodosius I officially made Christianity the state religion of the Empire.

Still, disagreements continued even among the anti-Arians. There is the interesting example of Marcellus of Ancyra (Ankara in modern Turkey), an important participant in the Council of Nicea and a resolute opponent of the Arians; Marcellus developed a bold view wherein the Trinity was necessary for the Creation and for the Redemption but, at the end of days, the three aspects (πρόσωπα prosopa but not ὑποστάσεις hypostases, persons) of the Trinity would merge back together. Marcellus’ position has a scriptural basis in St Paul’s assertion in 1 Corinthians 15:28:

    … then the Son himself will be made subject to him [God] who put everything under him [the Son], so that God may be all in all.

This view also harkens back somewhat to Justin Martyr – in fact, writings now attributed to Marcellus were traditionally attributed to Justin Martyr! So, in Marcellus’ view, in the end Christ and the Holy Spirit will return into the Father, restoring the absolute unity of the Godhead. This line of thought opened Marcellus to the charge of Sabellianism; he also had the misfortune of having Eusebius, the Father of Church History, as an opponent and his orthodoxy was placed in doubt. For a tightly argued treatise on this illustrative chapter in Church History and for a tour of the dynamic world of 4th century Christian theologians, there is the in-depth study Contra Marcellum: Marcellus of Ancyra and Fourth-Century Theology by Joseph T. Lienhard S.J.

The original Nicene Creed of 325 as well as the updated version formulated at the Council of Constantinople of 381 had both the Holy Spirit and God the Son proceeding from God the Father. Whether the Holy Spirit proceeds from the Son as well as from the Father is a tough question for Trinitarianism; in Latin, filioque means “and from the Son” and this phrase has been a source of great controversy in the Church. A scriptural justification for including the filioque in the Nicene Creed is found in John 20:22:

    And with that he [Jesus] breathed on them and said, “Receive the Holy Spirit …”

Is the filioque a demotion for the Holy Spirit vis-à-vis God the Son? Or is it simply a way of organizing the economic Trinity of Tertullian? In the late 6th century, Western churches added the term filioque to the Nicene Creed but the Greek churches did not follow suit; this lingering controversy was an important issue in the Great Schism of 1054 which led to the definitive and hostile breakup of the two major branches of Christendom. This schism created a fault line in Europe separating Orthodox from Roman Christianity that has endured until modern times. Indeed, it was the massive Russian mobilization in July 1914 in support of Orthodox Christian Serbia which led directly to World War I.

Brexit

Brexit is a portmanteau word meaning “British exit from the European Union.” The referendum on Brexit in the UK in 2016 was won with just under 52% of the votes cast, a tally representing less than 34% of the voting-age population. The process took place in the worst possible conditions – false advertising, fake news, dismal voter participation, demagogy and xenophobia. Brexit is yet another example of the dangers of mixing representative government with government by plebiscite – other examples include the infamous Proposition 13 in California, which turned one of the best public school systems in the nation into one of the worst, and the vote for independence in Québec which, with a simple majority, would have torn Canada apart. The problem is that these simple-majority referenda can amount to a form of mob rule.

The two components of the United Kingdom that will be most negatively affected by Brexit are the Celtic areas of Scotland and Northern Ireland; both voted to stay in the European Union – 62% and 55.8% respectively. Brexit will push Scotland toward independence, risking the breakup of the UK itself. The threat of Brexit has already reignited the tensions of The Troubles in Northern Ireland, and its enactment risks bringing back violence and bloodshed.

We learn in school about English imperialism and colonialism – how the sun never sets on the British Empire and all that. Historically, the first targets of English imperialism were the Celtic peoples of the British Isles – the Welsh, the Scots and the Irish.

Incursion into Welsh territory began with William the Conqueror himself in 1081 and by 1283 all Wales was under the control of the English King Edward I – known as Edward Longshanks for his great height for the time (6’2”).

A series of 13 invasions into Scotland began in 1296 under that same Celtophobe English king, Edward I, who went to war against the Scottish heroes William Wallace (Mel Gibson in Braveheart) and Robert the Bruce (Angus Macfadyen in Robert the Bruce). So memorable was Edward’s hostility toward the Scots that on his tomb in Westminster Abbey is written

    Edwardus Primus Scottorum malleus hic est, pactum serva
(Here is Edward I, Hammer of the Scots. Keep the Faith)

In that most scholarly history of England, 1066 and All That, there is a perfect tribute to Edward I – a droll cartoon of him hammering Scots.

Edward I didn’t only have it in for the Welsh and the Scots but for Jews as well: in 1290 he issued the Edict of Expulsion, by which Jews were expelled from Merry England. His warmongering caught up with him though: Edward died in 1307 during a campaign against Robert the Bruce – though not of a surfeit but rather of dysentery.

The series continued until 1650 with an invasion led by Oliver Cromwell. To his credit, however, the Lord Protector (Military Dictator per Winston Churchill) revoked Edward’s Edict of Expulsion in 1657.

The story of English aggression in Ireland is even more damning. It started with incursions beginning in 1169 and the full-scale invasion launched by King Henry II in 1171. Henry (Peter O’Toole in The Lion in Winter and in Becket) had motivation beyond the usual expansionism for this undertaking: he was instructed by Pope Adrian IV by means of the papal bull Laudabiliter to invade and govern Ireland; the goal was to enforce papal authority over the too autonomous Irish Church. Adrian was the only English pope ever and certainly his motives were “complex.” His bull was a forerunner of the Discovery Doctrine of European and American jurisprudence which justifies Christian takeover of native lands (click HERE). English invasions continued through to full conquest by Henry VIII, the repression of rebellions under Elizabeth I, and the horrific campaign of Oliver Cromwell.

There followed the plantation system in Ulster (six of whose counties would later become Northern Ireland) and a long period of repressive government under the Protestant Ascendancy. The Irish Free State was only formed in 1922 after a prolonged violent struggle and at the price of partition of the Emerald Isle into Northern Ireland and the Free State; the modern Republic of Ireland only dates from 1949.

The “low-level war” known as The Troubles that began in the 1960s was triggered by the discrimination against Catholics that the English-backed regime in the Ulster parliament maintained. This kind of discrimination was endemic: according to memory, no Catholic was hired to work on the building of the Titanic in the Belfast shipyards; according to legend, the Titanic had “F__ the Pope” written on it; according to history, blasphemy does not pay. The Troubles were a violent and bitter period of conflict between loyalists/unionists (mainly Protestants who wanted to stay in the UK) and nationalists/republicans (mainly Catholics who wanted a united Ireland). The Troubles finally ended with the Good Friday Agreement of 1998, an accord made possible by joint membership in the EU, which made Northern Ireland and the Republic part of a larger political unit and which, for all practical purposes, ended the frontier separating them – a frontier that until then was manned by armed British soldiers.

Economically and politically, the EU has been good for the Irish Republic and it has become a prosperous, modern, Scandinavian-style European country – indeed, Irish-American New York Times writer Timothy Egan titled his July 20th op-ed “Send me back to the country I came from.”

Scotland too benefits from EU membership, from infrastructure investments, worker protection regulations and environmental standards – things dear to the socially conscious Scots.

All this history makes the Brexit vote of 2016 simply amoral and sadistic. To add to that, the main reason Theresa May’s proposal for a “soft Brexit” with a “backstop” was repeatedly shot down was its customs-union clause, which would have forestalled the closing of the border in Northern Ireland. On the other hand, reversing the process because of the harm it would inflict on peoples who have been victims of British imperialism over the centuries would have been a gesture of Truth and Reconciliation by the English. Alas, Brexit is now a fact, Boris Johnson having won a majority in Parliament with less than 50% of the vote – the English rotten boroughs are the U.K. analog of the U.S. Electoral College.

The Third Person VI: The Pax Romana

From the outset in the New Testament, the Epistles and Gospels talk of “the Father, the Son and the Holy Spirit.” God the Father came from Yahweh of the Hebrew Bible; the Son was Jesus of Nazareth, an historical figure. For the Holy Spirit, things are more complicated. For sources, there are the Dead Sea Scrolls of the Essenes with the indwelling universal presence of the Holy Spirit; there is the Aramaic-language literature of the Targums with the Lord’s Shekinah, who stands in for Him in dealing with the material world and who enables prophecy by humans; and there is the Wisdom literature, such as the Wisdom of Solomon, where Sophia provides a feminine divine presence.

As the Shekinah becomes an independent deity in the Kabbalah, so in the New Testament the Holy Spirit is a full-fledged divine actor. Like the Shekinah, the Holy Spirit’s announced role is to represent the Godhead in the material world, to guide the lives of the believers and to inspire their prophesying.

The New Testament is written in Greek, not Aramaic and not Hebrew. In the first century, the leadership of the nascent Christian community quickly passes from the apostles and deacons in the Holy Land to the Hellenized Jewish converts of the Diaspora and the Greek speaking gentiles of the Roman Empire. Traditional Jewish practices such as male circumcision are dropped, observing the Law of Moses is no longer obligatory and the Sabbath is moved to Sunday, the day of rest of the Gentiles.

But now God has become three – the Father, the Son, the Holy Spirit. Monotheism is definitely in peril here. Add to that the Virgin Mary together with the Immaculate Conception and the Assumption and you have four divinities to deal with.

Indeed, the early Christians assigned to Mary functions once assumed in the world of Biblical Palestine by Asherah, the Queen of Heaven, and then by the Shekinah of Jewish lore. Christianity would thus not suffer from the unnatural absence of a feminine principle as did the Judaism of the Pharisees of the Temple. However, despite accusations of Mariolatry, Christianity has never deified the Mother of God, only canonized her. Even so, that still leaves us with three divinities where once there was one.

When it comes to the messianic role of Jesus, the New Testament writers do strive to calibrate their narratives with the prophecies and pronouncements of the Hebrew Bible. The situation is different when it comes to the Holy Spirit. In Acts 2, the Holy Spirit descends upon the apostles in the form of tongues of fire when they are assembled in Jerusalem for the Shavuot holiday (aka The Feast of Weeks). This holiday takes place on the fiftieth day after Passover; it celebrates both the spring harvest and the day that God gave The Torah to Moses and the nation of Israel. This is a link to the Essenes’ doctrine where this feast was a special time of connection between the believer and the indwelling Holy Spirit. But that link is not made clear in Acts and even the origin of that Jewish feast is obscured by the fact that the New Testament gives it the Greek name of Pentecost, simply meaning “fifty.”

In general, in the New Testament, there is no explicit association of the Holy Spirit with the Shekinah or with Wisdom/Sophia. Moreover, the way the early Christians handled this complex situation involving the Holy Spirit would not be to go back to the practices of folk or formal Judaism, or to the Essene scrolls or to the Hebrew scriptures to sort it out; rabbinical sources such as the Talmud would not be consulted; Aramaic language sources such as the Targums would not be mined. Rather, in dealing with the Holy Spirit and with the charge of polytheism, the Gentile Christians would follow the lead of Greek philosophy and formulate their theology in a way that made Christianity intellectually reputable in Greek cultural terms.

With such an important role in the New Testament, the Holy Spirit becomes theologically significant in Christianity – but, in terms of the beliefs and practices of the faithful, the Holy Spirit becomes something of a silent partner. The feminine roles of the Shekinah and of Wisdom/Sophia are taken over by Mary, the Mother of Jesus; the earliest Christian writers place the Holy Spirit in the Godhead with God the Father and God the Son, identifying the Spirit with Wisdom/Sophia; but later Wisdom becomes identified with the Son of God. In the end, for the faithful, the role of the Holy Spirit as indwelling individual guide would be usurped by patron saints and guardian angels; for the Holy Spirit, the only substantial role left is to round out the Holy Trinity.

As the war with Cleopatra and Mark Antony comes to an end, Augustus becomes the Roman Emperor and a (relative) peace that would last some two hundred years, the Pax Romana, leads to accelerated commercial and cultural exchange throughout the Mediterranean world. Indeed, the cultural world of the Roman Empire is in full ebullition. As the Pax Romana has facilitated the spread of Christian ideas throughout the empire, so too it has provided a platform for competing philosophies, theologies and mystical practices of many sorts. So encounters with developments in Greek and Roman philosophy, with the flow of new ideas from Messianic Judaism and alternative Jewish/Christian groups, with eastern religions, with mystery religions and on and on would lead to difficult theological arguments and would drive centrifugal forces within the Christian movement itself, leading to almost countless heresies to be denounced.

Given the different theological roles of the Father, Son and Holy Spirit, the early Christians were naturally accused of polytheism. To counter this, the simplest position was called Sabellianism or monarchianism or modalism – Father, Son and Holy Spirit are just manners of speaking, façons de parler, to describe God as He takes on different roles. However, this view came to be condemned multiple times as a heresy by Church authorities, basically because it implies that God the Father had to somehow endure the pain of the crucifixion.

So alternative solutions were proposed. The adoptionist position was that God the Son was a human elevated to the rank of Son of God, the Holy Spirit also being a creation of God the Father. In the form of Arianism, where God the Son is not co-eternal with God the Father but was begotten by the Father at some point in time, this kind of position stayed current in Christianity for centuries. The position that eventually emerged victorious was Trinitarianism: “three co-equal persons in one God.” This last phrase seems straightforward enough today, but a proper parsing requires some explanation. For one thing, this formulation does not come from the Hebrew scriptures, the Targums or rabbinical sources. To be fair, there is one place in the Hebrew Bible where God does appear as a threesome: in Genesis 18, the Lord visits Abraham to announce that his wife Sarah shall bear a child. The first two verses are

       The LORD appeared to Abraham near the great trees of Mamre while he was sitting at the entrance to his tent in the heat of the day. Abraham looked up and saw three men standing nearby. When he saw them, he hurried from the entrance of his tent to meet them and bowed low to the ground.

This was taken as a pointer to the Trinity by some early Christian writers (St. Augustine among them) but hardly suffices as an explanation of the evolution of the doctrine. So the development of the theology of the Holy Trinity is still a mystère of its own.

In the Greco-Roman world of the time of Christ, there was a view, originating with Plato and taken up by the Stoics, that prefigured the Christian Trinity: there was an abstract, non-material God from all eternity from whom came the Logos (the Word of God), who was responsible for the Creation of the material universe. So already from the Greek philosophical world, we have the idea of a dyadic Godhead.

This cosmogony infiltrated the Hellenized Jewish milieu as well. In a kind of last attempt to reconcile Jewish and Greek culture in the Hellenistic world, a contemporary of Jesus of Nazareth, Philo of Alexandria (aka Philo Judaeus), developed an entire philosophy complete with a trinity: there was Yahweh of the ineffable name, the Wisdom/Sophia of the Wisdom of Solomon, and the Logos (the Word) of Plato, who was responsible for the actual creation of the physical universe. Philo wrote (Flight and Finding XX (108, 109))

    because, I imagine, he [the Logos, the Word of God] has received imperishable and wholly pure parents, God being his father, who is also the father of all things, and wisdom being his mother, by means of whom the universe arrived at creation

The theology of the Holy Spirit is called pneumatology from pneuma the Greek word for spirit (or breath or wind). On the Christian side of the fence then, Philo’s view – where the Holy Spirit takes on the role of queen consort of El/Yahweh – is known as consort pneumatology.

Another movement with roots in the Jewish/Christian world of Alexandria that impacted early Christianity and Rabbinical Judaism was Gnosticism.  This topic is certainly worth an internet search. Very, very simply put, at its core there was the belief that God, the Supreme Being, is unknowable, that the material world was created by a lesser figure known as a demiurge, that the material world is evil in itself and that only knowledge (gnosis in Greek) coming from God can lead to salvation. At its root, Gnosticism is based on a dualism between the forces of good and the forces of evil, between God and lesser deities. Gnosticism was an important movement in antiquity with an impact on Christianity, Judaism and Islam. The battle between St Michael and Lucifer in the Book of Revelation can be understood in Gnostic terms. Furthermore, Gnosticism contributed to the elaboration of the Kabbalah and Gnostic strains abound in the Quran. It gave rise to religious systems such as Manichaeism (with its competing forces of good and evil).

Manichaeism’s last stand in the Western Christian world was the Cathar (Albigensian) movement of the South of France in the Middle Ages. To aid in the destruction of Cathar civilization, with its indigent holy men and holy women and its troubadour poets, Pope Innocent III created the first Papal Inquisition and in 1209 had the French king launch a horrific crusade against them. For his part, in imitation of the Cathars, St. Dominic founded the mendicant order of the Dominicans, but then it was this same order that led the Inquisition’s persecution of the Cathars – all this is bizarrely celebrated in the pop hit “Dominique” by Soeur Sourire, the Singing Nun:

    “Dominique … combattit les Albigeois”

For the vocal, click HERE.

The Mandaeans are a Gnostic, dualistic sect that is still active in Iraq. One of the many terrible side effects of the War in Iraq is the oppression (approaching ethnocide) of religious groups such as the Chaldean Christians, the Yazidis and the Mandaeans. The Mandaeans numbered some 60,000 in 2003 but, with the war and Islamic extremism, many have fled and their numbers are down to an estimated 5,000 today. When you add to this how conflict in the Middle East has led to the end of the once thriving Jewish communities of Mesopotamia, we are witnessing a terrible loss of religious diversity akin to the disappearance of species.

For the history of Christian Gnosticism in the early Christian era, we have the writings of their opponents and texts known as the Gnostic Gospels. Again very simply put, to the Gnostic scheme these Christians add that the Supreme Being sent Christ to bring humans the knowledge (gnosis) necessary for redemption. Concerning the Holy Spirit, we know from the Gnostic Gospel of St. Philip that theirs, like Philo’s, was a consort pneumatology. Some of the names of these Gnostic Christians pop up even today; for example, Jack Palance plays one of them, Simon Magus, in the 1954 movie The Silver Chalice. Being portrayed by Jack Palance certainly means you are villain enough, but one can add to that the fact that the sin of simony (selling holy offices) is named for Simon Magus (Acts 8:18).

So with all these competing philosophies and theologies – from within Christianity and from outside Christianity – to contend with, just how did trinitarianism emerge as the canonical position? Mystère.

The Drums of War

The history of civilization was long taught in schools as the history of its wars – battles and dates. Humans are unique this way: male animals fight amongst themselves for access to females but leave each other exhausted and maybe wounded but not dead. What cultural or biological function does war among humans actually have?

In his classic dystopian novel 1984, Orwell described a nation committed to endless war, the situation the US finds itself in today. In the book, the purpose of these wars is to control the population with patriotic rallies and surveillance, to cover up failings of the leadership and to get rid of excess industrial production. Some would also point to technologies first developed for the military, such as the Internet, whose powerful side effects contribute to the headlong plunge into an Orwellian future.

Our defense industry today is perfectly suited for the third task Orwell lists: after all, it makes things that destroy themselves. This industry is so dominated by a small number of giant companies that they can dictate costs and prices to the Pentagon, knowing the military budget will bloat to oblige them. This military-industrial complex lives outside the market-based capitalist system; the current move to merge Raytheon and United Technologies will be one more step in concentration of this oligopoly, one that does not brook competition. We can’t say that Eisenhower didn’t warn us.

But the human price of war is very, very high. So how does a modern nation-state structure its sociology to enable it to endure wars? For one point of view, the French feminist author Virginie Despentes puts it this way: with the citizens’ army, men have a “deal” – be willing to fight the nation’s wars in exchange for a position of prestige in society. With this setup, the position of women is made subordinate to that of men.

However, this compact is being eroded today: the US has a professional army, as do France, England, Germany et al. Moreover, the professional American army recruits women to bolster the level of IQ in the military, which is so important for today’s technology-based warfare! The erosion (in the US since the end of the Vietnam-era draft) of the citizens’ army as a source of power for men in society could be a factor in the rise of feminist activism.

The irony is that while endless wars continue, the world is actually less violent today than it has been for a long time now. According to Prof. Steven Pinker, author of The Better Angels of Our Nature, the rate of death in war has fallen by a factor of 100 over a span of 25 years. The wars of today just do not require the great conscript armies of the world wars such as the massive force of over 34 million men and women that the Soviet Union put together in WWII while suffering casualties estimated to be as high as 11 million. But at least no one has yet called for a return to major wars in order to “right things” for men in Western societies.

For its part, the US has been at war constantly since the invasion of Afghanistan in 2001 and, indeed, since December 1941 but for some gaps – very few when you include covert operations such as support for Saddam Hussein during the decade-long Iran-Iraq War of the 1980s and the not-so-covert downing of an Iranian civil airliner with 290 people on board by the USS Vincennes in 1988. For more in the Reagan era, add the Iran-Contra scandal, where the US backed counter-revolutionaries attempting to overthrow the democratically elected government in Nicaragua (carried out in flagrant violation of US law but, have no fear, all convicted perpetrators were pardoned by G.H.W. Bush). However, the armed forces are no longer a citizens’ army but rather a professional force in the service of the US President, Congress having given up its exclusive right to declare war (same for tariffs). This arrangement distances the wealthy and members of the government from war itself and insulates them and most of the population from war’s human consequences.

In addition to serving Orwell’s purposes, US wars have consistently been designed to further the interests of corporations – Dick Cheney, Halliburton and Iraq; multiple incursions in Central America and the Caribbean to benefit United Fruit; the annexation of Hawaii to benefit the Dole Food Company; shipping interests and a convenient revolution in Colombia to create Panama in order to build the canal. As another such example, the US-Cuba conflict stems from the Cuban nationalization of United Fruit plantations and the expropriation of hotels and gambling operations belonging to Meyer Lansky and the Mafia back in 1959. From there things escalated to the Cuban Missile Crisis of 1962 and on to the current bizarre situation.

As the forever wars endure, on the home front the military are obsequiously accorded veneration once reserved for priests and ministers. Armistice Day, which celebrated the end of a war, is now Veterans Day, making for two holidays honoring the armed forces, Memorial Day in the spring and Veterans Day in the fall. Sports events are routinely opened by Marine Color Guards accompanied by Navy jet flyovers. In fact, the military actually pays the National Football League for this sort of pageantry designed to identify patriotism with militarism.

The courage and skill of US armed forces members in the field are exemplary. But what makes all this veneration for the military suspect is how little success these martial efforts have had. Let’s not talk about Vietnam. The first Iraq War (Iraq I) did drive the Iraqis out of Kuwait, but all that was made necessary by the U.S. ambassador’s giving Saddam Hussein an opening to invade in the first place – Iraq I also failed to remove the “brutal dictator” Saddam, a misstep which later became a reason for Iraq II. The war in Afghanistan began with the failure to capture Osama Bin Laden at Tora Bora, and today the Taliban are as powerful as ever and the poppy trade continues unabated. Iraq II led to pro-Iranian Shiite control of the government, Sunni disaffection and ISIS. Any progress in Syria or Iraq against ISIS has been spearheaded by the Kurds, allies whom the US has thrown under the bus lest it incur the wrath of the Turks. The Libya that NATO forces bombed for 7 months in 2011 is now a failed state.

There are 195 countries in the world today. The US military is deployed in 150 of them. The outsized 2018 US military budget of some $700 billion was larger than the sum of the next seven largest military budgets in the world. Von Clausewitz famously wrote that war is an extension of diplomacy. But military action has all but replaced diplomacy for the US – “if you have a hammer, everything looks like a nail.” For budget details, click HERE.

Recently, the US was busy bombing Libya and pacifying Iraq. Right now the US is involved in hostilities in Afghanistan, Syria and Yemen. Provocations involving oil tankers in the Gulf of Oman appear to be leading to armed conflict with Iran; this is all so worrisomely reminiscent of WMDs and so sadly similar to the fraudulent claim of attack in the Gulf of Tonkin that led to the escalation of the Vietnam War (as revealed by the Pentagon Papers, McNamara’s memoirs and NSA documents made public in 2005).

London bookmakers are notorious for taking bets on American politics. Perhaps they could also take bets on where the US is likely to invade next. Boots on the ground in Yemen? Or would the smart money instead be on oil-rich Libya with its warlords and the Benghazi incident? But Libya was already bombed exhaustively – interestingly, right after Gaddafi tried to establish a gold-based pan-African currency, the dinar, for oil and gas transactions. Others would bet on Iran since the drums of war have already started beating and the unilateral withdrawal of the US from the Iran nuclear deal does untie the President’s hands; moreover, and ominously for the Iranians, Iran just dropped the dollar as its exchange currency. Also, invading the Shiite stronghold would ingratiate the US with Sunni ally Saudi Arabia (thus putting the US right in the middle of a war of religion); then too, it could please Bibi Netanyahu, who believes attacking Persia would parallel the story line of the Book of Esther. What about an invasion of Iran simply to carve out an independent Kurdistan straddling Iran and Iraq – to make it up to the Kurds? And weren’t American soldiers recently killed in action in Niger? And then there’s Somalia. Venezuela next? Etc.

As Pete Seeger sang, “When will they ever learn?”

The Third Person V: The Redacted Goddess

From the Hebrew Bible itself, it is clear that Canaanite polytheism persisted among the Israelites throughout nearly all the Biblical period; this is attested to by the golden calf, by the constant re-appearances of Baal, by the lamentations and exhortations of the prophets, etc. What recent scholarship has brought to the fore, however, is that the female Canaanite goddess Asherah was also an important part of the polytheism of the Israelites and their world.
In addition to more rigorous readings of the Hebrew Bible by historians and translators, modern archaeological scholarship has done much to fill in the picture of Asherah’s importance in Biblical Palestine. Her connection with the Shekinah and the Shekinah’s persistence over a long time in Judaism have also been developed by historians like Raphael Patai, notably with his seminal study The Hebrew Goddess (1967).

In the Hebrew Bible, there are those multiple references to Baal and to other pagan gods – is Asherah among them? Mystère.

On her own, Asherah is actually referenced some 40 times in the Hebrew Bible in the pluralized form Asherim, but her presence has been covered over by editorial sleights of pen.

There are references to Asherah in the First Book of Kings, Chapter 11, where Solomon indulges in idolatry and builds worship sites for her – in this text she is invoked as the goddess Ashtoreth of the Zidonians (a rival Canaanite group). Indeed, he built altars for multiple gods and goddesses; Solomon, it appears, was working overtime to indulge his many foreign wives and concubines.

But the term Asherim could refer both to Asherah herself (as with Elohim and El) and to eponymous objects associated with her cult, in particular to shrines under trees known as Asherah Trees and wooden figures known as Asherah Poles. The trick of the interpreters and translators of the Hebrew Bible has been to systematically render Asherim as “wooden poles” or “wooden groves” or simply as the anonymous “groves.”

The Septuagint translation of the Hebrew Bible (3rd and 2nd centuries BC) into Greek religiously follows this practice and this trick was perpetuated both by St. Jerome in his Latin translation and by the authors of the King James Bible.

For example, in the Hebrew Bible, in Judges 3:7 we have a reference to Baal and Asherah; in the New King James Bible (1982), the Hebrew text is translated as

    So the children of Israel did evil in the sight of the LORD. They forgot the LORD their God, and served the Baals and Asherahs.

While in the classic King James version, we have

    And the children of Israel did evil in the sight of the LORD, and forgat the LORD their God, and served Baalim and the groves.

The Catholic Douay-Rheims Bible follows the Latin Vulgate of St. Jerome and it too renders “Asherim” as “groves” in this verse and elsewhere.

In 1 Kings, worship of Asherah was encouraged at the court of King Ahab by his queen Jezebel, which led the prophet Elijah to rail against the presence of prophets of Baal and Asherah there: the New King James translation of 1 Kings 18:19 reads

    Now therefore, send and gather all Israel to me [Elijah ] on Mount Carmel, the four hundred and fifty prophets of Baal, and the four hundred prophets of Asherah, who eat at Jezebel’s table.

Again, note that this is a new translation; the original King James reads

    Now therefore send, and gather to me [Elijah] all Israel unto mount Carmel, and the prophets of Baal four hundred and fifty, and the prophets of the groves four hundred, which eat at Jezebel’s table.

For a picture of Jezebel, Ahab and Elijah getting together, click HERE.

Asherah was even present in the Temple in Jerusalem – statues of her were erected there during the time of King Manasseh (2 Kings 21:7) but then smashed during the reign of his grandson, the reformer King Josiah (2 Kings 23:14). This outbreak of iconoclasm is given in the New International Version as

    He [Josiah] also tore down the quarters of the male shrine prostitutes that were in the temple of the LORD, the quarters where women did weaving for Asherah.

Which can be contrasted with the King James text

    And he brake down the houses of the sodomites, that were by the house of the LORD, where the women wove hangings for the grove.

Here too it is only modern translations of the Hebrew text that bring Asherah out into the open.

Josiah’s reform did not long survive his reign, as the following four kings “did what was evil in the eyes of Yahweh” (2 Kings 23:32, 37; 24:9, 19). It is not clear just what those evils were – but they must have been pretty bad to earn a reference in the Bible.

Monotheism was thus slow to become dominant among the population. Right up to the destruction of Solomon’s Temple in Jerusalem by the Babylonians and the Babylonian Captivity, we have Biblical references to idolatry among the Israelites and, in particular, to the worship of Asherah. Indeed, one of Asherah’s titles was Queen of Heaven and this is how she is referred to in Jeremiah 7:18 when the prophet is lamenting the Israelites’ continuing idolatry in the period just before the destruction of the Temple:

    The children gather wood, and the fathers kindle the fire, and the women knead their dough, to make cakes to the queen of heaven, and to pour out drink offerings unto other gods, that they may provoke me to anger.

After the exile in Babylon was brought to an end by the pro-Israelite Persian King Cyrus the Great, the construction of the 2nd Temple in Jerusalem began and was completed in 515 BC; it is at this time that monotheism built around Yahweh finally becomes firmly established as the official version of Judaism. So by the time of the Septuagint two centuries later, “Asherim” is systematically translated into Greek as “groves.” For official Judaism and its male-based monotheism, it was important to redact references to El/Yahweh’s consort Asherah, the Queen of Heaven.

But while the female principle of Asherah might have been expunged from the Judaism of the Hellenized diaspora and from “high temple” Judaism in Jerusalem itself, popular religion in the countryside of Biblical Palestine was another thing entirely. As heiress to Asherah, the Shekinah emerged in the religious practices of the Aramaic-speaking Jews of the region – the world of John the Baptist and Jesus of Nazareth. The Shekinah first figures in the Targums, Aramaic writings from Biblical Palestine, and she appears in the Talmud as well as in the Kabbalah. As Yahweh became more distant from the material world, more transcendent, it is this Shekinah who assured the link between Yahweh and that same material world and thereby provided the link between the Jewish world in the Palestine of the time of Christ and the Christian Holy Spirit.

Then too, following the suppression of Asherah in official Judaism, from Hellenic and Mediterranean sources there came the feminine principle Wisdom/Sophia, which infiltrated Proverbs and Isaiah and which is the centerpiece of the Wisdom of Solomon (aka the Book of Wisdom), written in Greek in the 1st century BC. Early Christians confirmed the link between Wisdom/Sophia and the Holy Spirit in writings referring to the Father and Son and Holy Spirit, in identifying the Seven Pillars of Wisdom with the Seven Gifts of the Holy Spirit, etc.

A third influence on the Christian Holy Spirit would most likely have come from the Essenes who insisted on the role of the Holy Spirit as dweller in the hearts of men and women; this aspect of the Holy Spirit is shared by the Shekinah whose name literally means “indweller”.

However, while the term Shekinah and the term Holy Spirit became interchangeable in the Aramaic and Hebrew Talmudic writings, the Shekinah as such did not even make the transition from the folk Judaism of the Holy Land to the Greek-speaking Gentile world; only the form “Holy Spirit” did.

The Gospels and the Epistles were all written in Greek. Their mission facilitated by the Pax Romana, the Greek-speaking teams under Roman citizen Paul of Tarsus “hijacked” Christianity and delivered it to activist converts of the Greek-speaking world at the end of the Hellenistic Era. After the Crucifixion and the Resurrection, the Christians were a small group in Jerusalem clustered around St James – the apostle referred to as “the brother of Jesus” by Protestant scholars and as “James the Lesser” by Catholic scholars. Things moved quickly. In the period from the Crucifixion to the completion of the Epistles and Gospels, Christianity was taken from the Aramaic-speaking Jewish population of Judea where it had begun and turned over to the Greek-speaking population of the Mediterranean.

With the First Jewish War (66-73 AD) and the destruction of the Temple (70 AD), the early Gentile Christians of the Roman Empire would have had every political reason to separate their young movement from its Jewish roots. There was also the theological motivation of securing control over the interpretation of the Hebrew scriptures and prophecies.

In fact, in making the break from the Jewish world, the first thing the new Christians did was to eliminate the Semitic practices of dietary laws and male circumcision. In contrast, when the 3rd Abrahamic religion, Islam, rose among a Semitic people some 500 years later, both practices continued to be enforced.

As related in the post “Joshua and Jesus” on this blog site, the Aramaic/Hebrew name of Jesus is Yeshua, which is transliterated directly into English as Joshua but which becomes Jesus after being passed through the Greek language filter. Indeed, had the Gospels been written in Aramaic, the language of the Targums and the language of Yeshua and his disciples, we would have a much better sense today of what the Christian Savior actually said, taught and did – and why. In particular, the conflict between the Jews of the countryside and the Pharisees of the Temple in Jerusalem would have been better documented. At the nativity, it would have been “Messiah Adonai” in the language the shepherds spoke rather than the “Christ the Lord” of the Greek version, the Aramaic being much better at capturing the spirit of the Hebrew scriptures. As it is, one of the rare moments when Jesus’ own Aramaic is preserved comes as He is dying on the cross and cries out:

    Eli, Eli, lama sabachthani? that is to say, My God, my God, why hast thou forsaken me?

In an Aramaic New Testament, the feminine noun “Shekinah” would likely have been used instead of “Holy Spirit”; in contrast the noun “spirit” is neuter in Greek and masculine in Latin. Severed, however, from its roots, the theology of the Holy Spirit took on a life of its own in the hands of rationalist intellectuals of the Graeco-Roman world. Affaire à suivre.

The Third Person IV: El and Yahweh

Quietly, in post-Biblical Judaism, there arises an actor, God’s Shekinah, who substitutes for God in interactions with the material world. The term Shekinah does not occur in the Hebrew Bible. It did not originate among the Hellenized Jews of Alexandria or Antioch. It did not originate among the Pharisees at the Temple in Jerusalem. It did not originate in Talmudic writings. It did not come from the scrolls of the Essenes. Rather, it came from the Aramaic-speaking land of Biblical Palestine in the years leading up to the time of Christ; it first appears in the Targums, an Aramaic oral and written literature consisting of commentary on passages from the Bible and of homilies to be delivered in the synagogues in the towns and villages of Biblical Palestine; this is the world of John the Baptist and of Jesus of Nazareth, the two cousins who both met a politically charged death. This Shekinah of the folk Judaism of Palestine is a precursor of the Holy Spirit, the manifestation of God in the physical world as a presence or actor. The God of Jewish monotheism is a male figure; however, the Shekinah is female (and in the Kabbalah actually becomes a full-fledged goddess in her own right). So how did this female principle enter Judaism in the post-Biblical period, in the years before the advent of Christianity, at a time when a monotheism built around an aloof male deity was at last firmly established in Judaism? Mystère.

To get to the bottom of this, we have to put the history of Judaism in a larger context. First, in the Middle East and the Mediterranean area, the local deities were generally headed up by a husband-wife pair: Zeus and Hera among the Greeks, Jupiter and Juno among the Romans, Anu and Ki among the Sumerians, Osiris and Isis among the Egyptians, El and Asherah among the Canaanites.

The Hebrew Bible assigns a Sumerian origin to Abraham, who hails from the legendary city Ur of the Chaldees. Then there is the enslavement of the Israelites in Egypt and their escape from the Pharaoh, followed by the (bloody) conquest of the Promised Land of Canaan. However, from the historical and archeological evidence, it is best to consider the Israelites simply as part of the larger Canaanite world. For a lecture by Professor William Dever on supporting archaeological research, click HERE; for historical scholarship by Professor Richard Elliott Friedman, see The Exodus (Harper Collins, 2017). Admittedly, such revisionism makes for a less exciting story than the Biblical tale with its plagues and wars, but it does place the Israelites among the creators of the alphabet, one of the most powerful intellectual achievements of civilization.

As the Israelites differentiated themselves from other dwellers in the Land of Canaan, they replaced polytheism with monotheism. But ridding themselves of the principal god El would prove complicated. First, this name for God is used some 2500 times in the Hebrew Bible in its grammatically plural form Elohim; when Elohim is the subject of a sentence, the verb, however, is third-person singular – this is still true in Hebrew in Israel today. In a similar way, the god Baal is referred to as Baalim throughout the Hebrew Bible.

It is also noteworthy that the Arabic name Allah is cognate with “El” and means “The [single] God.” In fact, the name Israel itself is usually construed to mean “may El rule”; however, others claim, with scholarship at hand, that the origin is the triad of gods Is Ra El, the first two being Egyptian deities. For its part, Beth-El means “House of God”; some scholars trace Babel back to the Akkadian for “Gate of God.”

“El” also survives in the names of angels such as Michael, Raphael and Gabriel: respectively, “who is like God,” “healer from God,” and “God is my strength.” Interestingly, despite their frequent appearances there, angels are not given names in the Hebrew Bible until books written in the 2nd century B.C: in the Book of Daniel, Michael and Gabriel appear; Raphael appears in the Book of Tobit – despite its charm, this work is deemed apocryphal by Jewish and Protestant scholars. In the Quran, Gabriel and Michael are both mentioned by name. In the Gospels, the angel Gabriel plays an important role while Michael is featured in the Book of Revelation in the story of the fallen angels and in the Epistle of St. Jude where he is promoted to Archangel.

Racing forward to modern times, the angelic el-based naming pattern continues with the names of Superman’s father Jor-El and Superman’s own name Kal-El. That the authors of Superman, Joe Shuster and Jerry Siegel were both children of Jewish immigrants may well have had something to do with this.

But in addition to Elohim, there is another denotation in the Hebrew Bible for the God of the Israelites – the four Hebrew consonants יהוה (YHWH in the Latin alphabet); in the Christian world this name is rendered simply as God or as Yahweh or less frequently as Jehovah. This designation for God is used over 6500 times in the Hebrew Bible!

One reason for the multiple ways of designating God is that the Hebrew Bible itself is not the work of a single author. In fact, scholars discern four main authorships for the Torah, the first five books, that are called the Elohist, the Yahwist, the Deuteronomist and the Priestly Source. The texts of these different authors (or groups of authors) were culled and merged over time by the compilers of the texts we have today. As the nomenclature suggests, the Elohist source generally uses Elohim to refer to God while the Yahwist source generally uses YHWH.

This kind of textual analysis serves to explain some discrepancies and redundancies in the Hebrew Bible. For example, there are two versions of the creation of humankind in Genesis. The first one appears at the beginning (Genesis 1:26-28) and concludes the sixth day of Creation; it is attributed to the Priestly Source which, like the Elohist, refers to God as Elohim – there is no reference to Adam or Eve, and women and men are created together; the New King James Bible has

    Then God said, “Let Us make man in Our image, according to Our likeness; let them have dominion over the fish of the sea, over the birds of the air, and over the cattle, over all the earth and over every creeping thing that creeps on the earth.” So God created man in His own image; in the image of God He created him; male and female He created them. Then God blessed them, and God said to them, “Be fruitful and multiply; fill the earth and subdue it; have dominion over the fish of the sea, over the birds of the air, and over every living thing that moves on the earth.”

While these poetic verses are well known, there is that more dramatic version: Adam and Eve, the tree of knowledge of good and evil, the serpent, the apple, original sin, naked bodies, fig leaves, etc. This vivid account is in the following chapter, Genesis 2:4-25, and this version of events is attributed to the Yahwist author.

For Michelangelo’s treatment of the creation, click HERE

Another example of a twice told tale in Genesis is the account of the deluge and Noah’s Ark – but in this case doing things in pairs is most appropriate!

There are various theories as to the origin of the figure of Yahweh. One school of thought holds that a god with the name Yahweh was a Canaanite deity of lower rank than El who became the special god of the Israelites and displaced El.

However, in the Bible itself, in Exodus, the Yahwist writer traces the name to the time that God in the form of a burning bush is speaking to Moses; here is the King James text of Exodus 3:14

    And God said unto Moses, I AM THAT I AM: and he said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you.

The thinking is that the consonants YHWH form a code – a kind of acronym – for the Yahwist’s “I AM”. With this interpretation, these four letters become the tetragrammaton – the “ineffable name of God,” the name that cannot be said. Indeed, when reading the Hebrew text in a religious setting, there where the tetragrammaton YHWH is written, one verbalizes it as “Adonai” (“Lord”), thus never pronouncing the name of God itself. With this, the God of the Israelites emerges as a transcendent being completely different from the earthy pagan deities – a momentous step theologically.

But in the original Canaanite Pantheon, El was accompanied by his consort Asherah, the powerful Mother Goddess and Queen of Heaven. For images of Asherah, click HERE.

Moreover, Asherah is also described as the consort of Yahweh on pottery found in Biblical Palestine. Indeed, inscriptions from several places including Kuntillet ‘Ajrud in the northeast Sinai have the phrase “YHWH and his Asherah.” For an excellent presentation of the archeological record of Asherah worship by professor and author William Dever, click HERE.

It would seem that Asherah is a natural candidate to provide the missing link to the Shekinah which would solve our current mystery. But Judaism is presented to us as a totally androcentric monotheism with no place for Asherah or any other female component. Indeed, Asherah will not be found in the Septuagint, the masterful Greek translation of the Hebrew Bible done in Alexandria in the 3rd and 2nd centuries BC; Asherah will not be found in the Latin Vulgate, the magisterial translation of the Bible into Latin done in the 4th Century by St. Jerome; and Asherah will not be found in the scholarly King James Bible. Something’s afoot! A case of cherchez la femme! Have references to her simply been redacted from the Hebrew Bible and its translations? Mystère.

The Third Person III: The Shekinah

To help track the Jewish origins of the Christian Holy Spirit, there is a rich rabbinical literature to consider, a literature which emerged as the period of Biblical writing came to an end in the centuries just before the advent of Christianity. Already in the pre-Christian era, the rabbis approached the Tanakh (the Hebrew Bible) with a method called Midrash for developing interpretations, commentaries and homilies. Midrashic practice and writings had a real influence on the New Testament. St. Paul himself studied with the famous rabbi Gamaliel (Acts 22: 3) who in turn was the grandson of the great Talmudic scholar Hillel the Elder, one of the greatest figures in Jewish history; his simple and elegant formulation of the Golden Rule is often cited today:

    “What is hateful to you, do not do to your fellow: this is the whole Torah; the rest is the explanation; go and learn”

Hillel the Elder is also famous for his laconic leading question

    “If not now, when?”

For an image of Hillel, click HERE

In the writings of St Paul, this passage from the Epistle to the Galatians (5:22-23) is considered an example of Midrash:

    But the fruit of the Spirit is love, joy, peace, longsuffering, gentleness, goodness, faith,

    Meekness, temperance: against such there is no law.

The virtues on this list are called the Fruits of the Holy Spirit and the early Christians understood this Midrash to be a reference to the Holy Spirit. Much like the way the Seven Pillars of Wisdom in the Book of Isaiah provide an answer to Question 177 in the Baltimore Catechism, so these verses of Paul provide the answer to Question 719:

Q. Which are the twelve fruits of the Holy Ghost?

A. The twelve fruits of the Holy Ghost are Charity, Joy, Peace, Patience, Benignity, Goodness, Long-suffering, Mildness, Faith, Modesty, Continency, and Chastity.

The numerically alert will have noted that St Paul only lists nine such virtues. But, it seems that St. Jerome added those three extra fruits to the list when translating Paul’s epistle from Greek into Latin and so the list is longer in the Catholic Bible’s wording of the Epistle!

It is also possible that Jerome was working with a text that already had the interpolations. The number 12 does occur in key places in the scriptures – the 12 sons of Jacob whence the 12 tribes of Israel, the 12 apostles, and so on. Excessive numerological zeal on the part of early Christians plus a certain prudishness could well have led to the insertion of modesty, continence and chastity into the list. In the Protestant tradition and in the Greek Orthodox Church, the number still stands at 9, but that is still numerically pious since there are 9 orders of Angels as well.

For some centuries before the time of Christ, the Jewish population of Biblical Palestine was not Hebrew speaking. Instead, Aramaic, another Semitic language spoken all over the Middle East, had become the native language of the people; Hebrew was preserved, of course, by the Scribes, Pharisees, Essenes, Sadducees, rabbis and others directly involved with the Hebrew texts and with the oral tradition of Judaism.

The Targums were paraphrases of passages from the Tanakh together with comments or homilies that were recited in synagogues in Aramaic, beginning some time before the Christian era. The leader who presented the Targum, the meturgeman, would paraphrase a text from the Tanakh and add commentary, all in Aramaic, to make it more comprehensible and more relevant to those assembled in the synagogue.

Originally, the Targums were strictly oral and writing them down was prohibited. However, Targumatic texts appear well before the time that Paul was sending letters to converts around the Mediterranean. In fact, a Targumatic text from the first century BC, known as the Targum of Job, was discovered at Qumran, the site of the Dead Sea Scrolls.

By the time of the Targums, Judaism had gone through centuries of development and change – and it was still evolving. In fact, the orthodoxy of the post-biblical period demanded fresh readings even of the scriptures themselves. Indeed, the text of the Tanakh still raises theological problems – in particular in those places where the text anthropomorphizes God (Yahweh); for example, there is God’s promise to Moses in Exodus 33:14

    And he [The Lord] said, My presence shall go with thee, and I will give thee rest.

In the Targums and in the Talmudic literature, the writers are careful to avoid language which plunks Yahweh down into the physical world. The Targums tackle this problem head on. They introduce a new force in Jewish religious writing, the Shekinah. This Hebrew noun is derived from the Hebrew verb shakan which means “to dwell” and the noun form Shekinah is translated as “The one who dwells” or more insistently as “The one who indwells”; it can refer both to the way God’s spirit can inhabit a believer and to the way God can occupy a physical location.

The verb “indwell” is also often used in English language discussions and writing about the Judaic Shekinah and about the Christian Holy Spirit. This word goes back to Middle English and it was rescued from obsolescence by John Wycliffe, the 14th century reformer who was the first to translate the Bible into English from the Latin Vulgate of St. Jerome. It is popular today in prayers and in Calls to Worship in Protestant churches in the US; e.g.

    “May your Holy Spirit surround and indwell this congregation now and forevermore.”

In the Targums, the term Shekinah is systematically applied as a substitute for names of God to indicate that the reference is not to God himself; rather the reference is shifted to this agency, aspect, emanation, viz. the Shekinah. Put simply, when the original Hebrew text says “God did this”, the Targum will say something like “The Shekinah did this” or “The Lord’s Shekinah did this.”

In his study Targum and Testament, analyzing the example of Exodus 33:14 above, Martin McNamara translates the Neofiti Targum’s version of the text this way:

    “The glory of my Shekinah will accompany you and will prepare a resting place for you.”

Here is an example given in the Jewish Encyclopedia (click HERE) involving Noah’s son Japheth. In Genesis 9:27, we have

    May God extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth.

In the Onkelos Targum, the Hebrew term for God “Elohim” in Genesis is replaced by “the Lord’s Shekinah” and the paraphrase of the meturgeman becomes (roughly)

    “May the Lord’s Shekinah extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth.”

For a 16th century French woodcut that depicts this son of Noah, click HERE.

Another function of the Shekinah is to represent the presence of God in holy places. In Jewish tradition, the Spirit of God occupied a special location in the First Temple. This traces back to Exodus 25:8 where it is written

    And let them make Me a sanctuary; that I may dwell among them.

Again following the Jewish Encyclopedia’s analysis, the Onkelos Targum paraphrases this declaration of Yahweh’s as

    “And they shall make before Me a sanctuary and I shall cause My Shekinah to dwell among them.”

This sanctuary will become the Temple built by Solomon in Jerusalem. Indeed, in many instances, the Temple is called the “House of the Shekinah” in the Targums.

Jesus was often addressed as “Rabbi” in the New Testament; in John 3:2, the Pharisee Nicodemus says “Rabbi, we know that you are a teacher who has come from God”. Today, the influence of Midrash and the Targums on Jesus’ teachings and on the language of the four Gospels is a rich area of research. Given that the historical Jesus was an Aramaic speaker, the Targums clearly would be a natural source for homilies and a natural methodology for Jesus to employ.

Let us recapitulate and try to connect some threads in the Jewish literature that lead up to the Christian Holy Spirit: The Essene view of the Holy Spirit as in-dwelling is very consistent with the Shekinah and with the Christian Holy Spirit. The Essene personification of Wisdom as a precursor to the Holy Spirit is as well. The Targumatic and Talmudic view of the indwelling Shekinah as the manifestation of God in the physical world has much in common with the Christian view of the Holy Spirit; in point of fact, this is the primary role of the Holy Spirit as taught in Sunday Schools and in Parochial Schools.

One more thing: the term “Holy Spirit” itself only appears three times in the Tanakh and there it is used much in the way Shekinah is employed in the rabbinical sources. The term is used often, however, by the Essenes in the Dead Sea Scrolls. Then it appears in the Talmudic literature where it is associated with prophecy as in Christianity: in Peter’s 2nd Epistle (1:21) we have

    For the prophecy came not in old time by the will of man: but holy men of God spake as they were moved by the Holy Ghost

Moreover, in Talmudic writings, the terms Holy Spirit and Shekinah eventually became interchangeable – for scholarship, consult Raphael Patai, The Hebrew Goddess.

What is also interesting is that in Hebrew the words for Wisdom (chokmâh), Spirit (ruach) and Shekinah are all grammatically feminine. However, grammatical gender is not the same as biological gender – a flagrant example is that in German the word for “girl” is neuter (das Mädchen). So, we could not infer from grammatical gender alone that the Holy Spirit was a female force.

However, in the Wisdom Literature, Wisdom/Sophia is a female figure. Following Patai again, one can add to that an assertion by Philo of Alexandria, the Hellenized Jewish philosopher who lived in the first half of the 1st century: in his work On the Cherubim, Philo flatly states that God is the husband of Wisdom. Moreover, in the Talmud and the Kabbalah, the Shekinah has a female identity which is developed to the point that in the late medieval Kabbalah, the Shekinah becomes a full-fledged female deity.

So there are two female threads, Wisdom/Sophia and the Shekinah, coming from the pre-Christian Jewish literature that are both strongly identified with the Holy Spirit of nascent Christianity, identifications which persisted as Christianity spread throughout the Roman Empire. Wisdom/Sophia of the Wisdom Literature looks to be an importation from the Greek culture which dominated the Eastern Mediterranean in the Hellenistic Age, an importation beginning at the end of the biblical era in Judaism. The Shekinah, though, only appears in the Talmudic and Targumatic literatures. So the next question is: what is the origin of the concept of the Shekinah? And then, are the Shekinah and Wisdom/Sophia interconnected? Mystères. More to come.

Esther, Trump and Blasphemy

Israeli Prime Minister Benjamin Netanyahu recently (March 21, 2019) asserted that Donald Trump’s support for Israeli annexation of the Golan Heights has anti-Iranian biblical antecedents. The annexation would strengthen Israel’s position vis-à-vis pro-Iranian forces in Syria. Calling it a “Purim Miracle,” Netanyahu cited the Book of Esther where purportedly Jews killed Persians in what is today Iran rather than the other way around as the Persian vizier Haman had planned. This came to pass thanks to Esther’s finding favor with the Persian King named Ahasuerus, the ruler considered today by scholars to be Xerxes, the grandson of King Cyrus the Great; in need of a new queen, Xerxes selected the Jewish orphan Esther as the most beautiful of all the young women in his empire. High-ranking government officials like Mike Pence and Mike Pompeo rally to the idea that Trump was created by God to save the Jewish people. Indeed, BaptistNews.com reports that, when asked about it in an interview with the Christian Broadcasting Network, Secretary of State Pompeo said it is possible that God raised up President Trump, just as He had Esther, to help save the Jewish people from the menace of Iran, as Persia is known today. Pompeo added “I am confident that the Lord is at work here.”

However, the account in the Book of Esther is contested by scholars since there are no historical records to back up the biblical story; and the Cinderella elements in the narrative would require outside verification. Moreover, the Book of Esther itself is not considered canonical by many (Martin Luther among them) and parts of it are excluded from the Protestant Bible. These are details though – the key point is that the Feast of Purim is an important event, celebrating as it does the special relationship between God and His Chosen People; bringing Donald Trump into this is simply blasphemy.

For its part, the reign of Xerxes is well documented – he was the Persian invader of Greece whose forces prevailed at Thermopylae but were then defeated at the Battle of Salamis. According to Herodotus, Xerxes watched battles perched on a great throne and, at Thermopylae he “thrice leaped from the throne on which he sat in terror for his army.”

Moreover, there is well-documented history where the Persians came to the aid of the Jews. Some background: the First Temple in Jerusalem was built during the reign of King Solomon (970-931 BC); the Temple was destroyed by the Babylonians in 586 BC and a large portion of the population was exiled to Babylon. With the conquest of Babylon in 539 BC by the Persian King Cyrus the Great, the Babylonian Captivity came to an end, Jews returned to Jerusalem and began the construction of the Second Temple (completed in 515 BC). It was also Cyrus himself who urged the rebuilding of the Temple and, for his efforts on behalf of the Jews, Cyrus is the only non-Jew considered a Messiah in the Hebrew Bible (Isaiah 45:1). So here we have a reason for Israelis and Iranians to celebrate history they share.

The Third Person II: The Wisdom Literature

The rabbinical term for the Hebrew Bible is the Tanakh; the term was introduced in the Middle Ages and is an acronym drawn from the Hebrew names for the three sections of the canonical Jewish scriptures: Torah (Teachings), Neviim (Prophets) and Ketuvim (Writings). The standard Hebrew text of the Tanakh was compiled in the Middle East in the Middle Ages and is known as the Masoretic Text (from the Hebrew word for tradition).

Since the Holy Spirit does not appear in the Tanakh as a standalone actor and is only alluded to there three times, the question arises whether the Holy Spirit plays a role in other pre-Christian Jewish sources. Mystère.

In 1947, Bedouin lads discovered the first of the Dead Sea Scrolls in a cave (which one of them had fallen into) at the site of Qumran on the West Bank of the River Jordan near the Dead Sea. These texts were compiled in the centuries just before the Christian era by a monastic Jewish group called the Essenes and they include copies of parts of the Tanakh. On the other hand, the texts of the Dead Sea Scrolls also include non-biblical documents detailing the way of life of the Essenes and their special beliefs. In the scrolls, there are multiple mentions of the Holy Spirit and the role of the Holy Spirit there has much in common with the later Christian concept: the Essenes believed themselves to be holy because the Holy Spirit dwelt within each of them; indeed, from a scroll The Community Rule, we learn that each member of the group had first to be made pure by the Holy Spirit. Another interesting intersection with early Christianity is that the Essenes celebrated the annual renewal of their covenant with God at the Jewish harvest feast of Shavuot which also commemorates the day when God gave the Torah to the Israelites establishing the Mosaic covenant. The holiday takes place fifty days after Passover; in the Greek of the New Testament, this feast is called Pentecost and it is at that celebration that the Holy Spirit establishes a covenant with the Apostles.

Wisdom, aka Holy Wisdom, emerges as a concept and guiding principle in the late Biblical period. Wisdom is identified with the Christian Holy Spirit, for example, through the Seven Pillars of Wisdom; thus, after relocating to North America and leaping ahead many centuries to 1885 and the Baltimore Catechism, it is Isaiah 11:2 that provides the answer to Question 177:

Q. Which are the gifts of the Holy Ghost?
A. The gifts of the Holy Ghost are Wisdom, Understanding, Counsel, Fortitude, Knowledge, Piety and Fear of the Lord.

Isaiah 11:3 and Proverbs 9:10 make it clear that, among these Seven Pillars of Wisdom, Fear of the Lord is the most fundamental – for once, we can’t blame this sort of thing on Catholics and Calvinists!

Wisdom as a personification plays an important role in the Wisdom Literature, a role that also links pre-Christian Jewish writings to the Christian Holy Spirit. The Book of Proverbs, itself in the Tanakh, is part of this literature. But there is something surprising going on: in Hebrew grammar, the gender of the word for Wisdom, Chokmâh, is feminine; in the Wisdom Literature, Wisdom is feminine not only grammatically but as a personified female figure as well. Indeed, in Chapter 8 of Proverbs, Wisdom puts “forth her voice”

1 Doth not wisdom cry? And understanding put forth her voice?
2 She standeth in the top of high places, by the way in the places of the paths.
3 She crieth at the gates, at the entry of the city, at the coming in at the doors.
4 Unto you, O men, I call; and my voice is to the sons of man.

and declaims that she was there before the Creation

22 The LORD possessed me in the beginning of his way, before his works of old.
23 I was set up from everlasting, from the beginning, or ever the earth was.
24 When there were no depths, I was brought forth; when there were no fountains abounding with water.
25 Before the mountains were settled, before the hills was I brought forth.

So the author of this part of the Book of Proverbs (4th century BC) clearly sees Wisdom as a kind of goddess.

The Tanakh, which corresponds basically to the Protestant Old Testament, excludes Wisdom books that are included in the Catholic Bible such as the Book of Sirach (aka Book of Ecclesiasticus) and the Wisdom of Solomon (aka Book of Wisdom). These texts, however, develop this “goddess” theme further.

The Book of Sirach, which dates from the late 2nd century BC, has these verses in the very first chapter where this preternatural female note is struck cleanly:

5 To whom has wisdom’s root been revealed? Who knows her subtleties?
6 There is but one, wise and truly awe-inspiring, seated upon his throne:
7 It is the LORD; he created her, has seen her and taken note of her.

The meme of Wisdom as a feminine goddess-like being also occurs in Psalm 155, one of the Five Apocryphal Psalms of David, texts which date from the pre-Christian era:

5 For it is to make known the glory of Yahweh that wisdom has been given;
6 and it is for recounting his many deeds, that she has been revealed to humans:

and

12 From the gates of the righteous her voice is heard, and her song from the assembly of the pious.
13 When they eat until they are full, she is mentioned, and when they drink in community

What is more, this theme also appears in the Essene Wisdom texts of the Dead Sea Scrolls, e.g. the Great Psalms Scroll and Scroll 4Q525. In fact, the latter begins with a poem to Wisdom in the idiom of the Beatitudes

    “Blessed are those who hold to Wisdom’s precepts
and do not hold to the ways of iniquity….
Blessed are those who rejoice in her…
Blessed are those who seek her …. “

Then too the word for “Wisdom” in the Greek of the Wisdom of Solomon is “Sophia”, the name for a mythological female figure and a central female concept in Stoicism and in Greek philosophy more generally. Indeed, “philosophy” itself means “love of Sophia.” For a 2nd century statue of Sophia, click HERE. For a painting by Veronese, click HERE.

Most dramatically, Chapter 8 of the Wisdom of Solomon begins

1 Wisdom reacheth from one end to another mightily: and sweetly doth she order all things.
2 I loved her, and sought her out from my youth, I desired to make her my spouse, and I was a lover of her beauty.
3 In that she is conversant with God, she magnifieth her nobility: yea, the Lord of all things himself loved her.
4 For she is privy to the mysteries of the knowledge of God, and a lover of his works.

So these sources all imply that the Holy Spirit derives from a female precedent.

What is more, in Christian Gnosticism, Sophia becomes both the Bride of Christ and the Holy Spirit of the Holy Trinity. This movement denied the virgin birth on the one hand and taught that the Holy Spirit was female on the other. The Gnostic Gospel of St. Philip is one of the texts found in 1945 at Nag Hammadi in Egypt, a text whose composition is dated to around the 2nd or 3rd century; in this Gospel the statement of the Angel of the Lord to Joseph in Matthew 1:20

    “the child conceived in her is from the Holy Spirit”

is turned on its head by the argument that this is impossible because the Holy Spirit is female:

    “Some said Mary became pregnant by the holy spirit. They are wrong and do not know what they are saying. When did a woman ever get pregnant by a woman?”

Not surprisingly, Gnosticism was branded as heretical by stalwart defenders of orthodoxy such as Tertullian and Irenaeus. Tertullian, “the Father of Western Theology,” was the first Christian author known to use the Latin term “Trinitas” for the Triune Christian God – naturally, the thought of a female Third Person was anathema to him, leading as it would to a blasphemous ménage à trois.

In the Hebrew language literature, Wisdom/Sophia as a personification enters the Tanakh relatively late in the game in the Book of Proverbs and then somewhat later in the Apocrypha and in the Dead Sea Scrolls. She appears as Sophia in the Greek language Wisdom of Solomon of the late 1st century BC; so Wisdom/Sophia appears to be an influence from the Hellenistic world and its Greek language, religion and philosophy. But why does this begin to happen at the end of the Biblical period? Is Wisdom/Sophia filling a vacuum that was somehow created in Jewish religious life? But the literature search does not end here. In the pre-Christian period there also emerged rabbinical practices such as Midrash and writings such as the Jerusalem Talmud and the Targums. Are further threads leading to the Holy Spirit of Christianity to be found there? Further examples of links to a female deity? Links to Wisdom/Sophia herself? Mystères. More to come.

The Third Person I : The Holy Spirit

To Christians, the Holy Spirit (once known as the Holy Ghost in the English speaking world) is the Third Person of the Holy Trinity, along with God the Father and God the Son.
Indeed for Protestants and Catholics, the Nicene Creed reads
    “We believe in the Holy Spirit, the Lord, the giver of life,
who proceeds from the Father and the Son,
who with the Father and the Son is worshiped and glorified,
who has spoken through the prophets.”
Or more simply in the Apostles’ Creed
    “I believe in the Holy Spirit.”
The phrase “and the Son” does not appear in all versions of the Nicene Creed and it was a key factor in the Great Schism of 1054 A.D. that separated the Greek Orthodox Church from the Roman Catholic Church. A mere twenty years later, in another break with the Orthodox Church, Pope Gregory VII instituted the requirement of celibacy for Catholic priests. It makes one think that, had the schism not taken place, the Catholic Church would not have made that move from an all-male priesthood to the celibate all-male priesthood which is plaguing it today.
The earliest Christian texts are the Epistles of St. Paul: his first Epistle dates from AD 50, while the earliest gospel (that of St. Mark) dates from AD 66-70. However, since the epistles concern events that come after those described in the gospels, they appear after the gospels in editions of the New Testament.
Paul wrote that first epistle, known as 1 Thessalonians, in Greek to converted Jews of the diaspora and the other new Christians in the Macedonian city of Thessaloniki (Salonica) on the Aegean Sea. Boldly, at the very beginning of the letter, Paul lays out the doctrine of the Holy Trinity: in the New Revised Standard Version, we have
    1 To the church of the Thessalonians in God the Father and the Lord Jesus Christ: Grace to you and peace.
    2 We always give thanks to God for all of you and mention you in our prayers, constantly
    3 remembering before our God and Father your work of faith and labor of love and steadfastness of hope in our Lord Jesus Christ.
    4 For we know, brothers and sisters beloved by God, that he has chosen you,
    5 because our message of the gospel came to you not in word only, but also in power and in the Holy Spirit and with full conviction; just as you know what kind of persons we proved to be among you for your sake.
    6 And you became imitators of us and of the Lord, for in spite of persecution you received the word with joy inspired by the Holy Spirit.
Judaism is famously monotheistic and we understand that when Paul refers to “God” in his epistle, he is referring to Yahweh, the God of Judaism – “God the Father” to Christians. Likewise, the reference to “Jesus Christ” is clear – Jesus was a historical figure. But Paul also expected people in these congregations to understand his reference to the Holy Spirit and to the power associated with the Holy Spirit.
So who was in these congregations that Paul was writing to, people who would understand what he was trying to say? Mystère.
By the time of Christ, Jews had long established enclaves in many cities around the Mediterranean including, famously, Alexandria, Corinth, Athens, Tarsus, Antioch and Rome itself. Under Julius Caesar, Judaism was declared to be a recognized religion, religio licita, which formalized its status in the Empire. Many Jews like Paul himself were Roman citizens. In Augustus’ time, the Jews of Rome even made it into the writings of Horace, one of the leading lights of the Golden Age of Latin Literature: for one thing, he chides the Jews of Rome for being insistent in their attempts at converting pagans – something that sounds unusual today, but the case has been made that proselytism is a natural characteristic of monotheism, which makes sense when you think about it.
Estimates for the Jewish share of the population of the Roman Empire at the time of Christ range from 5% to 10% – which is most impressive. (For a cliometric analysis of this diaspora and of early Christianity, see Cities of God by Rodney Stark.) These Hellenized, Greek-speaking Jews used Hebrew for religious services and readings. Their presence across the Roman Empire was to prove critical to the spread of Christianity during the Pax Romana, a spread so rapid that already in 64 A.D. Nero blamed the Christians for the fire that destroyed much of Rome, the fire that, according to rumor, he himself had ordered.
During the Hellenistic Period, the three centuries preceding the Christian era, Alexandria, in particular, became a great center of Jewish culture and learning – there the first five books of the Hebrew Bible were translated into Greek (independently and identically by 70 different scholars according to the Babylonian Talmud) yielding the Septuagint and creating en passant the Greek neologism diaspora as the term for the dispersion of the Jewish people. Throughout the Mediterranean world, the Jewish people’s place of worship became the synagogue (a Greek word meaning assembly).
In fact, Greek became the lingua franca of the Roman Empire itself. St. Paul even wrote his Epistle to the Romans in Greek. The emperor Marcus Aurelius wrote the twelve books of his Meditations in Greek; Julius Caesar and Mark Antony wooed Cleopatra in Greek. In Shakespeare’s Julius Caesar, the Senator and conspirator Casca reports that Cicero addressed the crowd in Greek, adding that he himself did not understand the great orator because “It was Greek to me” – here Shakespeare is putting us on because Plutarch reports that Casca did indeed speak Greek. As for Cicero, for once neither defending a political thug (e.g. Milo) nor attacking one (e.g. Catiline), he delivered his great oration on behalf of a liberal education, the Pro Archia, to gain Roman citizenship for his personal tutor, Archias, a Greek from Antioch.
The spread of Christianity in the Greek speaking world was spearheaded by St. Paul as attested to by his Epistles and by the Acts of the Apostles. Indeed, Paul’s strategy in a new city was first to preach in synagogues. Although St Paul referred to himself as the Apostle to the Gentiles, he could better be called the Apostle to the Urban Hellenized Jews, Jews like himself. Where he did prove himself an apostle to the Gentiles was when Paul, in opposition to some of the original apostles, declared that Christians did not have to follow Jewish dietary laws and need not observe the Semitic tribal practice of male circumcision; Islam, which also originated in the Semitic world, enforces both dietary laws and male circumcision.
The theology of the Holy Spirit is called pneumatology, from pneuma, the Greek word for spirit (or breath or wind); pneuma is the Septuagint’s translation of the Hebrew ruach. Scholars consider pneumatology central to Paul’s thinking.
In fact, Paul refers to the Holy Spirit time and again in his writings and in Paul’s Epistles the Holy Spirit is an independent force; the same applies to the Gospels: the Holy Spirit is a participant at the Annunciation, at the baptism of Christ, at the Temptation of Christ. In the Acts of the Apostles, it is the Holy Spirit who descends on the Apostles in the form of tongues of fire when they are gathered in Jerusalem for the Jewish harvest feast of Shavuot which takes places fifty days after Passover; in the Greek of the New Testament, this feast is called Pentecost (meaning “fifty”) and it is at this celebration that the Holy Spirit gives the Apostles the Gift of Tongues (meaning “languages”) and launches them on their careers as fishers of men.
For a Renaissance painting of the baptism of Jesus with the Holy Spirit present in the form of a dove, a work of Andrea del Verrocchio and his student Leonardo da Vinci, click HERE. According to the father of art history Giorgio Vasari, after this composition Verrocchio resolved never to paint again for his pupil had far surpassed him! While we’re dropping names, click HERE for Caravaggio’s depiction of Paul fallen from his horse after Jesus revealed Himself to him on the Road to Damascus.
Although important in the New Testament, reference to the “Holy Spirit” only occurs three times in the Old Testament and it is never used as a standalone noun phrase as it is in the New Testament; instead, it is used with possessive pronouns that refer to Yahweh such as “His Holy Spirit” (Isaiah 63:10,11) and “Thy Holy Spirit” (Psalms 51:11); for example, in the King James Bible, this last verse reads
    Cast me not away from thy presence; and take not thy holy spirit from me.
This is key – from the outset, in Christianity, the Holy Spirit is autonomous, part of the Godhead, not just a messenger of God such as an angel would be. And Paul and the evangelists assume that their readers know what they are writing about; they don’t go into long explanations to explain who the Holy Spirit is or where the Holy Spirit is coming from.
The monotheism of Judaism has a place only for Yahweh, the God of the chosen people. But Christianity and its theology started in the Jewish world of the first century A.D.; so the concept of the Holy Spirit must have its roots in that world even though it is not there in Biblical Judaism. Jewish religious culture was as dynamic as ever in the post-Biblical period leading up to the birth of Christianity and beyond. New texts were written in Aramaic as well as in Greek and in Hebrew, creating a significant body of work.
So the place to start to look for the origin of the Holy Spirit is in the post-Biblical literature of Judaism. More to come.

Liberal Semantics

The word “liberal” originated in Latin, then made its way into French and from there into English. The Oxford English Dictionary gives this as its primary definition:
“Willing to respect or accept behaviour or opinions different from one’s own; open to new ideas.”
However, it also has a political usage as in “the liberal senator from Massachusetts.” This meaning and usage must be relatively new: for one thing, we know that “liberal” was not given a political connotation by Dr. Samuel Johnson in his celebrated dictionary of 1755:
    Liberal, adj. [liberalis, Latin, libėral, French]
1. Not mean; not low in birth; not low in mind.
2. Becoming a gentleman.
3. Munificent; generous; bountiful; not parcimonious.
So when did the good word take on that political connotation? Mystère.
We owe the attribution of a political meaning to the word to the Scottish Enlightenment and two of its leading lights, the historian William Robertson and the political economist Adam Smith. Robertson and Smith were friends and correspondents as well as colleagues at the University of Edinburgh; they used “liberal” to refer to a society with safeguards for private property and an economy based on market capitalism and free-trade. Robertson is given priority today for using it this way in his 1769 book The History of the Reign of the Emperor Charles V. On the other hand, many in the US follow the lead of conservative icon Friedrich Hayek who credited Smith based on the fact that the term appears in The Wealth of Nations (1776); Hayek wrote The Road to Serfdom (1944), a seminal work arguing that economic freedom is a prerequisite for individual liberty.
Today, the related term “classical liberalism” is applied to the philosophy of John Locke (1632-1704) and he is often referred to as the “father of liberalism.” His defense of individual liberty, his opposition to absolute monarchy, his insistence on separation of church and state, and his analysis of the role of “the social contract” provided the U.S. founding fathers with philosophical tools crucial for the Declaration of Independence, the Articles of Confederation and ultimately the Constitution. It is this classical liberalism that also inspired Simon Bolivar, Bernardo O’Higgins and other liberators of Latin America.
In the early 19th century, the Whig and Tory parties were dominant in the English parliament. Something revolutionary happened when the Whigs engineered the passage of the Reform Act of 1832 which was an important step toward making the U.K. a democracy in the modern sense of the term. According to historians, this began the peaceful transfer of power from the landed aristocracy to the emergent bourgeois class of merchants and industrialists. It also coincided with the end of the Romantic Movement, the era of the magical poetry of Keats and Shelley, and led into the Victorian Period and the well intentioned poetry of Arnold and Tennyson.
Since no good deed goes unpunished (especially in politics), passage of the Reform Act of 1832 also led to the demise of the Whig Party: the admission of the propertied middle class into the electorate and into the House of Commons itself split the Whigs and the new Liberal Party emerged. The Liberal Party was a powerful force in English political life into the 20th century. Throughout, the party’s hallmark was its stance on individual liberties, free-markets and free-trade.
Accordingly, in the latter part of the 19th century in Europe and the US, the term “liberalism” came to mean commitment to individual freedoms (in the spirit of Locke) together with support of free-market capitalism mixed in with social Darwinism. Small government became a goal: “That government is best that governs least” to steal a line from Henry David Thoreau.
Resistance to laissez-faire capitalism developed and led to movements like socialism and labor unions. In the US social inequality also fueled populist movements such as that led by William Jennings Bryan, the champion of Free Silver and other causes. Bryan, a brilliant orator, was celebrated for his “Cross of Gold” speech, an attack on the gold standard, in which he intoned
    “you shall not crucify mankind upon a cross of gold.”
He was a national figure for many years and ran for President on the Democratic ticket three times; he earned multiple nicknames such as The Fundamentalist Pope, the Boy Orator of the Platte, The Silver Knight of the West and the Great Commoner.
At the turn of the century in the US, public intellectuals like John Dewey began to criticize the basis of laissez-faire liberalism as too individualistic and too threatening to an egalitarian society. President Theodore Roosevelt joined the fray, led the “progressive” movement, initiated “trust-busting” and began regulatory constraints to rein big business in. The Sixteenth Amendment, which authorized a federal income tax, made it through Congress in 1909 and was ratified by the state legislatures in 1913.
At this time, the meaning of the word “liberal” took on its modern political meaning: “liberal” and “liberalism” came to refer to the non-socialist, non-communist political left – a position that both defends market capitalism and supports infrastructure investment and social programs that benefit large swaths of the population; in Europe the corresponding phenomenon is Social Democracy, though the Social Democrats tend to be more to the left and stronger supporters of the social safety net, not far from the people who call themselves “democratic socialists” in the US today.
On the other hand, the 19th century meaning of “liberalism” has been taken on by the term “neo-liberalism” which is used to designate aggressive free-market capitalism in the age of globalization.
In the first term of Woodrow Wilson’s presidency, Congress passed the Clayton Anti-Trust Act as well as legislation establishing the Federal Reserve System and the progressive income tax. Wilson is thus credited with being the founder of the modern Democratic Party’s liberalism – this despite his anti-immigrant stance, his anti-Catholic stance and his notoriously racist anti-African-American stance.
The great political achievement of the era was the 19th Amendment which established the right of women to vote. The movement had to overcome entrenched resistance, finally securing the support of Woodrow Wilson and getting the necessary votes in Congress in 1919. Perhaps, it is this that has earned Wilson his standing in the ranks of Democratic Party liberals.
Bryan, for his part a strong supporter of Wilson and his liberal agenda in the 1912 election, then served as Wilson’s first Secretary of State, resigning over the handling of the Lusitania sinking. His reputation has suffered over the years because of his humiliating battle with Clarence Darrow in the Scopes “Monkey” Trial of 1925 (Fredric March and Spencer Tracy resp. in “Inherit the Wind”); at the trial, religious fundamentalist Bryan argued against teaching human evolution in public schools. It is likely this has kept him off the list of heroes of liberal politics in the US, especially given that this motion picture, a Stanley Kramer “message film,” was an allegory about the McCarthy era witch-hunts. Speaking of allegories, a good case can be made that the Wizard of Oz is an allegory about the populist movement and the Cowardly Lion represents Bryan himself – note, for one thing, that in L. Frank Baum’s book Dorothy wears Silver Shoes and not Ruby Slippers!
The truly great American liberal was FDR whose mission it was to save capitalism from itself by enacting social programs called for by socialist and labor groups and by setting up regulations and guard rails for business and markets. The New Deal programs provided jobs and funded projects that seeded future economic growth; the regulations forced capitalism to deal with its problem of cyclical crises, panics and depressions. He called for a “bank holiday,” kept the country more or less on the gold standard by issuing an executive order to buy up nearly all the privately held gold in the country (hard to believe today), began Social Security and unemployment insurance, instituted centralized controls for industry, launched major public works projects (from the Lincoln Tunnel to the Grand Coulee Dam), brought electricity to farms, archived the nation’s folk music and folklore, sponsored projects which brought live theater to millions (launching the careers of Arthur Miller, Orson Welles, Elia Kazan and many others) and more. This was certainly not a time of government shutdowns.
In the post WWII period and into the 1960s, there were even “liberal Republicans” such as Jacob Javits and Nelson Rockefeller; today “liberal Republican” is an oxymoron. The most daring of the liberal Republicans was Earl Warren, the one-time Governor of California who in 1953 became Chief Justice of the Supreme Court. In that role, Warren created the modern activist court, stepping in to achieve justice for minorities, an imperative which the President and the Congress were too cowardly to take on. But his legacy of judicial activism has led to a politicized Supreme Court with liberals on the losing side in today’s run of 5-4 decisions.
Modern day liberalism in the U.S. is also exemplified by LBJ’s Great Society which instituted Medicare and Medicaid and which turned goals of the Civil Rights Movement into law with the Civil Rights Act of 1964 and the Voting Rights Act of 1965.
JFK and LBJ were slow to rally to the cause of the Civil Rights Movement (Eleanor Roosevelt was the great liberal champion of civil rights) but in the end they did. Richard Nixon and the Republicans then exploited anti-African-American resentment in the once Democratic “solid South” and implemented their “Southern strategy” which, as LBJ feared, has turned those states solidly Republican ever since. The liberals’ political clout was also gravely wounded by the ebbing of the power of once mighty labor unions across the heartland of the country. Further, the conservative movement was energized by the involvement of ideologues with deep pockets like the Koch brothers and by the emergence of charismatic candidates like Ronald Reagan. The end result has been that only the West Coast and the Northeast can be counted on to elect liberal candidates consistently, places like San Francisco and Brooklyn.
What is more, liberal politicians have lost their sense of mission and have failed America in many ways since that time as they have moved further and further to the right in the wake of electoral defeats, cozying up to Wall Street along the way. For example, it was Bill Clinton who signed the bill repealing the Glass-Steagall Act undoing one of the cornerstones of the New Deal; he signed the bill annulling the Aid to Families With Dependent Children Act which also went back to the New Deal; he signed the bill that has made the US the incarceration capital of the world, the Violent Crime Control and Law Enforcement Act.
Over the years, the venerable term “liberal” itself has been subjected to constant abuse from detractors. The list of mocking gibes includes tax-and-spend liberal, bleeding heart liberal, hopey-changey liberal, limousine liberal, Chardonnay sipping liberal, Massachusetts liberal, Hollywood liberal, … . There is even a book of such insults and a web site for coming up with new ones. And there was the humiliating defeat of the liberal standard bearer Hillary Clinton in 2016.
So battered is it today that “liberal” is giving way to “progressive,” the label of choice for so many of the men and women of the class of 2018 of the House of Representatives. Perhaps, one hundred years is the limit to the shelf life of a major American political label which would mean “liberal” has reached the end of the line – time to give it a rest and go back to Samuel Johnson’s definition?

Conservative Semantics

Conservatism as a political philosophy traces its roots to the late 18th century: its intellectual leaders were the Anglo-Irish Member of Parliament Edmund Burke and the Scottish economist and philosopher Adam Smith.

In his speeches and writings, Burke extolled tradition, the “natural law” and “natural rights”; he championed social hierarchy, an established church, gradual social change and free markets; he excoriated the French Revolution in his influential pamphlet Reflections on the Revolution in France, a defense of monarchy and the institutions that protect good social order.

Burke is also well known in the U.S. for his support for the colonists in the period before the American Revolution notably in his Speech on Conciliation with the Colonies (1775) where he alerts Parliament to the “fierce spirit of liberty” that characterizes Americans.

Adam Smith, a giant figure of the Scottish Enlightenment, was the first great intellectual champion of laissez-faire capitalism and author of the classic The Wealth of Nations (1776).

Burke and Smith formed a mutual admiration society. According to a biographer of Burke, Smith thought that “on subjects of political economy, [Burke] was the only man who, without communication, thought on these subjects exactly as he did”; Burke, for his part, called Smith’s opus “perhaps the most important book ever written.” Their view of things became the standard one for conservatives throughout the 19th century and well into the 20th.

However, there is an internal inconsistency in traditional conservatism. The problem is that, in the end, laissez-faire capitalism upends the very social structures that traditional conservatism seeks to maintain. The catch-phrase of the day among pundits has become “creative destruction”; this formula, coined by the Austrian-American economist Joseph Schumpeter, captures the churning of capitalism which systematically creates new industries and new social institutions that replace the old – e.g. Sears by Amazon, an America of farmers by an America of city dwellers. Marx argued that capitalism’s failures would lead to its demise; Schumpeter argued that capitalism has more to fear from its triumphs: ineluctably the colossal success of capitalism hollows out the social institutions and mores which nurture capitalism such as church-going and the Protestant Ethic itself. Look at Western Europe today where capitalism is triumphant but where church attendance is reduced to three events: “hatch, match and dispatch,” to put it the playful way Anglicans do.

The Midas touch is still very much with us: U.S. capitalism tends to transform every activity it comes upon into a money-making version of itself. Thus something once innocent and playful like college athletics has been turned into a lucrative monopoly: the NCAA rules over a network of plantations staffed by indentured workers and signs billion dollar television contracts. Health care, too, has been transformed into a money-making machine with lamentable results: Americans pay twice as much for doctors’ care and prescription drugs as those in other advanced industrialized countries and the outcomes are grim in comparison – infant mortality and death in childbirth are off the charts in the U.S. and life expectancy is low compared to those other countries.

On the other hand, a modern capitalist economy can work well for its citizens. We have the examples of Scandinavia and of countries like Japan and Germany. Economists like Thomas Piketty write about the “thirty glorious” years after 1945 when post WWII capitalism built up a solid, prosperous middle class in Western Europe. Add to this what is known as the “French paradox” – the French drink more than Americans, smoke more, have sex more and still live some years longer. To make things worse, their cuisine is better, their work week is shorter and they take much longer vacations – one more example of how a nation can make capitalism work in the interest of its citizenry.

In American political life, in the 1930s, the label “conservative” was grabbed by forces opposed to FDR and the New Deal. Led by Senator Josiah W. Bailey of North Carolina, Democrats with some Republican support published “The Conservative Manifesto,” a document which extolled the virtues of free enterprise, limited government and the balance of power among the branches of government.

In the post-war period the standard bearer of conservatism in the U.S. was Republican Senator Robert Taft of Ohio who was anti-New-Deal, anti-union, pro-business and who, as a “fiscal conservative,” stood for reduced government spending and low taxes; he also stood for a non-interventionist foreign policy. His conservatism harked back to Burke’s ideals of community: he supported Social Security, a minimum wage, public housing and federal aid to public education.

However, the philosophy of the current “conservative” political leadership in the U.S. supports all the destructive social Darwinism of laissez-faire capitalism, reflecting the 17th century English philosopher Thomas Hobbes and his dystopian vision much more than either Burke or Smith. Contemporary “conservatism” in the U.S. is hardly traditional conservatism. What happened? Mystère.

A more formal manifesto of Burkean conservatism, The Conservative Mind, was published in 1953 by Russell Kirk, then a professor at Michigan State. But conservative thought was soon co-opted and transformed by a wealthy young Texan whose family money came from oil prospecting – in Mexico and Venezuela! William F. Buckley, like Kirk a Roman Catholic, was founder and longtime editor-in-chief of the seminal conservative weekly The National Review. Buckley is credited with (or accused of) transforming traditional Burkean conservatism into what goes by the name of “conservatism” in the U.S. today; he replaced the traditional emphasis on community with his libertarian viewpoint of “individualism” and replaced Taft’s non-interventionism with an aggressive Cold War political philosophy – the struggle against godless communism became the great moral cause of the “conservative movement.” For a portrait of the man, click HERE.

To his credit, Buckley kept his distance from fringe groups such as the John Birch Society; Buckley also eschewed Ayn Rand and her hyper-individualistic, atheistic philosophy of Objectivism; a man of letters himself, Buckley was likely appalled by her wooden prose – admittedly Russian and not English was her first language, but still she was no Vladimir Nabokov. On the other hand, Buckley had a long friendship with Norman Mailer, the literary icon from Brooklyn, the opposite of Buckley in almost every way.

Buckley as a cold war warrior was very different from libertarians Ron Paul and Rand Paul who both have an isolationist philosophy that opposes military intervention. On the other hand, Buckley always defended a position of white racial supremacy and the Pauls  have expressed eccentric views on race presumably justified by their shared libertarian concept of the right of individuals to do whatever they choose to do even if it includes discrimination against others. For example, Rand Paul has stated that he would have voted against the Civil Rights Act of 1964 which outlawed the Jim Crow Laws of the segregationist states “because of the property rights element … .”

In the 1960s, 70s and 80s, Buckley’s influence spread. The future president Ronald Reagan was weaned off New Deal Liberalism through reading The National Review; in turn Buckley became a supporter of Reagan and they appeared together on Buckley’s TV program Firing Line. The “conservative movement” was also propelled by ideologues with deep pockets and long-term vision like the Koch brothers – for an interesting history of all this, see Nancy MacLean’s Democracy in Chains.

To the Buckley conservatives today, destruction of social institutions, “creative” or otherwise, is somehow not a problem and militarism is somehow virtuous.

As for destruction, among the social structures that have fallen victim recently to creative destruction is the American middle class itself, as income inequality has grown apace. This process began at the tail end of the 1960s and has been accelerating since Ronald Reagan’s presidency as Keynesian economics has given way to “supply side” economics; moreover, the guardrails for capitalism imposed by the New Deal have come undone: the Glass-Steagall Act has been repealed, the labor movement has been marginalized, and high taxes on the wealthy have become a thing of the past – contrast this with the fact that Colonel Tom Parker, the manager of Elvis Presley, considered it his patriotic duty to keep The King in the 90% tax bracket back in the day.

As for militarism, despite VE Day and VJ Day, since the 1950s, the U.S. has been engaged in an endless sequence of wars – big (Korea) and small (Grenada), long (Vietnam) and short (the First Gulf War), visible (Afghanistan) and invisible (Niger), loud (Iraq) and quiet (Somalia), … . All of which has created a situation much like the permanent state of war of Orwell’s 1984.

Moreover, since Buckley’s time, American “conservatives” have moved even further right: reading Ayn Rand (firmly atheist and pro-choice though she was) in high school or college has become a rite of passage, e.g. for ex-Speaker Paul Ryan. An interventionist, even war-mongering, wing of the “conservative movement” has emerged, the “neo-conservatives” or “neo-cons.” Led by Dick Cheney, they were the champions of George W. Bush’s invasion of Iraq and they applaud all troop “surges” and new military interventions.

As David Brooks recently pointed out in his New York Times column (Nov. 16, 2018), the end of the Cold War deprived the “conservative movement” of its great moral cause, the struggle against godless communist collectivism. And what was a cause has morphed into expensive military adventurism. Indeed, the end of the Cold War failed to yield a “peace dividend” and the military budget today threatens the economic survival of the nation – the histories of France, Spain and many other countries bear witness to how this works itself out, alas! In days of yore, it would have been the fiscal restraint of people known as conservatives that kept government spending in check; today “conservative” members of Congress continue to sound like Robert Taft on the subject of government spending when attacking programs sponsored by their opponents, but they do not hesitate to drive the national debt higher by over-funding the military and pursuing tax cuts for corporations and the wealthy. Supply-side economics cleaves an ever widening income gap, the least conservative social policy imaginable. Then too these champions of the free market and opponents of government intervention rushed to bail out the big banks (but not the citizens whose homes were foreclosed on) during the Great Recession of 2008. All this leads one to think that this class of politicians is serving its donor class and not the working class, the middle class, the upper middle class or even much of the upper class.

Perhaps semantic rock bottom is reached when “conservative” members of Congress vote vociferously against any measure for environmental conservation. But this is predictable given the lobbying power of the fossil fuel industry, a power so impressive that even the current head of the Environmental Protection Agency is a veteran lobbyist for Big Coal. Actually for these conservatives, climate change denial is consistent with their core beliefs: fighting the effects of global warming effectively will require large-scale government intervention, significantly increased regulation of industry and agriculture as well as binding international agreements – all of which are anathema to conservatives in the U.S. today.

Still, it is in matters judicial that the word “conservative” is most misapplied. One speaks today of “conservative majorities” on the Supreme Court, but these majorities have proved themselves all too ready to rewrite laws and overturn precedent in 5-4 decisions in an aggressive phase of judicial activism.

So for those who fear that corruption of the language is dangerous for the U.S. population, this is the worst of times: “liberal,” which once designated a proponent of Gilded Age laissez-faire capitalism, is now claimed by the heirs of the New Deal and the Great Society; “conservative,” which once designated a traditionalist, is now the label for radical activists both political and judicial. “Liberal” is yielding to “progressive” now. However, the word “conservative” has a certain gravitas to it and “conservatism” has taken on the trappings of a religious movement complete with patron saints like Ronald Reagan and Margaret Thatcher; “conservative” is likely to endure, self-contradictory though it has become.

The Roberts Court

 

In 2005, upon the death of Chief Justice Rehnquist, John Roberts was named to the position of Chief Justice by Republican president George W. Bush. Another change in Court personnel occurred in 2006 when Sandra Day O’Connor retired and was replaced by Justice Samuel Alito. With Roberts and Alito, the Court had an even more solid “conservative” majority than before – the result being that, more than ever, in a 5-4 decision a justice’s vote would be determined by the party of the president who appointed him or her.

It was Ronald Reagan who named the first woman justice to the Supreme Court with the appointment of Sandra Day O’Connor in 1981. It was also Ronald Reagan who began the practice of Republican presidents’ naming ideological, conservative Roman Catholics to the Supreme Court with the appointment of Antonin Scalia in 1986. This practice has indeed been followed faithfully, for we have to include Neil Gorsuch in this group of seven – though an Episcopalian today, Gorsuch was raised Catholic, went to parochial school and even attended the now notorious Georgetown Prep. Just think: with Thomas and Gorsuch already seated, the Brett Kavanaugh appointment brings the number of Jesuit-trained justices on the Court up to three; this numerologically magic number of men trained by an organization famous for having its own adjective, plus the absence of true WASPs from the Supreme Court since 2010, plus the fact that all five of the current “conservative” justices have strong ties to the cabalistic Federalist Society, could all make for an interesting conspiracy theory – or at least the elements of a Dan Brown novel.

It is said that Chief Justice Roberts is concerned about his legacy and does not want his Court to go down in history as ideological and “right wing.” However, this “conservative” majority has proven radical in its 5-4 decisions, decisions for which it bears full responsibility.

They have put gun manufacturers before people by replacing the standard interpretation of the 2nd Amendment, one that went back to Madison’s time, with a dangerous new one – cynically appealing to “originalism” and claiming the authority to speak for Madison and his contemporaries (District of Columbia v. Heller 2008).

Indeed, with Heller there was no compelling legal reason to play games with the meaning of the 2nd Amendment – if the over 200 years of interpretation of the wording of the amendment isn’t enough, if the term “militia” isn’t enough, if the term “bear arms” isn’t enough to link the amendment to matters military in the minds of the framers, one can consult James Madison’s original text:

    “The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.” [Italics added].

The italicized clause was written to reassure Quakers and other pacifist religious groups that the amendment was not forcing them to serve in the military, but it was ultimately excluded from the final version for reasons of separation of church and state. This clause certainly indicates that the entirety of the amendment, in Madison’s view, was for the purpose of maintaining militias: Quakers are not vegetarians and do use firearms for hunting. Note too that Madison implies in this text and in the shorter final text as well that “the right to bear arms” is a collective “right of the people” rather than an individual right to own firearms.

The radical ruling in Heller by the five “conservative” justices has stopped all attempts at gun control, enriched gun manufacturers, elevated the National Rifle Association to the status of a cult and made the Court complicit in the wanton killings of so many.

The “conservative” majority of justices has overturned campaign finance laws passed by Congress and signed by the President by summoning up an astonishing, ontologically challenged version of the legal fiction that corporations are “persons” and imbuing them with new First Amendment rights (Citizens United v. FEC 2010).

Corporations are treated as legal “persons” in some court matters, basically so that they can pay taxes and so that the officers of the corporation are not personally liable for a corporation’s debts. But there was no compelling legal reason to play Frankenstein in Citizens United and create a new race of corporate “persons” by endowing corporations with a human-like right to free speech that allows them to spend their unlimited money on U.S. political campaigns; this decision is the first of the Roberts Court’s rulings to make this list of all-time worst Supreme Court decisions, a list (https://blogs.findlaw.com/supreme_court/2015/10/13-worst-supreme-court-decisions-of-all-time.html ) compiled for legal professionals. It has also made TIME magazine’s list of the two worst decisions in the last 60 years and likely many other such rankings. The immediate impact of this decision has been a further widening of the gap between representatives and the people they are supposed to represent; the political class was once at least somewhat responsive to the voters but is now responsive chiefly to the donor class. This likely works well for the libertarians and conservatives who boast “this is a Republic, not a Democracy.”

These same five justices have continued their work by

  • usurping Congress’ authority and undoing hard-won minority protections from the Voting Rights Act by adventuring into areas of history and politics that they clearly do not grasp and basing the decision on a disingenuous view of contemporary American race relations (Shelby County v. Holder 2013),

  • doubling down on quashing the Voting Rights Act five years later in a decision that overturned a lower court ruling that Texas’ gerrymandered redistricting map undercut the voting power of black and Hispanic voters (Texas Redistricting Case 2018),

  • breaching the separation of Church and State by ascribing “religious interests” to companies in a libertarian judgment that can justify discrimination in the name of a person’s individual freedoms, the “person” in this case being a corporation no less (Burwell v. Hobby Lobby Stores 2014),

  • gravely wounding the labor movement by overturning the Court’s own ruling in a 1977 case, Abood v. Detroit Board of Education, thus undoing years of established Labor Law practice (Janus v. AFSCME 2018) – a move counter to the common law principle of following precedent.

These six decisions are examples of “self-inflicted wounds” on the part of the Roberts Court and can be added to a list begun by Chief Justice Charles Evans Hughes, a list that begins with Dred Scott. The recent accessions of Neil Gorsuch and Brett Kavanaugh to seats on the Court may well make for even more decisions of this kind.

This judicial activism is indeed as far away from the dictionary meaning of “conservatism” as one can get. Calling these activist judges “conservative” makes American English a form of the “Newspeak” of Orwell’s 1984. The Court can seem to revel in its arrogance and its usurpation of power: Justice Scalia would dismiss questions about Bush v. Gore with “Get over it” – a rejoinder some liken to “Let them eat cake” – and he refused to recuse himself in cases involving Dick Cheney, his longtime friend (Bush v. Gore, Cheney v. U.S. District Court).

The simple fact is that the courts have become too politicized. The recent fracas between the President and the Chief Justice where the President claimed that justices’ opinions depended on who appointed them just makes this all the more apparent.

Pundits today talk endlessly on the topic of how “we are headed for a constitutional crisis” in connection with potential proceedings to impeach the President. But we are indeed in a permanent constitutional crisis in any case. For example, there is a clear majority in the country that wants to undo the results of Citizens United and of Heller in particular – both decisions shot down laws enacted by elected representatives. Congressional term limits are another example; in 1995, with U.S. Term Limits Inc. v. Thornton, the Court nullified 23 state laws instituting term limits for members of Congress, thereby declaring that the Constitution had to be amended for such laws to pass judicial review.

In the U.S. the Congress is helpless when confronted with this kind of dilemma; passing laws cannot help since the Court has already had the final say, a true Catch-22. This is an American constitutional problem, American Exceptionalism gone awry. In England and Holland, for example, the courts cannot apply judicial review to nullify a law; in France, the Conseil Constitutionnel has very limited power to declare a law unconstitutional; this was deliberately engineered by Charles de Gaulle to avoid an American style situation because, per de Gaulle, “la [seule] cour suprême, c’est le peuple” (the only supreme court is the people).

So what can the citizen majority do? The only conceivable recourse is to amend the Constitution; but the Constitution itself makes that prospect dim since an amendment would require approval by a two-thirds supermajority in both houses of Congress followed by ratification by three-fourths of the states; moreover, the smallest states, with barely 4% of the population among them, make up more than the one-fourth of the states needed to block ratification, making it easy and relatively inexpensive for opposition to take hold – e.g. the Equal Rights Amendment. Also, those small, rural states’ interests can be very different from the interests of the large states – one reason reform of the Electoral College system is an impossibility with the Constitution as it is today. Only purely bureaucratic measures can survive the amendment ratification process. Technically, there is a second procedure in Article V of the Constitution where a proposal to call a Constitutional Convention can be initiated by two-thirds of the states but then approval of an amendment by three-fourths of the states is still required; this procedure has never been used. Doggedly, the feisty group U.S. Term Limits (USTL), the losing side in that 1995 decision, is trying to do just that! For their website, click https://www.termlimits.com/ .

What has happened is that the Constitution has been gamed by the executive and judicial branches of government and the end result is that the legislative branch is mostly reduced to theatrics. Thus, for example, while the Congress is supposed to have the power to make war and the power to levy tariffs, these powers have been delegated to the office of the President. Even the power to make law has, for so many purposes, been passed to the courts where every law is put through a legal maze and the courts are free to nullify the law or change its meaning invoking the interpretation du jour of the Constitution and/or overturning legal precedents, all on an as needed basis.

This surge in power of the judiciary was declared to be impossible by Alexander Hamilton in his Federalist 78, where he argues approvingly that the judiciary will necessarily be the weakest branch of the government under the Constitution. But oddly no one is paying attention to the rock-star founding father this time. For example, this Federalist Paper of Hamilton’s is sacred scripture for the assertive Federalist Society, yet they seem silent on this issue – not surprising given that they have become the gatekeepers for Republican presidents’ nominees for the Supreme Court.

Americans are simply blinded to problems with the Constitution by the endless hymns in its praise in the name of American Exceptionalism. Many in Europe also argue that the way the Constitution contributes to inaction is a principal reason that voter participation in the U.S. is far lower than that in Europe and elsewhere. Add to that the American citizen’s well-founded impression that it is the money of corporations, billionaires and super-PACs in cahoots with the lobbyists of the Military Industrial Complex, Big Agriculture, Big Pharma, Big Oil and Big Banks that runs the show and you have a surefire formula to induce voter indifference. Even the improved turnout in the 2018 midterm elections was unimpressive by international standards.

This is not a good situation; history has examples of what happens when political institutions are no longer capable of running a complex nation – the French Revolution, the fall of the Roman Republic, the rise of Fascism in post WWI Europe … .

Bush v. Gore

In 1986, when Warren Burger retired, Ronald Reagan promoted Associate Justice William Rehnquist to the position of Chief Justice and nominated Antonin Scalia to fill Rehnquist’s seat. This laid the foundation for a solid conservative kernel on the Court, one that came to consist of the five justices Rehnquist, Scalia, O’Connor, Kennedy (seated in 1988) and Thomas (seated in 1991); there was also Justice John Paul Stevens (appointed by Gerald Ford) who was considered a “moderate conservative.” On occasion O’Connor or Kennedy could become a swing vote and turn things in another direction, and Stevens too voted against the conservative majority on some important decisions.
While more conservative than the Burger Court, the Rehnquist Court did not overthrow the legacy of the Warren Court; on the other hand, it promoted a policy of “New Federalism” which favored empowering the states rather than the federal government.
This philosophy was applied in two cases that weakened Roe v. Wade, the defining ruling of the Burger Court.
Thus in Webster v. Reproductive Health Services (1989), the Court upheld a Missouri law that restricted the way state funds could be used in connection with counseling and other aspects of abortion services; this ruling allowed states to legislate in ways thought to have been ruled out by Roe.
As a second example, we have their ruling in Planned Parenthood v. Casey (1992) which also weakened Roe by giving much more power to the states to control access to abortion. Thus today in states like Mississippi, there is virtually no such access. All this works against the poor and the less affluent as women need to travel far, even out of state, to get the medical attention they seek.
Then the Rehnquist Court delivered one of the most controversial, politicized decisions imaginable with its ruling in Bush v. Gore (2000). With this decision, the Court came between a state supreme court and the state’s election system and hand-delivered the presidency to Republican George W. Bush.
After this case, the Court made other decisions that generated some controversy, but in these it came down, relatively speaking, on the liberal side in ruling on anti-sodomy laws, on affirmative action and on election finance. However, Bush v. Gore is considered one of the worst Supreme Court decisions of all time. For a list that includes this decision, Dred Scott, Plessy v. Ferguson and ten others, click HERE ; for a TIME magazine piece that singles it out as one of the two worst decisions since 1960 (along with Citizens United v. FEC), click HERE .
Naturally, the 5-4 decision in Bush v. Gore by the Court’s conservative kernel is controversial because of the dramatic end it put to the 2000 presidential election. There are also legal and procedural aspects of the case that get people’s dander up.
To start there is the fact that in this decision the Court overruled a state supreme court on the matter of elections, something that the Constitution itself says should be left to the states.
For elections, Section 4 of Article 1 of the U.S. Constitution leaves the implementation to the states to carry out, in the manner they deem fit – subject to Congressional oversight but not to court oversight:
    “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing (sic) Senators.”
N.B. In the Constitution, the “Senators” are an exception because at that time the senators were chosen by the state legislatures and direct election of senators by popular vote did not come about until 1913 and the 17th Amendment.
From the time of the Constitution, voting practices have varied from state to state. In fact, at the outset free African-Americans with property could vote in Maryland and that lasted until 1810; women of property could vote in New Jersey until 1807; in both cases, the state legislatures eventually stepped in and “restored order.”
In the Constitution, it is set out that the electors of the Electoral College should be named by each state legislature and not voted for by the people at all – Hamilton and Madison were most fearful of “mob rule.” The only founding father who expressed some admiration for the mass of U.S. citizenry was, not surprisingly, Jefferson who famously asserted in a letter to Lafayette that “The yeomanry of the United States are not the canaille [rabble] of Paris.”
Choosing Electors by popular vote was established nation-wide, however, by the 1820s; by Section 4 of Article 1 above, it is each state’s responsibility to implement its own system for choosing electors; there is no requirement for things to be uniform. In fact, today Nebraska and Maine use congressional district voting to divide up their electors among the candidates while all the other states use plurality voting where the presidential candidate with the most votes is awarded all the electoral votes from that state. In yet another break with the plurality voting system that the U.S. inherited from England, the state of Maine now employs ranked choice voting to elect Congressional representatives – in fact, in 2018 a candidate in a House Congressional race in Maine with fewer first-place votes but a larger total of first- and second-place votes emerged the victor in the second round of the instant runoff.
So from a Constitutional point of view, the Supreme Court really did not have the authority to take the case Bush v. Gore on. In his dissent, Justice John Paul Stevens decried this usurpation of state court power:
    [The court displayed] “an unstated lack of confidence in the impartiality and capacity of the state judges who would make the critical decisions if the vote count were to proceed”.
Moreover, Sandra Day O’Connor said as much when some years later in 2013 she expressed regret over her role in Bush v. Gore telling the Chicago Tribune editorial board: “Maybe the court should have said, ‘We’re not going to take it, goodbye.’ ”
Taking this case on added to the Court’s history of “self-inflicted wounds,” to use the phrase Chief Justice Charles Evans Hughes applied to bad decisions that the Court had no compelling legal reason to make the way it did.
The concurring justices admitted that their decision was not truly a legal ruling but rather an ad hoc way of making a problem go away when they said that the ruling in Bush v. Gore should not be considered a precedent for future cases:
    “Our consideration is limited to the present circumstances, for the problem of equal protection in election processes generally presents many complexities.”
Another odd thing was that the ruling did not follow the usual practice of having one justice write the deciding opinion with concurring and dissenting opinions from the other justices. Instead, they issued what is known as a per curiam decision, a device usually reserved for a 4-4 hung court when no actual decision is being made. It is a technique for dodging responsibility for the decision and for not assigning credit for the decision to a particular justice. As another example of how this method of laying down a decision is employed, the Florida Supreme Court often issues per curiam decisions in death penalty cases.
Borrowing a trope from the Roman orator Cicero, we pass over in silence the revelation that three of the majority judges in this case had reason to recuse themselves by not mentioning the fact that Justice Thomas’ wife was very active in the Bush transition team even as the case was before the Court, by leaving out the fact that Justice Scalia’s son was employed by the very law firm that argued Bush’s case before the Court, by omitting the fact that Justice Scalia and vice-presidential candidate Dick Cheney were longtime personal friends and by skipping over the fact that according to The Wall Street Journal and Newsweek, Justice O’Connor had previously said that a Gore victory would be a disaster for her because she would not want to retire under a Democratic president!
For an image of Cicero practicing his craft before an enthralled Roman Senate, click HERE .
So we limit ourselves to quoting Harvard Professor Alan Dershowitz who summed things up this way:
    “The decision in the Florida election case may be ranked as the single most corrupt decision in Supreme Court history, because it is the only one that I know of where the majority justices decided as they did because of the personal identity and political affiliation of the litigants. This was cheating, and a violation of the judicial oath.”
Another villain in the piece is Governor Jeb Bush of Florida whose voter suppression tactics implemented by Secretary of State Katherine Harris disenfranchised a significant number of voters. In the run up to this election, according to the Brennan Center for Justice at NYU, some 4,800 eligible African-American Florida voters were wrongly identified as convicted felons and purged from the voting rolls. Given that 86% of African-American voters went for Gore over Bush in 2000, one can do the math and see that Gore would likely have won if but 20% of these African-American voters had been able to cast ballots.
Yet another villain in the piece and in the recurring election problems in Florida is the plurality voting system that the state uses to assign all its votes for its electors to the candidate who wins the most votes (but not necessarily the majority of votes). This system works poorly in cases where the elections are as tight as they repeatedly prove to be in Florida. In 2000, had Florida been using ranked-choice voting (to account for votes for Nader and Buchanan) or congressional district voting (as in Maine and Nebraska), there would have been no recount crisis at all – and either way Gore would in all probability have won enough electoral votes to secure the presidency and the matter never would have reached the Supreme Court.
Sadly, the issues of the presidential election in Florida in 2000 are still very much with us – the clumsiness of plurality voting when elections are close, the impact of voter suppression, antiquated equipment, the role of the Secretary of State and the Governor in supervising elections, … . The plot only thickens.

The Warren Court, Part B

In the period from 1953-1969, Earl Warren became the most powerful Chief Justice since John Marshall as he led the Court through a dazzling series of rulings that established the judiciary as a more than equal partner in government – an outcome deemed impossible by Alexander Hamilton in his influential paper Federalist 78 and an outcome deemed undesirable for the separation of powers in government by Montesquieu in his seminal The Spirit of the Laws. The impact of this Court was so dramatic that it provoked a nationwide call among conservatives for Warren’s impeachment. (Click HERE).
As the Cold War intensified, the historical American separation of Church and State was compromised. In the name of combating godless communism, the national motto was changed! From the earliest days of the Republic, the motto had been “E Pluribus Unum,” which is the Latin for “Out of Many, One”; this motto was adopted by an Act of Congress under the Articles of Confederation in 1782. In 1956, the official motto became “In God We Trust” and that text now appears on all U.S. paper currency. Shouldn’t they at least have asked what deist leaning Washington, Jefferson, Franklin and Hamilton would have thought before doing this – after all their pictures are on the bills?
Spurred on by the fact that the phrase “under God” appears in most versions of the Gettysburg Address,
          that this nation, under God, shall have a new birth of freedom
groups affiliated with organized religion like the Knights of Columbus successfully campaigned for Congress to insert this phrase into the Pledge of Allegiance; this was done in 1954. Interestingly, “under God” does not appear in Lincoln’s written text for the cemetery dedication speech but was recorded by listeners who were taking notes. Again, this insertion in the Pledge was justified at the time by the need to rally the troops in the struggle against atheistic communism. In particular, in the Catholic Church a link was established between the apparitions of The Virgin at Fatima in the period from May to October of 1917 and the October Revolution in Russia in 1917 – though the Revolution actually took place on November 6th and 7th in the Gregorian Calendar; with the Cold War raging, the message of Fatima became a call to say the rosary for the conversion of Russia, a directive that was followed fervently by the laity, especially school children, during the 1950s and 1960s. In addition, the “Second Secret” of Our Lady of Fatima was revealed to contain the line “If my requests are heeded, Russia will be converted, and there will be peace.” When the Soviet Union fell at the end of 1991, credit was ascribed to Ronald Reagan and other political figures; American Catholics of a certain age felt slighted indeed when their contributing effort went unrecognized by the general public!
The course was reversed somewhat with the Warren Court’s verdict in Engel v. Vitale (1962) when the Court declared that organized school prayer violated the separation of Church and State. A second (and better known) decision followed in Abington School District v. Schempp (1963) where the Court ruled that official school Bible reading also violated the separation of Church and State. This latter case is better known in part because it involved the controversial atheist Madalyn Murray O’Hair who went on to make an unsuccessful court challenge to remove “In God We Trust” from U.S. paper currency. Ironically, the federal courts that thwarted this effort cited Abington School District where Justice Brennan’s concurring opinion explicitly stated that “the motto” was simply too woven into the fabric of American life to “present that type of involvement which the First Amendment prohibits.” In the U.S., “God” written with a capital “G” refers specifically to the Christian deity; so a critic deconstructing Brennan’s logic might argue that Brennan concedes that worship of this deity is already an established religion here.
The Warren Court also had a significant impact on other areas of rights and liberties.
With Baker v. Carr (1962) and Reynolds v. Sims (1964), the Court codified the principle of “one man, one vote.” In the Baker case, the key issue was whether state legislative redistricting was a matter for state and federal legislatures or whether it came under the authority of the courts. Here, the Court overturned its own decision in Colegrove v. Green (1946) where it ruled that such redistricting was a matter for the legislatures themselves with Justice Frankfurter declaring “Courts ought not to enter this political thicket.” The majority opinion in the Baker ruling was written by Justice Brennan; Frankfurter naturally dissented. In any case, this was a bold usurpation of authority on the part of the Supreme Court, something hard to undo even should Congress wish to do so. Again we are very far from Marbury v. Madison; were that case to come up today, one would be very surprised if the Supreme Court didn’t instruct Secretary of State Madison to install Marbury as Justice of the Peace in Washington D.C.
With Gideon v. Wainwright (1963) the Court established the accused’s right to a lawyer in state legal proceedings. This right is established for defendants vis-à-vis the federal government by the Bill of Rights with the Fifth and Sixth Amendments; this case extended that protection to defendants in dealings with the individual states.
With Miranda v. Arizona (1966), it mandated protection against self-incrimination – the “Miranda rights” that a suspect in custody must be informed of. A Virginia state law banning interracial marriage was struck down as unconstitutional in Loving v. Virginia (1967), a major civil rights case on its own.
The Gideon and Miranda rulings were controversial, especially Miranda, but they do serve to protect the individual citizen from the awesome power of the State, very much in the spirit of the Bill of Rights and of the Magna Carta; behind the Loving case is an inspiring love story and, indeed, it is the subject of a recent motion-picture.
Warren’s legacy is complex. On the one hand, his Court courageously addressed pressing issues of civil rights and civil liberties, issues that the legislative and executive branches would not deal with. But by going where Congress feared to tread, the delicate balance of the separation of powers among the three branches of government has been altered, irreparably it appears.
The Warren Court (1953-1969) was followed by the Burger Court (1969-1986).
Without Earl Warren, the Court quickly reverted to making decisions that went against minorities. In San Antonio Independent School District v. Rodriguez (1973), the Court held in a 5-4 decision that inequities in school funding did not violate the Constitution; the ruling implied that discrimination against the poor is perfectly compatible with the U.S. Constitution and that the right to an education is not a fundamental right. This decision was based on the fact that the right to an education does not appear in the Constitution, echoing the logic of Marbury. Later some plaintiffs managed to side-step this ruling by appealing directly to state constitutions. We might add that all 5 concurring justices in this 5-4 ruling were appointed by a Republican president – a pattern that is all too common today; judicial activism fueled by political ideology is a dangerous force.
The following year, in another 5-4 decision, Milliken v. Bradley (1974), the Court further weakened Brown by overturning a circuit court’s ruling. With this ruling, the Court scrapped a plan for school desegregation in the Detroit metropolitan area that involved separate school districts, thus preventing the integration of students from Detroit itself with those of adjacent suburbs like Grosse Pointe. The progressive stalwarts Marshall, Douglas and Brennan were joined by Byron White in their dissent; the 5 concurring justices were all appointed by Republican presidents. The decision cemented into place the pattern of city schools with black students and surrounding suburban schools with white students.
The most controversial decision made by the Burger Court was Roe v. Wade (1973). This ruling invoked the Due Process Clause of the 14th Amendment and established a woman’s right to privacy as a fundamental right and declared that abortion could not be subject to state regulation until the third trimester of pregnancy. Critics, including Ruth Bader Ginsburg, have found fault with the substance of the decision and its being “about a doctor’s freedom to practice his profession as he thinks best…. It wasn’t woman-centered. It was physician-centered.” A fresh attempt to overturn Roe and subsequent refinements such as Planned Parenthood v. Casey (1992) is expected, given the current ideological makeup of the conservative majority on the Court and the current Court’s propensity to overturn even recent rulings.
Today, to the overweening power of the Court has been added a political dimension: in 5-4 decisions there continues to be, with rare exceptions, a direct correlation between a justice’s vote and the party of the president who appointed that justice. To that, add the blatantly partisan political shenanigans we have seen on the part of the Senate Majority Leader in dealing with Supreme Court nominations and the litmus test provided by the conservative/libertarian Federalist Society. The plot thickens. Affaire à suivre.

The Warren Court, Part A

Turning to the courts when the other branches of government would not act was the technique James Otis and the colonists resorted to in the period before the American Revolution, the period when the Parliament and the Crown would not address “taxation without representation.” Like the colonists, African-Americans had to deal with a government that did not represent them. Turning to the courts to achieve racial justice and to bring about social change was then the strategy developed by the NAACP. However, for a long time, even victories in the federal courts were stymied by state level opposition. For example, Guinn v. United States (1915) put an end to one “literacy test” technique for voter suppression but substitute methods were quickly developed.
In the 1950s, the Supreme Court finally undid the post Civil War cases where the Court had authorized state level suppression of the civil rights of African-Americans – e.g. the Slaughterhouse Cases (1873), the Civil Rights Cases (1883), Plessy v. Ferguson (1896); these were the Court decisions that callously rolled back the 13th, 14th and 15th amendments to the Constitution and locked African-Americans into an appalling system.
The first chink in Plessy was made in a case brilliantly argued before the Supreme Court by Thurgood Marshall, Sweatt v. Painter (1950). The educational institution in this case was the University of Texas Law School at Austin, which at that time actually had a purportedly equal but certainly separate school for African-American law students. The Court was led by Kentuckian Fred M. Vinson, the last Chief Justice to be appointed by a Democratic president – in this case Harry Truman! Marshall exposed the law school charade for the scam that it was. Similarly and almost simultaneously, in McLaurin v. Oklahoma State Regents for Higher Education, the Court ruled that the University of Oklahoma could not enforce segregation in classrooms for PhD students. In these cases, the decisions invoked the Equal Protection Clause of the 14th Amendment; both verdicts were unanimous.
These two important victories for civil rights clearly meant that by 1950 things were starting to change, however slowly – was it World War II and the subsequent integration of the military? Was it Jackie Robinson, the Brooklyn Dodgers and the integration of baseball? Was it the persistence of African-Americans as they fought for what was right? Was it the Cold War fear that U.S. racial segregation was a propaganda win for the international Communist movement? Was it the fear that the American Communist party had gained too much influence in the African-American community – indeed Langston Hughes, Paul Robeson and other leaders had visited the Soviet Union, and the leading scholar in the U.S. of African-American history was Herbert Aptheker, a card-carrying member of the Communist Party? Or was it an enhanced sense of simple justice on the part of nine “old white men”?
The former Republican Governor of California, Earl Warren, was named to succeed Vinson in 1953 by President Dwight D. Eisenhower. The Warren Court would overturn Plessy and other post Civil War decisions that violated the civil rights of African-Americans and go on to use the power of the Court in other areas of political and civil liberties. This was a period of true judicial activism. Experienced in government, Warren saw that the Court would have to step in to achieve important democratic goals that the Congress was unwilling to act on. Several strong, eminent jurists were part of this Court. There were the heralded liberals William O. Douglas and Hugo Black. There was Viennese-born Felix Frankfurter, a former Harvard Law Professor and a co-founder of the ACLU; Frankfurter was also a proponent of judicial restraint which strained his relationship with Warren over time as bold judgments were laid down. For legal intricacies, Warren relied on William J. Brennan, another Eisenhower appointee but a friend of Labor and a political progressive. Associate Justice John Marshall Harlan, the grandson and namesake of the sole dissenter in Plessy, was the leader of the conservative wing.
Perhaps, the most well-known of the Warren era cases is Brown v. Board of Education, which grouped several civil rights suits that were being pursued by the NAACP and others; the ruling in this case, which like Sweatt was again based on the Equal Protection Clause, finally undid Plessy. This case too was argued before the Court by Thurgood Marshall.
Brown was followed by several other civil rights cases which ended legal segregation in other aspects of American life. Moreover, when school integration was not being implemented around the country, with the case Brown v. Board of Education II (1955), the Court ordered schools to desegregate “with all deliberate speed”; this elusive phrase proved troublesome. It was introduced in the 1912 decision in Virginia v. West Virginia by Court wordsmith Oliver Wendell Holmes Jr. and it was used in Brown II at the behest of Felix Frankfurter, that champion of judicial restraint. The decision was 9-0, as it was in all the Warren Court’s desegregation cases, something that Warren considered most important politically.
With Brown II and other cases, the Court ordered states and towns to carry out its orders. This kind of activism is patently inconsistent with the logic behind Marbury v. Madison where John Marshall declared that the Court could not order anything that was not a power it was explicitly given in the Constitution, not even something spelt out in an act of Congress. No “foolish consistency” to worry about here.
However, school desegregation hit many obstacles. The resistance was so furious that Prince Edward County in Virginia actually closed its schools down for 5 years to counter the Court’s order; in Northern cities like Boston, enforced busing led to rioting; Chris Rock “jokingly” recounts that in Brooklyn NY he was bused to a neighborhood poorer than the one he lived in – and he was beaten up every day to boot.
The most notorious attempt to forestall the desegregation ruling took place in Little Rock, AR in September, 1957. Nine (outstanding) African-American students had been chosen to enroll in previously all white Central High School. The governor, Orval Faubus, actually deployed National Guard troops to assist segregationists in their effort to prevent these students from attending school. President Dwight D. Eisenhower reacted firmly; the Arkansas National Guard was federalized and taken out of the governor’s control and the elite 101st Airborne Division of the U.S. Army (the “Screaming Eagles”) was sent to escort the nine students to class, all covered on national television.
Segregationist resistance did not stop there: among other things, the Little Rock schools were closed for the 1958-59 school year in a failed attempt to turn city schools into private schools and this “Lost Year” was blamed on African-American students. It was ugly.
The stirring Civil Rights movement of the 1950s and 1960s fought for racial equality on many fronts. It spawned organizations and leaders like SNCC (Stokely Carmichael), CORE (Roy Innis) and SCLC (Martin Luther King Jr.) and it spawned activists like Rosa Parks, John Lewis, Michael Schwerner, James Chaney and Andrew Goodman. The price was steep; people were beaten and people were murdered.
The President and Congress were forced to react and enacted the Civil Rights Act of 1964 and the Voting Rights Act of 1965. The latter, in particular, had enforcement provisions which Supreme Court decisions like Guinn had lacked. This legislation reportedly led Lyndon Johnson to predict that the once Solid South would be lost to the Democratic Party. Indeed today, the New South is made up of “deep red” states. Ironically, it was the Civil Rights movement that made the prosperous New South possible – with segregation, companies (both domestic and international) wouldn’t relocate there or expand operations there; with segregation, the impressive Metro system MARTA of Atlanta could never have been possible; with segregation, a modern consumer economy cannot function; with segregation, Alabama wouldn’t be the reigning national football champion – and college football is a big, big business.
Predictably, there was a severe backlash against the new legislation and already in 1964 two expedited challenges reached the Warren Court, Heart of Atlanta v. United States and Katzenbach v. McClung. Both rulings were in favor of the Civil Rights Act by means of 9-0 decisions. Interestingly, in both cases, the Court invoked the Commerce Clause of the Constitution rather than the 13th and 14th amendments basing the decision on the authority of the federal government to regulate interstate commerce rather than on civil liberties; experts warn that this could make these decisions vulnerable in the future.
The period of slavery followed by the period of segregation and Jim Crow laws lasted 346 years from 1619 to 1965. Until 1776, this repression was enforced by the English Crown and Parliament, then until the Civil War by the Articles of Confederation and the U.S. Constitution; and then until 1965 by state governments and the Supreme Court. During this time, there was massive wealth accumulation by white America, drawn in no small measure from the profits of slave labor and later the Jim Crow economy. Great universities such as the University of Virginia, Duke and Clemson owe their existence to fortunes gained through this exploitation. Recently, it was revealed that Georgetown University profited from significant slave sales in Maryland to finance its operations. In the North too, the profits from selling factory product to the slave states, to say nothing of the slave trade itself, contributed to the endowments of the great universities of the northeast. Indeed, Columbia, Brown and Harvard have publicly recognized their ties to slavery and the slave trade.  On the other hand, Europeans who arrived in the U.S. in the waves of immigration following the Civil War and their descendants were able, in large numbers, to accumulate capital and accede to home ownership and eventually to higher education. Black America was simply denied this opportunity for those 346 years and today the level of black family wealth is still appallingly low – to cite a Washington Post article of Sept. 28, 2017: “The median net worth of whites remains nearly 10 times the size of blacks’. Nearly 1 in 5 black families have zero or negative net worth — twice the rate of white families.”
It is hard to imagine how this historical injustice can ever be righted. The Supreme Court has played a nefarious role in all this from the Marshall Court’s assiduous defense of the property rights of slave owners (Scott v. London (1806), etc.) to Dred Scott to Plessy, weakening the 14th and 15th amendments en passant, enabling Jim Crow and creating the world of “Separate But Equal.” Earl Warren’s leadership was needed in the period following the Civil War but alas that is not what happened.
In addition to these celebrated civil rights cases, the Warren Court also had to take on suits involving separation of Church and State and involving protection of the individual citizen from the awesome power of the State, the very thing that made the Bill of Rights necessary. More to come. Affaire à suivre.

Business and Baseball

The twentieth century began in 1901. Teddy Roosevelt became President after William McKinley’s assassination by an anarchist at the Pan American Exposition in Buffalo NY. This would prove a challenging time for the Supreme Court and judicial review. By the end of the century the power and influence of the Court over life in America would far exceed the limits stipulated by the Baron de Montesquieu in The Spirit of the Laws or those predicted by the analysis of Alexander Hamilton in Federalist 78.
Normally, the most visible of the justices on the Court is the Chief Justice but in the period from 1902 till 1932, the one most quotable was Associate Justice Oliver Wendell Holmes Jr. Holmes Sr. was the famous physician, writer and poet, author of Old Ironsides and other entries in the K-12 canon. For his part, Holmes Jr. wrote Supreme Court decisions and dissents that have become part of the lore of the Court.
In 1905, the 5-4 Court ruled against the state of New York in one of its more controversial decisions, Lochner v. New York. Appealing to laissez-faire economics, the majority ruled that the state did not have the authority to limit bakery workers’ hours to 10 hours a day, 60 hours a week, even if the goal was to protect the workers’ health and that of the public. The judges perverted the Due Process Clause of the 14th Amendment, which reads:
    [Nor] shall any State deprive any person of life, liberty, or property, without due process of law
They invoked this clause of a civil rights Amendment to rule that the New York law interfered with an individual baker’s right to enter into a private contract. In his dissent, Holmes attacked the decision for applying the social Darwinism of Herbert Spencer (coiner of the phrase “survival of the fittest”) to the Constitution; rather pointedly, Holmes wrote
    The Fourteenth Amendment does not enact Mr. Herbert Spencer’s Social Statics.
Over time, the anti-labor aspects of this decision were undone by legislation but its influence on the discussion of “due process” continues. It has given rise to the verb “lochnerize” which is defined thusly by Wiktionary:
    To read one’s policy preferences into the Constitution, as was (allegedly) done by the U.S. Supreme Court in the 1905 case Lochner v. New York.
The parenthetical term “allegedly” presumably refers to Holmes’ critique. Two other contributions of Lochner to the English language are the noun “Lochnerism” and the phrase “The Lochner Era.”
In 1917, Congress passed the Espionage Act which penalized protests and actions that contested American participation in WWI. This law and its added amendments in the Sedition Act (1918) were powerful tools for suppressing dissent, something pursued quite vigorously by the Wilson administration. A challenge to the act followed quickly with Schenck v. United States (1919). The Court ruled in favor of the Espionage Act unanimously; Holmes wrote the opinion and created some oft cited turns of phrase:
    The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.
    The question … is whether the words used … create a clear and present danger that … will bring about the substantive evils that Congress has a right to prevent.
Holmes’ opinion notwithstanding, the constitutionality of the Espionage Act is still debated because of its infringement on free speech.
Schenck was then followed by another case involving the Espionage Act, Debs v. United States (1919). Eugene Debs was the union activist and socialist leader whom the Court had already ruled against in the Pullman case known as In re Debs (1895). Writing again for a unanimous court, Holmes invoked Schenck and ruled that the right of free speech did not protect protest against the military draft. Debs was sentenced to ten years in prison and disenfranchised; that did not prevent him from running for President in 1920 – he received over 900,000 votes, more than 3% of the total.
Debs was soon pardoned by President Warren G. Harding in 1921 and even invited to the White House! The passionate Harding apparently admired Debs and did not approve of the way he had been treated by Wilson, the Espionage Act and the Court; Harding famously held that “men in Congress say things worse than the utterances” for which Debs was convicted. In 1923, having just announced a campaign to eliminate the rampant corruption in Washington, Harding died most mysteriously in the Palace Hotel in San Francisco: vampire marks on his neck, no autopsy, hasty burial – suspects ranged from a Norwegian seaman to Al Capone hit men to Harding’s long-suffering wife. Harding was succeeded by Calvin Coolidge who is best remembered for his insight into the soul of the nation: “After all, the chief business of the American people is business.”
Although the Sedition Act amendments were repealed in 1921, the Espionage Act itself lumbers on. It has been used in more recent times against Daniel Ellsberg and Edward Snowden.
Today, Holmes is also remembered for his opinion in an anti-trust suit pitting a “third major league” against the established “big leagues.” The National League had been a profitable enterprise since 1876, with franchises stretching from Boston to St. Louis. At the top of the century, in 1901, the then rival American League was formed but the two leagues joined together in time for the first World Series in 1903. The upstart Federal League managed to field eight teams for the 1914 and 1915 seasons, but interference from the other leagues forced them to end operations. A suit charging the National and American leagues with violating the Sherman Anti-Trust Act was filed in 1915 and it was heard before Judge Kenesaw Mountain Landis – who, interestingly, was to become the righteous Commissioner of Baseball following the Black Sox Scandal of 1919. In Federal Court Landis dramatically slow-walked the Federal League’s case and the result was that different owners made various deals, some buying into National or American League teams and/or folding their teams into established teams; the exception was the owner of the Baltimore Terrapins franchise – the terrapin is a small turtle from Maryland, but the classic name for a Baltimore team is the Orioles; perhaps, the name still belonged to the New York Yankees organization since the old Baltimore Orioles, when dropped from the National League, joined the new American League in 1901 and then moved North to become the New York Highlanders in 1903, that name being changed to New York Yankees in 1913. Be that as it may, the league-less Terrapins continued to sue the major leagues for violating anti-trust law; this suit made its way to the Supreme Court as Federal Baseball Club v. National League.
In 1922, writing for a unanimous court in the Federal case, Holmes basically decreed that Major League Baseball was not a business enterprise engaged in interstate commerce; with Olympian authority, he wrote:
    The business [of the National League] is giving exhibitions of baseball. … the exhibition, although made for money, would not be called trade or commerce in the commonly accepted use of those words.
So, this opinion simply bypasses the Commerce Clause of the Constitution, which states that the Congress shall have power
    To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.
With this verdict, Major League Baseball, being a sport and not a business, was exempt from anti-trust regulations. This enabled “the lords of baseball” to continue to keep a player bound to the team that first signed that player; the mechanism for this was yet another clause, the “reserve clause,” which was attached to all the players’ contracts. The “reserve clause” also allowed a team (but not a player) to dissolve the player’s contract on 10 days’ notice. Obviously this had the effect of depressing player salaries. It also led to outrages such as Sal “The Barber” Maglie’s being blackballed for three seasons for having played in the Mexican League and such as the Los Angeles Dodgers’ treatment of Brooklyn great Carl “The Reading Rifle” Furillo. Interestingly, although both were truly star players, neither is “enshrined” in the Baseball Hall of Fame; Furillo, though, is featured in Roger Kahn’s classic The Boys of Summer and both Furillo and Maglie take the field in Doris Kearns Goodwin’s charming memoir Wait Till Next Year.
The Supreme Court judgment in Federal was mitigated by subsequent developments that were set in motion by All-Star outfielder Curt Flood’s courageous challenge to the “reserve clause” in the suit Flood v. Kuhn (1972); this case was decided against Flood by the Supreme Court in a 5-3 decision that was based on the precedent of the Federal ruling; in this case Justice Lewis Powell recused himself because he owned stock in Anheuser-Busch, the company that owned the St. Louis franchise – an honorable thing to do but something which exposes the potential class bias of the Court that might lie behind decisions favoring corporations and the powerful. Though Flood lost, there were vigorous dissents by Justices Marshall, Douglas and Brennan and his case rattled the system; the players union was then able to negotiate for free agency in 1975. However, because of the anti-trust exemption, Major League Baseball still has much more control over its domain than do other major sports leagues even though the NFL and the NCAA benefit from legislation exempting them too from some anti-trust regulations.
In 1933, when Franklin D. Roosevelt became President, the U.S. was more than three years into the Great Depression. With the New Deal, Congress moved quickly to enact legislation that would serve both to stimulate the economy and to improve working conditions. In the First Hundred Days of the Roosevelt presidency, the National Industrial Recovery Act (NIRA) and the Agricultural Adjustment Act (AAA) were passed. Both were then declared unconstitutional in whole or in part by the Court under Chief Justice Charles Evans Hughes: the case Schechter Poultry Corp. v. United States (1935) was brought by a Kosher poultry business in Brooklyn NY (for one thing, the NIRA regulations interfered with its traditional slaughter practices); with United States v. Butler (January 6, 1936) the government filed a case against a processor of cotton in Illinois who contested paying “processing and floor-stock taxes” to support subsidies for the planters of cotton. The first decision invoked the Commerce Clause of the Constitution; the second invoked the Taxing and Spending Clause which empowers the Federal Government to impose taxes.
In reaction, Roosevelt and his congressional allies put together a plan in 1937 “to pack the court” by adding six additional justices to its roster. The maneuver failed, treated with opprobrium by many. However, with the appointment of new justices to replace retiring justices, Roosevelt soon had a Court more to his liking and also by then the New Deal people had learned from experience not to push programs that were too clumsy to pass legal muster.
The core programs launched by the AAA were continued thanks to subsequent legislation that was upheld in later cases before the Court. The pro-labor part of the NIRA was rescued by the National Labor Relations Act (aka the Wagner Act) of 1935. The act, which protected labor unions, was sponsored by Prussian-born Senator Robert F. Wagner Sr.; his feckless son Robert Jr. was the Mayor of New York who enabled the city’s two National League baseball teams to depart for the West Coast in 1957 – two teams that between them had won the National League pennant every fall for the previous six seasons, two teams with stadium-filling heroes like Willie Mays and Sandy Koufax; moreover, the Dodgers would never have been able to treat fan-favorite Carl Furillo so shabbily had the team still been in Brooklyn: he would not have been dropped mid-season, which made him ineligible for the pension of a 15-year veteran, would not have had to go to court to obtain money still due him and would not have been blackballed from organized baseball.

The Dred Scott Decision

Early in its history, the U.S. Supreme Court applied judicial review to acts of Congress. First there was Hylton v. United States (1796) and then Marbury v. Madison (1803); with these cases the Court’s power to decide the constitutionality of a law was established – constitutional in the first case, unconstitutional in the second. But it would take over 50 years for the Court again to declare a law passed by Congress and signed by the President to be unconstitutional. Moreover, this fateful decision would push the North and South apart to the point of no return. From the time of the Declaration of Independence, the leadership of the country had navigated carefully to maintain a union of free and slave states; steering this course was delicate and full of cynical calculations. How did these orchestrated compromises keep the peace between North and South? Mystère.

During the Constitutional Convention (1787), to deal with states’ rights and with “the peculiar institution” of chattel slavery, two key arrangements were worked out. The Connecticut Compromise favored small states by according them the same number of Senators as the larger states; the Three-Fifths Compromise included 3/5ths of enslaved African Americans in a state’s population count for determining representation in the House of Representatives and, thus, in the Electoral College as well. The Electoral College itself was a compromise between those who wanted direct election of the President and those, like Madison and Hamilton, who wanted a buffer between the office and the people – it has worked well in that 5 times the system has put someone in the office of President who did not win the popular vote.

The compromise juggernaut began anew in 1820 with an act of Congress known as the Missouri Compromise which provided for Maine to enter the union as a free state and for Missouri to enter as a slave state. It also set the southern boundary of Missouri, 36° 30′, as the northern boundary for any further expansion of slavery; at this point in time, the only U.S. land west of the Mississippi River was the Louisiana Territory and so the act designated only areas of today’s Arkansas and Oklahoma as potential slave states; click HERE . (This landscape would change dramatically with the annexation of Texas in 1845 and the Mexican War of 1848.)

Then there was the Compromise Tariff of 1833 that staved off a threat to the Union known as the Nullification Crisis, a drama staged by John C. Calhoun of South Carolina, Andrew Jackson’s Vice-President at the time. The South wanted lower tariffs on finished goods and the North wanted lower tariffs on raw materials. Calhoun was a formidable and radical political thinker and is known as the “Marx of the Master Class.” His considerable fortune went to his daughter and then to his son-in-law Thomas Green Clemson, who in turn in 1888 left most of this estate to found Clemson University, which makes one wonder why Clemson did not name the university for his illustrious father-in-law.

As a result of the Mexican War, in 1848, Alta California became a U.S. territory. The area was already well developed with roads (e.g. El Camino Real), with cities (e.g. El Pueblo de la Reina de Los Angeles), with Jesuit prep schools (e.g. Santa Clara) and with a long pacified Native American population, herded together by the Spanish mission system. With the Gold Rush of 1849, the push for statehood became unstoppable. This led to the Compromise of 1850, which admitted California as a free state and which instituted a strict fugitive slave law designed to thwart Abolitionists and the Underground Railroad. Henry Clay of Kentucky was instrumental in all three of these nineteenth-century compromises which earned him the titles “the Great Compromiser” and “the Great Pacificator,” both of which school textbooks like to perpetuate. Clay, who ran for President three times, is also known for stating “I would rather be right than be President” which sounds so quaint in today’s world where “truth is not truth” and where facts can yield to “alternative facts.”

In 1854, the Missouri Compromise was modified by the Kansas-Nebraska Act, championed by Stephen Douglas, later Lincoln’s opponent for the Senate; this act applied “squatter sovereignty” to the territories of Kansas and Nebraska which were north of 36° 30′ – this meant that the settlers there would themselves decide whether to outlaw slavery or not. Violence soon broke out pitting free-staters (John Brown and his sons among them) against pro-slavery militias from neighboring Missouri, all of which led to the atrocities of “bleeding Kansas.”

But even the atrocities in Kansas did not overthrow the balance of power between North and South. So how did a Supreme Court decision in 1857 undo over 80 years of carefully orchestrated compromises between the North and South? Mystère.

In 1831, Dred Scott, a slave in Missouri, was sold to Dr. John Emerson, a surgeon in the U.S. army. Emerson took Scott with him as he spent several years in the free state of Illinois and in the Wisconsin Territory, where slavery was outlawed by the Northwest Ordinance of 1787 and by the Missouri Compromise itself. When in the Wisconsin Territory, Scott married Harriet Robinson, also a slave; the ceremony was performed by a Justice of the Peace. Logically, this meant that they were no longer considered slaves: in the U.S. at that time, slaves were prohibited from marrying because they could not enter into a legal contract; legal marriage, on the other hand, has been the basis of the transmission of property and the accumulation of wealth and capital since ancient Rome.

Some years later, back in Missouri, with help from an abolitionist pastor and others, Scott sued for his freedom on the grounds that his stay in free territory was tantamount to manumission; this long process began in 1846. For an image of Dred Scott, the plaintiff and the individual, click HERE .

Previous cases of this kind had been decided in the petitioner’s favor; but, due to legal technicalities and such, this case reached the Missouri Supreme Court where the ruling went against Scott; from there it went to the U.S. Supreme Court.

John Marshall was the fourth Chief Justice and served in that capacity from 1801 to 1835. His successor, appointed by Andrew Jackson, was Roger Taney (pronounced “Tawny”) of Maryland. Taney was a Jackson loyalist and also a Roman Catholic, the first but far from the last Catholic to serve on the Court.

In 1857, the Court declared the Missouri Compromise to be flat-out unconstitutional in the most egregious ruling in its history, the Dred Scott Decision. This was the first time since Marbury that a federal law was declared unconstitutional: in the 7-2 decision penned by Taney himself, the Chief Justice asserted that the federal government had no authority to control slavery in territories acquired after the creation of the U.S. as a nation, meaning all the land west of the Mississippi. Though not a matter before the Court, Taney ruled that even free African Americans could not be U.S. citizens and drove his point home with painful racist rhetoric that former slaves and their descendants “had no rights which the white man was bound to respect.” The Dred Scott Decision drove the country straight towards civil war.

Scott himself soon gained his freedom thanks to a member of a family who had supported his case. But sadly he died from tuberculosis in 1858 in St. Louis. Scott and his wife Harriet have been honored with a plaque on the St. Louis Walk of Fame along with Charles Lindbergh, Chuck Berry and Stan Musial; in Jefferson City, there is a bronze bust of Scott in the Hall of Famous Missourians, along with Scott Joplin, Walt Disney, Walter Cronkite, and Rush Limbaugh making for some strange bedfellows.

President James Buchanan did approve of the Taney decision, however, thinking it put the slavery question to rest. This is certainly part of the reason Buchanan used to be rated as the worst president in U.S. history. It is also thought by historians that Buchanan illegally consulted with Taney before the decision came down, perhaps securing Buchanan’s place in the rankings for the near future despite potential new competition in this arena.

The Dred Scott Decision wrecked the reputation of the Court for years – how blinded by legalisms could justices be not to realize what their rulings actually said! Charles Evans Hughes, Chief Justice from 1930 to 1941 and foe of FDR and his New Deal, lamented that the Dred Scott Decision was the worst example of the Court’s “self-inflicted wounds.” The Court did recover in time, however, to return to the practice of debatable, controversial decisions.

To start, in 1873, it gutted the 14th Amendment’s protection of civil rights by its 5-4 decision in the Slaughterhouse Cases, a combined case from New Orleans where a monopoly over slaughter houses had been set up by the State Legislature. The decision seriously weakened the “privileges and immunities” clause of the Amendment:

    No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States.

There is a subtlety here: the U.S. Constitution’s Bill of Rights protects the citizen from abuse by the Federal Government; it is left to each state to have and to enforce its own protections of its citizens from abuse by the state itself. This clause was designed to protect the civil liberties of the nation’s new African-American citizens in the former slave states. The damage done by this decision would not be undone until the great civil rights cases of the next century.

In the Civil Rights Cases of 1883, the Supreme Court declared the Civil Rights Act of 1875 to be unconstitutional, thereby authorizing racial discrimination by businesses and setting the stage for Jim Crow legislation in former Confederate states and border states such as Maryland, Missouri and Kentucky – thus undoing the whole point of Reconstruction.

On the other hand, sandwiched around the Civil Rights Cases, were some notable decisions that supported civil rights such as Strauder v. West Virginia (1880), Boyd v. United States (1886) and Yick Wo v. Hopkins (1886). The first case was brought to the Court by an African American and the third of these was brought by a Chinese American.

Somewhat later, during the “Gay Nineties,” the Supreme Court laid down some fresh controversial decisions to usher the U.S. into the new century. In 1895, with In re Debs, the Court upheld the government’s use of an injunction and federal troops to end the 1894 strike against the Pullman Company. As a practical matter, this unanimous decision curbed the growing power of labor unions and for the next forty years, “big business” would use court injunctions to suppress strikes. The “Debs” in this case was Eugene Debs, then the head of the American Railway Union; later the Court would uphold the Sedition Act of 1918 to rule against Debs in a case involving his speaking out against American entry into WWI.

It is interesting to note that this time, with In re Debs, the Court did not follow papal guidelines as it had with the Discovery Doctrine in Johnson v. McIntosh and other cases under John Marshall. This despite the fact that it had been handed an opportunity to do so. In his 1891 encyclical Rerum Novarum (“Of New Things”), Pope Leo XIII had come out in favor of labor unions; this was done at the urging of American cardinals and bishops who, at that time, were a progressive force in the Catholic Church. Without the urging of the U.S. hierarchy, the pope would likely have condemned unions as secret societies lumped in with the Masons and Rosicrucians.

As the turn of the century approached, the Court’s ruling in Plessy v. Ferguson (1896) upheld racial segregation on railroads, in schools and in other public facilities with the tag line “separate but equal.” Considered one of the very worst of the Court’s decisions, it legalized racial segregation for another seventy years. In fact, it has never actually been overturned. The celebrated 1954 case Brown v. Board of Education only ruled against it in the case of schools and educational institutions – one of the clever legal arguments the NAACP made was that, for Law Schools and Medical Schools, “separate but equal” was impossible to implement. Subsequent decisions weakened Plessy further but technically it is still “on the books.” The case itself dealt with “separate but equal” cars for railway passengers; one of the contributions of the Civil Rights movement to the economy of the New South is that it obviated the need for “separate but equal” subway cars and made modern transportation systems possible in Atlanta and other cities.

The number and range of landmark Supreme Court decisions expanded greatly in the twentieth century and that momentum continues to this day. We have drifted further and further from the view of Montesquieu and Hamilton that the Judiciary should be a junior partner next to the legislative and executive branches of government. The feared gouvernement des juges is upon us.

The Discovery Doctrine

John Marshall, the Federalist from Virginia and legendary fourth Chief Justice of the Supreme Court, is celebrated today for his impact on the U.S. form of government. To start, there is the decision Marbury v. Madison in 1803. In this ruling, the Court set a far-reaching precedent by declaring a law passed by Congress and signed by the President to be inconsistent with the Constitution – which at that point in time was a document six and a half pages long, its twelve amendments included. However, his Court laid down no other rulings of unconstitutionality of federal laws. So what sort of other stratagems did John Marshall resort to in order to leave his mark? Mystère.
One way the Marshall Court displayed its power was by means of three important cases involving the status and rights of Native Americans. The logic behind the first of these, Johnson v. McIntosh, is astonishing and the case is used in law schools today as a classic example of a bad decision. The basis of this unanimous decision, written by Marshall himself, is a doctrine so medieval, so racist, so Euro-centric, so intolerant, so violent as to beggar belief. Yet it is so buried in the record that few are even remotely aware of it today. It is called the Doctrine of Christian Discovery or just the Discovery Doctrine.
Simply put, this doctrine states that a Christian nation has the right to take possession of any territory whose people are not Christians.
The term “Discovery” refers to the fact that the European voyages of discovery (initially out of Portugal and Spain) opened the coast of Africa and then the Americas to European takeovers.
All this marauding was justified (even ordered) by edicts issued by popes written for Christian monarchs.
In his bull (the term for one of these edicts) entitled Romanus Pontifex (1452), Pope Nicholas V, in a burst of Crusader spirit, ordered the Portuguese King Alfonso V to “capture, vanquish, and subdue the Saracens, pagans, and other enemies of Christ,” to “put them into perpetual slavery,” and “to take all their possessions and property.” Columbus himself sailed with instructions to take possession of lands not ruled by Christian leaders. Pope Alexander VI was the quintessential Renaissance pope, famous among other things for making nepotism something of a science – he was the father of Lucrezia Borgia (the passionate femme fatale of paintings, books and films, click HERE ) and of Cesare Borgia (the model for Machiavelli’s prince, click HERE ). In his Bulls of Donation of 1493, Alexander extended to Spain the right and duty to take sovereignty over all non-Christian territories “discovered” by its explorers and conquistadors; and then, on behalf of Spain and Portugal, with the Line of Demarcation, Alexander divided the globe into two zones, one for each to subjugate.
Not to be left behind, a century or so later, when England and Holland undertook their own voyages of discovery and colonization, they adopted the Discovery Doctrine for themselves despite the Protestant Reformation; France did as well. What is more, after Independence, the Americans “inherited” this privilege; indeed, in 1792, U.S. Secretary of State Thomas Jefferson declared that the Discovery Doctrine would pass from Europe to the newly created U.S. government – interesting that Jefferson, deist that he was, would resort to Christian privilege to further U.S. interests! In American hands, the Discovery Doctrine also gave rise to doctrines like Manifest Destiny and American Exceptionalism.
The emphasis on enslavement in Romanus Pontifex is dramatic. The bull was followed by Portuguese incursion into Africa and Portuguese involvement in the African slave trade, till then a Muslim monopoly. In the 1500’s, African slavery became the norm in New Spain and in New Portugal. In August 1619, when the Jamestown colony was only 12 years old, a ship that the Dutch had captured from Portuguese slavers reached the English settlement and Africans were traded for provisions – one simple application of the Discovery Doctrine, one fateful day for the U.S.
Papal exhortations to war were not new in 1452. A bull of Innocent III in 1208 instigated a civil war in France, the horrific Albigensian Crusade. Earlier, in 1155 the English conquest of Ireland was launched by a bull of Pope Adrian IV (the only English pope no less); this conquest has proved long and bloody and has created issues still unresolved today. And even earlier there was the cry “God Wills It” (“Deus Vult”) of Pope Urban II and the First Crusade.
Hopping forward to the U.S. of 1823, in Johnson v. McIntosh, the plaintiff group, referred to as “Johnson,” claimed that a purchase of land from Native Americans in Indiana was valid although the defendant McIntosh for his part had a claim to overlapping land from a federal land grant (federal would prove key). An earlier lower court had dismissed the Johnson claim. Now (switching to the historical present) John Marshall, writing for a unanimous court, reaffirms the lower court’s dismissal of the Johnson suit. But that isn’t enough. After a lengthy discussion of the history of the European voyages of discovery in the Americas, Marshall focuses on the manner in which each European power acquired land from the indigenous occupants. He outlines the Discovery Doctrine and how a European power gains sovereignty over land its explorers “discover”; he adds that the U.S. inherited this power from Great Britain and reaches the conclusion that only the Federal Government can obtain title to Native American land. Furthermore, he concludes that indigenous populations only retain the “right of occupancy” in their lands and that this right can still be dissolved by the Federal Government.
One of the immediate upshots of this decision was that only the Federal Government could purchase land from Native Americans. Going forward, this created a market with only one buyer; a monopoly is created when there is only one seller; a market like this one with only one buyer is called a monopsony, a situation which could work against Native American interests – for the pronunciation of monopsony, click HERE . To counter the efforts of the Apple Computer company to muddy the waters, there’s just “one more thing”: the national apple of Canada, the name of the defendant in this case and the name of the inventor of the stylish raincoat are all written “McIntosh” and not “Macintosh.” (“Mc” is the medieval scribes’ abbreviation of “Mac” the Gaelic patronymic of Ireland and Scotland; other variants include “M’c”, “M'” and “Mc” with the “c” raised with two dots or a line underneath it.)
The decision in Johnson formalized the argument made by Jefferson that the Discovery Doctrine applied to relations between the U.S. government and Native Americans. This doctrine is still regularly cited in federal cases and only recently the Discovery Doctrine was invoked by none other than Justice Ruth Bader Ginsburg writing for the majority in City of Sherrill v. Oneida Indian Nation of New York (2005), a decision which ruled against Oneida claims to sovereignty over once tribal lands that the Oneida had managed to re-acquire!
What has happened here with Johnson is that John Marshall made the Discovery Doctrine part of the law of the land thanks to the common law reliance on precedent. A similar thing happens when a ruling draws on the natural law of Christian theology, a practice known as “natural law jurisprudence.” In effect, in both scenarios, the Court is making law in the sense of legislation as well as in the sense of a judicial ruling.
A few years after Johnson, in response to the state of Georgia’s campaign to badger the Cherokee Nation and drive them off their lands, the Cherokee asked the Supreme Court for an injunction to put a stop to the state’s practices. The case Cherokee Nation v. Georgia (1831) was dismissed by the Court on a technicality drawn from its previous decision – the Cherokee, not being a foreign nation but rather a “ward to its guardian” the Federal Government, did not have standing to sue before the Court; thereby adding injury to the insult that was Johnson.
The next year Marshall actually made a ruling in favor of the Cherokee nation in Worcester v. Georgia (1832) which laid the foundation for tribal sovereignty over their lands. However, this was not enough to stop Andrew Jackson from carrying out the removal of the Cherokee from Georgia in the infamous Trail of Tears. In fact, confronted with Marshall’s decision, Jackson is reported to have said “Let him enforce it.”
The U.S. is not the only country to use the Discovery Doctrine. In the English speaking world, it has been employed in Australia, New Zealand and elsewhere. In the Dutch speaking world, it was used as recently as 1975 with the accession of Suriname to independence, where it is the basis for the rights (or lack of same) of indigenous peoples. Even more recently, in 2007, the Russian Federation invoked it when placing its flag on the floor of the Arctic Ocean to claim oil and gas reserves there. Interesting that Orthodox Christians would honor papal directives once it was in their economic interest – reminiscent of Jefferson.
In addition to Marbury and the cases dealing with Native Americans, there are several other Marshall Court decisions that are accorded “landmark” status today such as McCulloch v. Maryland (1819), Cohens v. Virginia (1821) and Gibbons v. Ogden (1824) – all of which established the primacy of federal law and authority over the states. This consistent assertion of federal authority is the signature achievement of John Marshall.
Marshall’s term of 34 years is the longest for a Chief Justice. While his Court did declare state laws unconstitutional, for the Supreme Court to declare another federal law unconstitutional would take over half a century after Marbury. This would be the case that plunged the country into civil war. Affaire à suivre. More to come.

Marbury v. Madison

The Baron de Montesquieu and James Madison believed in the importance of the separation of powers among the executive, legislative and judicial branches of government. However, their view was that the third branch would not have power equal to that of either of the first two but enough so that no one branch would overpower the other two. In America today, things have shifted since 1789 when the Constitution became the law of the land: the legislative branch stands humbled by the reach of executive power and thwarted by endless interference on the part of the judiciary.
The dramatically expanded role of the executive can be traced to the changes the country has gone through since 1789 and the quasi-imperial military and economic role it plays in the world today.
The dramatically increased power of the judiciary is largely due to judicial review:
(a) the practice whereby a court can interpret the text of the Constitution itself or of a law passed by the Congress and signed by the President and tell us what the law “really” means, and
(b) the practice whereby a court can declare a law voted on by the Congress and signed by the President to be unconstitutional.
In fact, the term “unconstitutional” now so alarms the soul that it is even the title of Colin Quinn’s latest one-man show.
Things are very different in other countries. In the U.K., the Parliament is sovereign and its laws mean what Parliament says they mean. In France, in the Constitution of the Fifth Republic (1958), the reach of the Conseil Constitutionnel is very limited: Charles de Gaulle in particular was wary of the country’s falling into a gouvernement des juges – this last expression being a pejorative term for a situation like that in the U.S. today where judges have power not seen since the time of Gideon and Samuel of the Hebrew Bible.
The U.S. legal system is based on the Norman French system (hence trial by a jury of one’s peers, “voir dire” and “oyez, oyez”) and its evolution into the British system of common law (“stare decisis” and the doctrine of precedent). So why is the U.S. so different from countries with which it has so much in common in terms of legal culture? How did this come about? In particular, where does the power to declare laws to be unconstitutional come from? Mystère.
A famous early example of Judicial Review occurred in Jacobean England about the time of the Jamestown settlement and about the time the King James Bible was finished. In 1610, in a contorted dispute known as Dr. Bonham’s Case over the right to practice medicine, Justice Edward Coke opined in his decision that “in many cases, the common law will control Acts of Parliament.” This was not well received, Coke lost his job and Parliamentary Sovereignty became established in England. Picking himself up, Coke went on to write his Institutes of the Lawes of England which became a foundational text for the American legal system and which is often cited in Supreme Court decisions, an example being no less a case than Roe v. Wade.
Another English jurist who had a great influence on the American colonists in the 18th century was Sir William Blackstone. His authoritative Commentaries on the Laws of England of 1765 became the standard reference on the Common Law, and in this opus, parliamentary sovereignty is unquestioned. The list of subscribers to the first edition of the Commentaries included future Chief Justices John Jay and John Marshall and even today the Commentaries are cited in Supreme Court decisions between 10 and 12 times a year. Blackstone had his detractors, however: Alexis de Tocqueville described him as “an inferior writer, without liberality of mind or depth of judgment.”
Blackstone notwithstanding, judicial review naturally appealed to the colonists: they were the target of laws enacted by a parliament where they had no representation; turning to the courts was the only recourse they had. Indeed, a famous and stirring call for the courts to overturn an act of the British Parliament was made by James Otis of Massachusetts in 1761. The Parliament had just renewed the hated writs of assistance and Otis argued (brilliantly it is said) that the writs violated the colonists’ natural rights and that any act of Parliament that took away those rights was invalid. Still, the court decided in favor of Parliament. Otis’ appeal to natural rights harkens back to Coke and Blackstone and to the natural law concept that was developed in the late Middle Ages by Thomas Aquinas and other scholastic philosophers. Appeal to natural law is “natural” when working in common law systems where there is no written text to fall back on; it is dangerous, however, in that it tugs at judges’ religious and emotional sensibilities.
Judicial review more generally emerged within the U.S. in the period under the Articles of Confederation where each state had its own constitution and legal system. By 1787, state courts in 7 of the 13 states had declared laws enacted by the state legislatures to be invalid.
A famous example of this took place in Massachusetts where slavery was still legal when the state constitution went into effect. Subsequently, in a series of cases known collectively as the Quock Walker Case, the state supreme court applied judicial review to overturn state law as unconstitutional and to abolish slavery in Massachusetts in 1783.
As another example at the state level before the Constitution, in New York the state constitution provided for a Council of Revision which applied judicial review to all bills before they could become law; however, a negative decision by the Council could be overturned by a 2/3 majority vote in both houses of the state legislature.
In 1784 in New York, in the Rutgers v. Waddington case, Alexander Hamilton, taking a star turn, argued that a New York State law known as the Trespass Act, which was aimed at punishing Tories who had stayed loyal to the Crown during the Revolutionary War, was invalid. Hamilton’s argument was that the act violated terms of the Treaty of Paris of 1783; this treaty put an end to the Revolutionary War and in its Articles VI and VII addressed the Tories’ right to their property. Clearly Hamilton wanted to establish that federal treaties overruled state law, but he may well also have wanted to keep Tories and their money in New York. Indeed, the British were setting up the English-speaking Ontario Province in Canada to receive such émigrés, including a settlement on Lake Ontario alluringly named York – which later took back its original Native Canadian name Toronto. For a picture of life in Toronto in the old days, click HERE .
The role of judicial review came up in various ways at the Constitutional Convention of 1787. For example, with the Virginia Plan, Madison wanted there to be a group of judges to assist the president in deciding to veto a bill or not, much like the New York State Council of Revision – and here too this could be overturned by a supermajority vote in Congress. The Virginia Plan was not adopted; many at the Convention saw no need for an explicit inclusion of judicial review in the final text but they did expect the courts to be able to exercise constitutional review. For example, Elbridge Gerry of Massachusetts (and later of gerrymander fame) said federal judges “would have a sufficient check against encroachments on their own department by their exposition of the laws, which involved a power of deciding on their constitutionality.” Luther Martin of Maryland (though born in New Jersey) added that as “to the constitutionality of laws, that point will come before the judges in their official character…. “ For his part, Martin found that the Constitution as drawn up made for too strong a central government and opposed its ratification.
The Federalist Papers were newspaper articles and essays written by the founding fathers John Jay, James Madison and Alexander Hamilton, all using the pseudonym “Publius,” a tip of the hat to Publius Valerius Publicola, a founder of the Roman Republic – though the name also calls to mind the great Roman historian Publius Cornelius Tacitus; for the relevance of Tacitus across time, try Tacitus by historian Ronald Mellor. A group formed around Hamilton and Jay giving birth to the Federalist Party, the first national political party – it stood for a strong central government run by an economic elite; this quickly gave rise to an opposition group, the Democratic-Republicans (Jefferson, Burr, …), and the party system was born, somewhat to the surprise of those who had written the Constitution. Though in the end judicial review was left out of the Constitution, right after the Convention the need for it was brought up again in the Federalist Papers: in June 1788 Hamilton, already a star, published Federalist 78 in which he argued for the need for judicial review of the constitutionality of legislation as a check on abuse of power by the Congress. In this piece, he also invokes Montesquieu on the relatively smaller role the judiciary should have in government compared to the other two.
Fast forward two centuries: the Federalist Society is a political gate-keeper which was founded in 1982 to increase the number of right-leaning judges on the federal courts. Its founders included such high-profile legal thinkers as Robert Bork (whose own nomination to the Supreme Court was so dramatically scuttled by fierce opposition to him that it led to a coinage, the verb “to bork”). The Society regrouped and since then members Antonin Scalia, John G. Roberts, Clarence Thomas, Samuel Alito and Neil Gorsuch have acceded to the Supreme Court itself. (By the way, Federalist 78 is one of their guiding documents.)
Back to 1788: Here is what Section 1 of Article III of the Constitution states:
    The judicial Power of the United States shall be vested in one Supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish.
Section 2 of Article III spells out the courts’ purview:
    The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority;—to all Cases affecting Ambassadors, other public Ministers and Consuls;—to all Cases of admiralty and maritime Jurisdiction;—to Controversies to which the United States shall be a Party;—to Controversies between two or more States;—between a State and Citizens of another State;—between Citizens of different States;—between Citizens of the same State claiming Lands under Grants of different States, and between a State, or the Citizens thereof, and foreign States, Citizens or Subjects.
So while Article III lays out responsibilities for the court system, it does not say the courts have the power to review the work of the two other branches of government nor call any of it unconstitutional.
Clause II of Article 6 of the Constitution is known as the Supremacy Clause and states that federal law overrides state law. In particular, this would imply that a federal court could nullify a law passed by a state. But, again, it does not allow for the courts to review federal law.
So there is no authorization of judicial review in the U.S. Constitution. However, given the precedents from the state courts and the positions of Madison, Gerry, Martin, Hamilton et al., it is as though lines from the Federalist 78 such as these were slipped into the Constitution while everyone was looking:
  The interpretation of the laws is the proper and peculiar province of the courts. A constitution is, in fact, and must be regarded by the judges as, a fundamental law. It therefore belongs to them to ascertain its meaning, as well as the meaning of any particular act proceeding from the legislative body. … If there should happen to be an irreconcilable variance between the [Constitution and an act of the legislature], the Constitution ought to be preferred to the statute.
The Supreme Court of the United States (SCOTUS) and the federal court system were created straightaway by the Judiciary Act passed by Congress and signed by President George Washington in 1789.
The first “big” case adjudicated by the Supreme Court was Chisholm v. Georgia (1793). Here the Court ruled in favor of the plaintiff Alexander Chisholm and against the State of Georgia, implicitly ruling that nothing in the Constitution prevented Chisholm from suing the state in federal court. This immediately led to an outcry amongst the states and to the 11th Amendment which precludes a state’s being sued in federal court without that state’s consent. So here the Constitution was itself amended to trump a Court decision.
The precedent for explicit judicial review was set in 1796 in the case Hylton v. United States: this was the first time that the Court ruled on the constitutionality of a law passed by Congress and signed by the President. It involved the Carriage Act of 1794 which placed a yearly tax of $16 on horse-drawn carriages owned by individuals or businesses. Hylton asserted that this kind of tax violated the powers of federal taxation as laid out in the Constitution while Alexander Hamilton, back in the spotlight, pled the government’s case that the tax was consistent with the Constitution. Chief Justice Oliver Ellsworth and his Court decided in favor of the government, thus affirming the constitutionality of a federal law for the first time; by making this ruling, the Court claimed for itself the authority to determine the constitutionality of a law, a power not provided for in the Constitution but one assumed to come with the territory. This verdict held sway for nearly a century; it was overturned in 1895 (Pollock v. Farmers’ Loan and Trust) and then reaffirmed after the passage of the 16th Amendment which authorized taxes on income and personal property.
Section 13 of the Judiciary Act of 1789 mandated SCOTUS to order the government to do something specific for a plaintiff if the government is obliged to do so according to law but has failed to do so. In technical terms, the court would issue a writ of mandamus ordering the government to act – mandamus, meaning “we command it,” is derived from the Latin verb mandare.
John Marshall, a Federalist from Virginia, was President John Adams’s Secretary of State. When Jefferson, and not Adams, won the election of 1800, Adams hurried to make federal appointments ahead of Jefferson’s inauguration that coming March; these were the notorious “midnight appointments.” Among them was the appointment of John Marshall himself to the post of Chief Justice of the Supreme Court. Another was the appointment of William Marbury to a judgeship in the District of Columbia. It was Marshall’s job while he was still Secretary of State to prepare and deliver the paperwork and official certifications for these appointments. He failed to accomplish this in time for Marbury and some others; when Jefferson took office he instructed his Secretary of State, James Madison, not to complete the unfinished certifications.
In the “landmark” case Marbury v. Madison (1803), William Marbury petitioned the Court under Section 13 of the Judiciary Act to order the Secretary of State, James Madison, to issue the commission for Marbury to serve as Justice of the Peace in the District of Columbia, the certification being still unfinished thanks to John Marshall, now the Chief Justice. In a legalistic tour de force, the Court affirmed that Marbury was right and that his commission should be issued but ruled against him. John Marshall and his judges declared Section 13 of the Judiciary Act unconstitutional because it would (according to the Court) enlarge the authority of the Court beyond that permitted by the Constitution.
Let’s try to analyze the logic of this decision: put paradoxically, the Court could exercise a power not given to it in the Constitution to rule that it could not exercise a power not given to it in the Constitution. Put ironically, it ascribed to itself the power to be powerless. Put dramatically, Marbury, himself not a lawyer, might well have cheered on Dick the Butcher who has the line “let’s kill all the lawyers” in Henry VI, Part 2 – but all this business is less like Shakespeare and more like Aristophanes.
Declaring federal laws unconstitutional did not turn into a habit in the 19th century. The Marshall court itself did not declare any other federal laws to be unconstitutional but it did find so in cases involving state laws. For example, Luther Martin was on the losing side in McCulloch v. Maryland (1819) when the Court declared a Maryland state law levying a tax on a federally authorized national bank to be unconstitutional.
The story doesn’t end there.
Although another law wouldn’t be ruled unconstitutional by the Supreme Court until 1857, the two plus centuries since Marbury would see a dramatic surge in judicial review and outbreaks of judicial activism on the part of courts both left-wing and right-wing. There is a worrisome tilt toward increasing judicial power: “worrisome” because things might not stop there; the 2nd Book of Samuel (the last Judge of the Israelites) is followed by the 1st Book of Kings, something to do with the need to improve national defense. Affaire à suivre. More to come.

Voting Arithmetic

Voting is simple when there are only two candidates and lots of voters: people vote and one of the candidates will simply get more votes than the other (except in the very unusual case of a tie when a coin is needed). In other words, the candidate who gets a majority of the votes wins; naturally, this is called majority voting. Things start to get complicated when there are more than two candidates and a majority of the votes is required to elect a candidate; in this case, multiple ballots and lots of horse trading can be required until a winner emerges.

The U.S. has continued many practices inherited from England such as the system of common law. One of the most important of these practices is the way elections are run. In the U.K. and the U.S., elections are decided (with some exceptions) by plurality: the candidate who polls the largest number of votes is the winner. Thus if there are 3 candidates and one gets 40% of the votes while the other two get 30%, the one with 40% wins – there is no runoff. This is called plurality voting in the U.S. and relative majority voting in the U.K.

The advantages of plurality voting are that it is easy to tabulate and that it avoids multiple ballots. The glaring disadvantage is that it doesn’t provide for small party representation and leads to two party dominance over the political system – a vote for a third party candidate is a vote thrown away and counts for nothing. As a random example, look at the presidential election in the Great State of Florida in 2000: everything would have been the same if those who voted for Ralph Nader had simply stayed home and not voted at all. As a result, third parties never get very far in the U.S. and if they do pick up some momentum, it is quickly dissipated. In England it is similar, with Labour and the Conservatives being the dominant parties since the 1920s. Is this handled differently elsewhere? Mystère.

To address this problem of third party representation, countries like Italy and Holland use proportional voting for parliamentary elections. For example, suppose there are 100 seats in parliament, that the 3 parties A, B and C propose lists of candidates and that party A gets 40% of the votes cast and B and C each get 30%; then A is awarded 40 seats in parliament while B and C each get 30 seats. With this system, more than two parties will be represented in parliament; if no one party has a majority of the seats, then a coalition government will be formed.
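
For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of one common way to turn vote totals into seats, the “largest remainder” rule; the text above does not prescribe any particular apportionment formula, so the function and party names below are illustrative only, not the statutory method of Italy, Holland or anywhere else.

    # "Largest remainder" apportionment: give each party the whole number of
    # quotas it has earned, then hand any leftover seats to the parties with
    # the largest fractional remainders. Illustrative only.

    def largest_remainder(votes, seats):
        total = sum(votes.values())
        quota = total / seats                      # votes "worth" one seat
        allocation = {p: int(v // quota) for p, v in votes.items()}
        leftover = seats - sum(allocation.values())
        by_remainder = sorted(votes, key=lambda p: votes[p] % quota, reverse=True)
        for party in by_remainder[:leftover]:
            allocation[party] += 1
        return allocation

    # The example from the text: A polls 40% of the vote, B and C 30% each,
    # and there are 100 seats to fill.
    print(largest_remainder({"A": 40, "B": 30, "C": 30}, 100))
    # {'A': 40, 'B': 30, 'C': 30}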

Another technique used abroad is to have majority voting with a single runoff election. Suppose there are 4 parties (A,B,C,D) with candidates for president; a first election is held and the two top vote getters (say, A and B) face off in a runoff a week or two later. During this time, C and D can enter into a coalition with A or B and their voters will now vote for the coalition partner. So those minority votes in the first round live to fight another day. Also that president, once elected, now represents a coalition which mitigates the kind of extreme Manichaeism of today’s U.S. system. It can lead to some strange bedfellows, though. In the 2002 presidential election in France, to stop the far-right National Front candidate Jean-Marie LePen from winning the second round, the left wing parties had to back their long time right wing nemesis Jacques Chirac.
However one might criticize and find fault with these countries and their election systems, the fact is that voter participation there is far higher than in the U.S.

For U.S. presidential elections, what with the Electoral College and all that, “it’s complicated.” For Hamilton, Madison and others, the Electoral College would serve as an additional buffer between the masses and the government: one way this was to be achieved was by means of the “faithless elector,” one who does not vote for the candidate he pledged to – this stratagem would overturn a mass vote for a potential despot. This was considered a feature and not a bug; this feature is still in force and some pledged electors do employ it – in the 2016 election, seven electors voted against their pledged candidates, two against Trump and five against Clinton. But, except for faithless electors, how else could the Electoral College stymie the will of the people? Mystère.

That the Electoral College can indeed serve as a buffer between the presidency and the population has been proven by four elections (1876, 1888, 2000, 2016) where the Democratic candidate carried the popular vote but the Republican candidate obtained a majority in the Electoral College; most scandalously, in the 1876 election, in a backroom deal, 20 disputed electoral votes were awarded to the Republican candidate Rutherford B. Hayes to give him a majority of 1 vote in exchange for the end of Reconstruction in the South – “probably the low point in our republic’s history” to cite Gore Vidal.

That the Electoral College can indeed serve as a buffer between the presidency and the population has also been proven by the elections of 1800 and 1824 where no candidate had a majority of the electoral vote; in this case, the Constitution specifies that the election is to be decided in the House of Representatives with each state having one vote. In 1824, the populist candidate, Andrew Jackson, won a plurality both of the popular vote and the electoral vote, but on the first ballot a majority of the state delegations, cajoled by Henry Clay, voted for the establishment candidate John Quincy Adams. In the election of 1800, Jefferson and Burr were the top electoral vote getters with 73 votes each. Jefferson won a majority in the House on the 36th ballot, his victory engineered by Hamilton who disliked Jefferson but loathed Burr – we know how this story will end, unfortunately.

For conspiracy theorists, it is worth pointing out that not only were all four candidates who won the popular vote but lost the electoral vote Democrats but that three of the four were from New York State as was Aaron Burr.

The most obvious shortcoming of the Electoral College system is that it is a form of gerrymandering that gives too much power and representation to rural states at the expense of large urban states; in English terms, it creates “rotten boroughs.” For example, using 2018 figures, California has 55 electoral votes for 39,776,830 people and Wyoming has 3 votes for 573,720; so, if one does the math, 1 vote for president in Wyoming is worth 3.78 votes in California. Backing up, let us “show our work.” When we solved this kind of problem in elementary school, we used the rule, the “product of the means is equal to the product of the extremes”; thus, using the camel case dear to programmers, we start with the proportion

     votesWyHas : itsPopulation :: votesCaShouldHave : itsPopulation

where : is read “is to” and “::” is read “as.” Three of the four terms have known values and so the proportion becomes

     3 : 573,720 :: votesCaShouldHave : 39,776,830

The above rule says that the product of the inner two terms is equal to that of the outer two terms. The term votesCaShouldHave is the unknown so let us call it x; let us apply the rule, using * as the symbol for multiplication, and solve the following equation:

     3 * 39,776,830 = 573,720 * x

which yields the number of electors California would have, were it to have as many electors per person as Wyoming does; this simplifies to

     x = (3 * 39,776,830)/ 573,720 = 207.99

So California would have to have 207.99 electors to make things fair; dividing this figure by 55, we find that 1 vote in Wyoming is worth 207.99/55 = 3.78 votes in California. This is a most undemocratic formula for electing the President. But things can get worse. If the race is thrown into the House, since California has 69.33 times as many people as Wyoming, the ratio jumps to 1.0 to 69.33, making this the most undemocratic way of selecting a President imaginable. For state by state population figures, click  HERE .
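
For those who would rather check the rule of means and extremes with a few lines of Python, here is the same arithmetic, along with the variant discussed in the next paragraph in which the two Senate-based electors are dropped; the population figures are the 2018 numbers quoted above, and the little helper function is of course just an illustration, not part of any official tally.

    # Wyoming versus California, using the 2018 figures quoted in the text.

    wy_electors, wy_pop = 3, 573_720
    ca_electors, ca_pop = 55, 39_776_830

    def votes_per_person(electors, population):
        return electors / population

    # How many California votes is one Wyoming vote worth?
    print(round(votes_per_person(wy_electors, wy_pop) /
                votes_per_person(ca_electors, ca_pop), 2))       # 3.78

    # The variant in the next paragraph: drop the 2 Senate-based electors
    # from each state, leaving Wyoming 1 elector and California 53.
    print(round(votes_per_person(1, wy_pop) /
                votes_per_person(53, ca_pop), 2))                # about 1.31

    # If the race is thrown into the House, each state casts a single vote.
    print(round(ca_pop / wy_pop, 2))                             # 69.33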

One simple way to mitigate the undemocratic nature of the Electoral College would be to eliminate the 2 electoral votes that correspond to each state’s 2 senators. This would change the Wyoming/California ratio and make 1 vote in the Equality State worth only about 1.31 votes in the Golden State. With this counting technique, Trump would still have won the 2016 presidential election 236 to 195 (much less of a “massive landslide” than 304 to 227, the official tally) but Al Gore would have won the 2000 race, 228 to 209, even without Florida (as opposed to losing 266 to 271).

To tally the Electoral College vote, most states assign all their votes (via the “faithful” pledged electors) to the plurality winner for president in that state’s presidential tally. Nebraska and Maine, the two exceptions, use the congressional district method which assigns the two votes that correspond to Senate seats to the overall plurality winner and one electoral vote to the plurality winner in each congressional district in the state. By way of example, in an election with 3 candidates, suppose a state has 3 representatives (so 5 electoral votes) and that one candidate obtains 50% of the total vote and the other two 25% each; then if each candidate is the plurality winner in the vote from exactly one congressional district, the top vote-getter is assigned the 2 votes for the state’s senators plus 1 vote for the congressional district he or she has won and the other two candidates receive 1 electoral vote each. This system yields a more representative result but note that gerrymandering will still impact who the winner is in each congressional district. What is intrinsically dangerous about this practice, though, is that, if candidates for more than two parties are running, it can dramatically increase the chances that the presidential election will be thrown into the House of Representatives. In this situation, the Twelfth Amendment ordains that the 3 top electoral vote-getters must be considered for the presidency and so, if this method had been employed generally in the past, the elections of 1860 (Lincoln), 1912 (Wilson) and 1992 (Clinton) could well have given us presidents Stephen Douglas, Teddy Roosevelt and Ross Perot.
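
As a quick illustration, here is a small Python sketch of the congressional district method just described, applied to the hypothetical state with 3 candidates, 3 congressional districts and 5 electoral votes; the district winners are simply given as inputs, since computing them would require district-level vote counts.

    # Congressional district method (Nebraska / Maine): 2 electoral votes go to
    # the statewide plurality winner, 1 to the plurality winner in each district.

    from collections import Counter

    def district_method(statewide_winner, district_winners):
        electoral_votes = Counter({statewide_winner: 2})   # the "Senate" electors
        electoral_votes.update(district_winners)           # one per district won
        return electoral_votes

    # The example in the text: A wins the statewide vote with 50% and
    # A, B and C each carry exactly one congressional district.
    print(district_method("A", ["A", "B", "C"]))
    # Counter({'A': 3, 'B': 1, 'C': 1})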

The congressional district method is a form of proportional voting. While it could be disastrous in the framework of the Electoral College system for electing a U.S. president, proportional voting itself is successfully implemented in many countries to achieve more equitable outcomes than that furnished by plurality voting and the two party system.

A voting system which is used in many American cities such as Minneapolis and Oakland and in countries such as Australia and Ireland is known as ranked choice voting or instant-runoff voting. Voters in Maine recently voted for this system to be used in races for seats in the U.S. House of Representatives and in party primaries. Ranked choice voting emulates runoff elections but in a single round of balloting; it is a much more even-handed way to choose a winner than plurality voting. Suppose there are 3 candidates – A, B and C; then, on the ballot, each voter lists the 3 candidates in the order of that voter’s preference. For the first round, a count is made of the number of first place votes each candidate received; if for one candidate that number is a majority, that candidate wins outright. Otherwise, the candidate with the fewest first place votes, say A, is eliminated; now we go into the “second round” with only B and C as candidates and we add to B’s first place total the number of ballots for A that listed B as second choice and similarly for C. Now, except in the case of a tie, either B or C will have a clear majority and will be declared the winner. This gives the same result that staging a runoff between B and C would yield. With 3 candidates, at most 2 rounds are required; if there were 4 candidates, up to 3 rounds could be needed, etc.
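
For concreteness, here is a bare-bones Python sketch of the elimination procedure just described; it assumes every voter ranks every candidate and, like the description above, it does not worry about ties. The function name and the sample ballots are invented for the illustration.

    # Instant-runoff counting: repeatedly eliminate the candidate with the
    # fewest first-place votes and transfer those ballots to their next
    # surviving choice, until some candidate has an outright majority.

    from collections import Counter

    def instant_runoff(ballots):
        remaining = {c for ballot in ballots for c in ballot}
        while True:
            # count each ballot for its highest-ranked surviving candidate
            tallies = Counter(next(c for c in ballot if c in remaining)
                              for ballot in ballots)
            leader, leader_votes = tallies.most_common(1)[0]
            if leader_votes * 2 > len(ballots):    # an outright majority
                return leader
            weakest = min(remaining, key=lambda c: tallies.get(c, 0))
            remaining.discard(weakest)

    # 100 voters, candidates A, B and C: nobody has a majority of first
    # choices, A (fewest first-place votes) is eliminated, and A's ballots
    # break for B, who then wins the "second round."
    ballots = (40 * [["B", "A", "C"]] +
               35 * [["C", "B", "A"]] +
               25 * [["A", "B", "C"]])
    print(instant_runoff(ballots))   # B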

Most interestingly, in the Maine 2018 election, in one congressional district, no candidate for the House of Representatives gathered an absolute majority on the first round but a candidate who received fewer first place votes on that first round won on the second round when he caught up and surged ahead because of the number of voters who made him their second choice. (For details, click  HERE ).

Naturally, all this is being challenged in court by the losing side. However, for elections, Section 4 of Article 1 of the U.S. Constitution leaves implementation to the states for them to carry out in the manner they deem fit – subject to Congressional oversight but not to judicial oversight:

   “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing (sic) Senators.”

N.B. In this article of the Constitution, the senators are an exception because at that time the senators were chosen by the state legislatures and direct election of senators by popular vote had to wait for 1913 and the 17th Amendment.

At the first legal challenge to it, the new Maine system was upheld vigorously in the United States District Court based in large part on Section 4 of Article 1 above. For the ruling itself, click  HERE . But the story will not likely end so simply.

This kind of voting system is also used by the Academy of Motion Picture Arts and Sciences to select the nominees in each category, but they call it preferential voting. So to determine the five nominees for, say, Best Director, they apply the elimination process until only five candidates remain.

With ranked choice voting, in Florida, in that 2000 election, if the Nader voters listed Ralph Nader first, Al Gore (who was strong on the environment) second and George Bush third and if all Pat Buchanan voters listed Buchanan first, Bush second and Gore third, Gore would have carried the day by over 79,000 votes in the third and final round.

However one might criticize and find fault with countries like Australia and Ireland and their election systems, the fact is that voter participation is far higher than that in the U.S. For numbers, click  HERE .

Ranked voting systems are not new and have been a topic of interest to social scientists and mathematicians for a long time now. The French Enlightenment thinker, the Marquis de Condorcet, introduced the notion of the Condorcet Winner of an election – the candidate who would beat all the other candidates in a head-to-head election based on the ballot rankings; he is also the author of Condorcet’s Paradox – that a ranked choice setup might not produce a Condorcet winner. To analyze this situation, the English mathematician Charles Lutwidge Dodgson introduced the Dodgson Method, an algorithm for measuring how far the result for a given election using ranked choice voting is from producing a Condorcet Winner. More recently, the mathematician and economist Kenneth Arrow authored Arrow’s Paradox which shows that there are ways in which ranked voting can sometimes be gamed by using the idea behind Condorcet’s Paradox: for example, it is possible that in certain situations, voters can assure the victory of their most preferred candidate by listing that candidate 2nd and not 1st – the trick is to knock out an opponent one’s favorite would lose to in a head to head election by favoring a weaker opponent who will knock out the feared candidate and who will then be defeated in the final head to head election. For his efforts, Arrow was awarded a Nobel Prize; for his efforts, Condorcet had a street named for him in Paris (click  HERE  ); for his efforts, Charles Lutwidge Dodgson had to latinize his first and middle names, then reverse them to form the pen name Lewis Carroll, and then proceed to write Alice in Wonderland and Jabberwocky, all to rescue himself from the obscurity that usually awaits mathematicians. For a detailed but playful presentation on paradoxes and ranked choice voting, click  HERE .
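
To make the notion of a Condorcet Winner concrete, here is a short Python sketch that checks the head-to-head contests implied by a set of ranked ballots; the first example is the classic three-voter cycle behind Condorcet’s Paradox, the second has a clear winner. The ballots are invented for the illustration, and the function only demonstrates the definition, not the Dodgson Method itself.

    # A Condorcet Winner beats every other candidate in the head-to-head
    # contests implied by the rankings on the ballots; with a cycle, no such
    # candidate exists (Condorcet's Paradox).

    from itertools import combinations

    def condorcet_winner(ballots):
        candidates = {c for ballot in ballots for c in ballot}
        beats = {c: set() for c in candidates}
        for a, b in combinations(candidates, 2):
            a_over_b = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
            if a_over_b * 2 > len(ballots):
                beats[a].add(b)
            elif a_over_b * 2 < len(ballots):
                beats[b].add(a)
        for c in candidates:
            if len(beats[c]) == len(candidates) - 1:   # beats everyone else
                return c
        return None

    print(condorcet_winner([["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]))  # None
    print(condorcet_winner([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]))  # A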

The Constitution – U.S. Scripture III

In 1787, the Confederation Congress called for a Constitutional Convention with the goal of replacing the Articles of Confederation with a form of government that had the central power necessary to lead the states and the territories. This had to be a document very different from the Iroquois Great Law of Peace, from the Union of Utrecht and from the Articles of Confederation themselves. It had to provide for a centralized structure that would exercise legislative and executive power on behalf of all the states and territories. Were there existing historical precedents for a written document to set up a social contract of government? Mystère.
In antiquity, there was “The Athenian Constitution”; but this text, credited to Aristotle and his students at the Lyceum, is not a founding document; rather it is an after-the-fact compilation of the workings of the Athenian political system. In the Middle Ages there was the Magna Carta of 1215 with its legal protections such as trial by jury and its limitations on the power of the king. Though the Magna Carta itself was quickly annulled by a bull of the crusade-loving Pope Innocent III as “illegal, unjust, harmful to royal rights and shameful to the English people,” it served as a template for insulating the citizen from the power of the state.
There is a book entitled “The English Constitution” but this was published by Walter Bagehot in the latter part of the 19th century and, like “The Athenian Constitution,” it is an account of existing practices and procedures rather than any kind of founding document. This is the book that the 13-year-old Elizabeth is studying when taking history lessons with the provost of Eton in the TV series “The Crown.”
For an actual example of a nation’s constitution that pre-dates 1789, one has to go back to 1600, the year that the Constitution of San Marino was adopted. However, there is no evidence that the founding fathers knew anything of this at all. Since that time, this document has been the law of this land-locked micro-state and it has weathered many storms; most recently, during the Cold War, it gave San Marino its 15 minutes of fame when the citizens elected a government of Communist Party members and then peacefully voted them out of office twelve years later. For an image of St. Marinus, stonemason and founding father of this, the world’s smallest republic, click  HERE .
The English Bill of Rights of 1689, an Act of Parliament, is a constitutional document in that it transformed an absolute monarchy into a constitutional monarchy. This is the key role of a constitution – it tempers or replaces traditional monarchy based on the Divine Right of Kings with an explicit social contract. This sharing of power between the monarch and the parliament made England the first Constitutional Monarchy – in simple terms, the division of roles made the parliament the legislature and made the king the executive. To get a sense of how radical this development was, it took place only a few years after Louis XIV of France reportedly exclaimed “L’Etat, c’est moi.”
With independence brewing, the Continental Congress in May 1776 directed the colonies to draw up constitutions for their own governance. The immediate precursors to the U.S. Constitution then were the state constitutions of 1776, 1777 and 1780, born of the break with Great Britain.
An important influence on this generation of constitutional documents was the work of the French Enlightenment philosopher Montesquieu. In his Spirit of the Laws (1748), Montesquieu analyzed forms of government and how different forms matched different kinds of nations – small nations best served by republics, medium-sized nations by constitutional monarchies and very large ones by empires. His analysis broke government down into executive, legislative and (to a lesser extent) judicial powers, and he argued that, to avoid tyranny, these should be separate and independent of each other so that the power of any one of them would not exceed the combined power of the other two.
In 1776, the state of Connecticut did not adopt a new constitution but continued with its (sometimes updated) Royal Charter of 1662. In the matter of religious freedom, in Connecticut the Congregational Church was effectively the established state religion until the Constitution of 1818. Elsewhere, the antidisestablishmentarians generally lost out. For example, in New York, Georgia, Rhode Island, Pennsylvania and Massachusetts, the state constitution guaranteed freedom of religion; in New Hampshire, the Constitution of 1776 was silent on the subject, but “freedom of conscience” was guaranteed in the expanded version of 1784. Delaware’s constitution prohibited the installation of an established religion; in Virginia’s case, it took the Virginia Statute for Religious Freedom, written by Thomas Jefferson in 1786, to stave off the threat of an established church. On the other hand, the Maryland Constitution only guaranteed freedom of religion to “persons professing the Christian religion” (the same formula as in Maryland’s famous Toleration Act of 1649, the first step in the right direction in the colonies). In its 1776 document, Anglicanism was the established religion of South Carolina – this was undone in the 1778 revision. In the North Carolina Constitution, Article 32 affirms “That no person who shall deny the being of God, or the truth of the Protestant religion … shall be capable of holding any office … within this State”; New Jersey’s Constitution had a similar clause. It seems that, from a legal point of view, a state still has the authority to have its own established or favored religion, and attempts to move in this direction are still being made in North Carolina and elsewhere – the First Amendment explicitly prohibits only Congress from setting up an established religion for the country as a whole.
The challenges confronting the Constitutional Convention in Philadelphia in 1787 were many – to craft a system with a sufficiently strong central authority but not one that could morph into a dictatorship or mob rule, to preserve federalism and states’ rights (in particular, for the defense of the peculiar institution of slavery), to preserve popular sovereignty through a system of elections, etc. Who, then, rose to the occasion and provided the intellectual and political drive to get this done? Mystère.
Thomas Jefferson was the ambassador to France, John Adams was the ambassador to Great Britain and neither attended the Convention. Benjamin Franklin was one of the few who took a stand for the abolition of slavery but to no avail; Alexander Hamilton had but a bit part and his main (virtually monarchist) initiative was roundly defeated. George Washington took part only at James Madison’s urging (but did serve as president of the Convention). But it is Madison who is known as the Father of the Constitution.
Madison and the others were keen readers of the Roman historian Publius Cornelius Tacitus, who pitilessly described how the Roman senatorial class degenerated from lawmakers into sniveling courtiers as the Roman Republic gave way to the Roman Empire; Montesquieu also wrote about the end of the Roman Republic. On the other hand, the rumblings leading up to the French Revolution could be heard and the threat of mob rule was not unrealistic. So the fear of creating a tyrannical regime was very much with them.
Madison’s plan for a strong government that would not turn autocratic was, like some of the state constitutions, based on the application of ideas of Montesquieu. In fact, in Federalist No. 47, Madison (using the Federalist pseudonym Publius) developed Montesquieu’s analysis of the separation of powers further and enunciated the principle of “checks and balances.”
For his part, Hamilton pushed for a very strong central government modeled on the English system with his British Plan; however, this plan was not adopted, nor were the plans for structuring the government proposed by Virginia and by New Jersey. Instead, a balance between large and small states was achieved by means of the Connecticut Compromise: there would be a bicameral legislature with an upper house, the Senate, having two senators from each state; there would be a lower house, the House of Representatives, with each state having a number of representatives proportional to its population. While the senators would be appointed by the state legislatures, the representatives would be chosen by popular vote (restricted to men of property, of course).
This bicameral setup, with its upper house, was designed to reduce the threat of mob rule. However, it also brought up the problem of computing each state’s population for the purpose of determining representation in the House of Representatives. The resulting Three-Fifths Compromise stipulated that 3/5ths of the slave population in a state would count toward the state’s total population for this computation. This compromise also strengthened the case for an electoral college to elect the president: a direct popular vote would have given the slaveholding states no benefit from their enslaved populations, since enslaved African Americans could not vote at all – let alone cast three-fifths of a vote each! So the system of electors was introduced and each state would have one elector for each member of Congress.
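A back-of-the-envelope calculation, with purely hypothetical population numbers, shows how the three-fifths rule inflated a slaveholding state’s apportionment population and, through it, its House seats and electors; the one-representative-per-30,000 ratio below is the ceiling from Article 1, Section 2, used here only for illustration.

    # Hypothetical numbers, for illustration only.
    free_persons = 300_000       # free population of an imagined state
    enslaved_persons = 200_000   # enslaved population of that state
    persons_per_seat = 30_000    # constitutional ceiling of one representative per 30,000

    apportionment_pop = free_persons + (3 * enslaved_persons) // 5
    house_seats = apportionment_pop // persons_per_seat
    electors = house_seats + 2   # one elector per member of Congress: House seats plus 2 senators

    print(apportionment_pop, house_seats, electors)   # 420000 14 16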
Far from abolishing slavery, Article 1, Section 9, Clause 1 of the Constitution prohibited Congress from making any law that would interfere with the international slave trade until 1808 at the earliest. Jefferson, however, had stood for ending this traffic since the 1770’s and, in his second term in 1807, the Act Prohibiting the Importation of Slaves was passed, with the ban taking effect the following year. Still, the ban was often violated and some importation of slaves continued into the Civil War. In 1807, the British set up a similar ban on the slave trade in the Empire and the British Navy actively enforced it against ships of all nations off the coast of West Africa and elsewhere, technically classifying slave traders as pirates; this was an important impediment to the importation of slaves into the United States.
For Hamilton, Madison and others, the Electoral College would serve as an additional buffer between the masses and the government: one way this was to be achieved was by means of the “faithless elector,” one who does not vote for the candidate he pledged to – this stratagem would overturn a mass vote for a potential despot. This was considered a feature and not a bug; this feature is still in force and some pledged electors do employ it – in the 2016 election, seven electors voted against their pledged candidates, two against Trump and five against Clinton.
The Constitution left it to the states to determine who is eligible to vote. With some exceptions here and there at different times, the result was that only white males who owned property were eligible to vote. This belief in the “divine right of the propertied” has its roots in the work of John Locke; it also can be traced back to a utopian composition published in 1656 by James Harrington; in The Commonwealth of Oceana he describes an egalitarian society with an ideal constitution where there is a limit on how much property a family can own and rules for distributing property; there is a senate and there are elections and term limits. Harrington promulgated the idea of a written constitution arguing that a well-designed, rational document would curtail dangerous conflicts of interest. This kind of interest in political systems was dangerous back then; Oliver Cromwell blocked publication of Harrington’s work until it was dedicated to the Lord Protector himself; with the Stuart Restoration, Harrington was jailed in the Tower of London and died soon after as a result of mistreatment. For a portrait, click HERE .
In any case, it wasn’t until 1856 that even universal suffrage for white males became established in the U.S. For the enfranchisement of the rest of the population, it took the Civil War and constant militancy up to and during WWI. A uniform election day was not fixed until 1845 and there are no real federal guidelines for election standards. This issue is still very much with us, as demonstrated by a wave of voter suppression laws in the states newly released from the strictures of the Voting Rights Act by the Roberts Court with the 2013 decision in Shelby County v. Holder.
Finally, a four page document entitled Constitution of the United States of America was submitted to the states in September 1787 for ratification. This process required nine of the thirteen states; the first to ratify it was Delaware and the ninth was New Hampshire. There was no Bill of Rights and no provision for judicial review of legislation. Political parties were not expected to play a significant role and the provisions for the election of president and vice-president were so clumsy that they exacerbated the electoral crisis of 1800 which ultimately led to the duel between Aaron Burr and Alexander Hamilton.
The Confederation Congress declared the Constitution ratified in September 1788 and the first presidential election was held. Congress was seated and George Washington became President in the spring of 1789.
In American life, the Constitution has truly become unquestionable, sacred scripture and the word unconstitutional has the force of a curse. As a result, to a large extent, Americans are frozen in place and are not able to be forward looking in dealing with the myriad new kinds of problems, issues and opportunities that contemporary life creates.
For example, the Constitution provides for an Amendment process that requires ratification by 3/4ths of the states. When there were 13 states huddled together on the Eastern Seaboard, this worked fine and the first 10 amendments, The Bill of Rights, were passed quickly after the Constitution was adopted. However, today this process is most cumbersome. For example, any change in the Electoral College system would require an amendment to the Constitution; but any 13 states could block an attempt at change and the 13 smallest states, which have barely 4% of the population, would not find it in their interest to make any such change, alas. Another victim is term limits for members of Congress. It is in states’ interest to have senators and representatives with seniority so they can accede to powerful committee chairmanships etc.; this is the old Dixiecrat strategy that kept Strom Thurmond in the Senate until he was over 100 years old – but then the root of the word senator is the Latin senex which means “old man.” The Constitution does provide for a second way for it to be amended: 34 state legislatures would have to pass applications for a constitutional convention to deal with, say, term limits; this method has never been used successfully, but a group “U.S. Term Limits” is trying just that.
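The blocking arithmetic is easy to check; a few lines of Python (assuming today’s 50 states) confirm that 38 ratifications are needed and that any 13 states can therefore block an amendment.

    import math

    states = 50
    needed_to_ratify = math.ceil(0.75 * states)                 # 3/4 of 50 states = 38
    smallest_blocking_coalition = states - needed_to_ratify + 1 # 13 states suffice to block

    print(needed_to_ratify, smallest_blocking_coalition)        # 38 13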
The idea of judicial review of laws passed by Congress did come up at the Convention. Madison first wanted there to be a set of judges to assist the president in deciding whether or not to veto a bill. In the end, nothing was set down clearly in the Constitution and the practice of having courts review constitutionality came about by a kind of judicial fiat when John Marshall’s Supreme Court ruled a section of an act of Congress to be unconstitutional. Today, any law passed by Congress and signed by the President has to go through an interminable process of review in the courts and, in the end, the law means only what the courts say it means. Contrast this with the U.K., where the meaning of a law is what Parliament says it is. As a result, with the Supreme Court politicized the way it is, law is actually made today by the Court and the Congress just licks its wounds. The most critical decisions are thus made by five unelected career lawyers. Already in 1921, seeing what was happening in the U.S., the French jurist Edouard Lambert coined the phrase “gouvernement des juges” (“government by judges”) for the way judges privileged their personal slant on cases before them to the detriment of a straightforward interpretation of the letter and spirit of the law.
The reduction of the role of Congress and the interference of the courts have also contributed to the emergence of an imperial presidency. The Constitution gives only Congress the right to levy tariffs or declare war; but now the president imposes tariffs, sends troops off to war, and governs mostly by executive order. Much of this is “justified” by a need for expediency and quick decision making in running such a complex country – but this is, as Montesquieu and others pointed out, the sort of thing that led to the end of the Roman Republic.

The Articles – U.S. Scripture II

The second text of U.S. scripture, the Articles of Confederation, gets much less attention than the Declaration of Independence or the Constitution. Still it set up the political structure by which the new country was run for its first thirteen years; it provided the military structure to win the war for independence; it furnished the diplomatic structure to secure French support during that war and then to reach an advantageous peace treaty with the British Empire.
The Iroquois Great Law of Peace and the Albany Plan of Benjamin Franklin that it inspired are certainly precursors of the Articles of Confederation. In fact, in August 1775 the Continental Congress set up a meeting with the Iroquois in Albany NY. The colonists informed the Iroquois of the possibility of the colonies’ breaking with Britain, acknowledged the debt they owed them for their example and advice, and presumably tried to gauge whether the Iroquois would support the British in the case of an American move for independence. In the end, the Iroquois did stay loyal to the British (the Royal Proclamation of 1763 would have had something to do with that). The French Canadians also stayed loyal to the British; in this case too, it was likely a matter of preferring “the devil you know.”
Were there other forerunners to the Articles? Mystère.
Another precursor of the Articles was the Dutch Republic’s Union of Utrecht of 1579, which created a confederation of seven provinces in the north of the Netherlands. Like that of the Iroquois, the Dutch system left each component virtually independent except for issues like the common defense. This was not a democratic system in the modern sense in that each province was controlled by a ruling clique called the regents. For affairs of common interest, the Republic had a governing body called the Staaten (parliament) with one representative from each province. Henry Hudson sailed in 1609 in the service of the Dutch, and he loyally named the westernmost outpost of New York City Staaten Eylandt after that governing body. (This is not inconsistent with the old Vaudeville joke that the name originated with the near-sighted Henry asking “Is dat an Eyelandt?!!!” in his broken Dutch.) Hudson stopped short of naming the mighty river he navigated for himself; the Native Americans called it the Mahicanituck and the Dutch simply called it the North River.
Like the American colonists but two hundred years earlier, to achieve independence the citizens of the Dutch Republic had to rebel against the mightiest empire of the time, in this case that of Philip II of Spain. However, the Dutch Republic in its Golden Age was the most prosperous country in Europe and among the most powerful, proving its military mettle in the Anglo-Dutch Wars of the 17th century – all of which gave rise to the unflattering English language expressions Dutch Courage (bravery fueled by alcohol), Dutch Widow (a woman of ill repute), Dutch Uncle (someone not at all avuncular), Dutch Comfort (a comment like “things could be worse”) and, of course, Dutch Treat. The Dutch Republic was also remarkable for protecting civil liberties and religious freedom, keys to the domestic tranquility that did find their way into the U.S. Bill of Rights. For a painting by the Dutch Master Abraham Storck of a scene from the Four Days Battle during the Second Anglo-Dutch War, click HERE.
The Articles of Confederation were approved by the Second Continental Congress on November 15, 1777. Though technically not ratified by the states until 1781, the Articles steered the new country through the Revolutionary War and continued to be in force until 1789. The Articles embraced the federalism of the Iroquois Confederation and the Dutch Republic; they rejected the principle of the Divine Right of Kings in favor of republicanism and they endorsed the idea of popular sovereignty, affirming that power resides with the people.
The Congress of the Confederation had a unicameral legislature (like the Staaten and Nebraska). It had a presiding officer referred to as the President of the United States who organized the deliberations of the Congress, but who did not have executive authority. In all, there were ten presidents, John Hancock and Richard Henry Lee among them. John Hanson, a wealthy landowner and slaveholder from Maryland, was the first president and so wags claim that “John Hanson” and not “George Washington” is the correct answer to the trivia question “who was the first U.S. president” – by the way, the answer to the question “which president first called for a Day of Thanksgiving on a Thursday in November” is also “John Hanson.” For a statue of the man, click HERE.
The arrangement was truly federal: each state had one vote and ordinary matters required a simple majority of the states. The Congress could not levy taxes itself but depended on the states for its revenue. On the other hand, Congress could coin money and conduct foreign policy but decisions on making war, entering into treaties, regulating coinage, and some other important issues required the vote of nine states in the Congress.
Not surprisingly, given the colonists’ opposition to the Royal Proclamation of 1763, during the Revolutionary War the Americans took action to wrest the coveted land west of the Appalachians away from the British. George Rogers Clark, a general in the Virginia Militia (and older brother of William of “Lewis and Clark” fame) is celebrated for the Illinois Campaign and the captures of Kaskaskia (Illinois) and Vincennes (Indiana). For the Porte de Vincennes metro stop in Paris, click HERE.
As the French, Dutch and Spanish squabbled with the English over the terms of a treaty to end the American War of Independence and dithered over issues of interest to these imperial powers ranging from Gibraltar to the Caribbean to the Spice Islands to Senegal, the Americans and the English put together their own deal (infuriating the others, especially the French). This arrangement ceded the land east of the Mississippi and South of the Great Lakes (except for Florida) to the newly born United States. The Florida territory was transferred back once again to Spain. The French had wanted all that land east of the Mississippi and West of the Appalachians to be ceded to its ally Spain who also controlled the Louisiana Territory at this time. Given how Spain returned the Louisiana Territory to France by means of a secret treaty twenty years later, the bold American diplomatic dealings in the Treaty of Paris proved to be prescient; the Americans who signed the treaty with England were Benjamin Franklin, John Adams and John Jay.
The treaty with England was signed in 1783 and ratified by the Confederation Congress, then sitting at the Maryland State House in Annapolis, on January 14, 1784.
However, hostilities between American militias and British and Native American forces continued after Cornwallis’ defeat at Yorktown and even after the signing of the treaty that officially ended the war; in fact, the British did not relinquish Fort Detroit and surrounding settlements until Jay’s Treaty which took effect in 1796. Many thought this treaty made too many concessions to the British on commercial and maritime matters and, for his efforts, Jay was hanged and burned in effigy everywhere by anti-Federalists. Jay reportedly joked that he could find his way across the country by the light of his burning effigies. Click HERE for a political cartoon from the period.

A noted achievement of the Confederation Congress was the Ordinance of 1787 (aka the Northwest Ordinance), approved on July 13, when the Congress was seated at Federal Hall in New York City. The Northwest Territory comprised the future states of Illinois, Michigan, Wisconsin, Indiana and Ohio – the elementary school mnemonic was “I met Walter in Ohio.” Four of these names are Native American in origin; Indiana is named for the Indiana Land Company, a group of real estate investors. The Ordinance outlawed slavery in these areas (but it did include a fugitive slave clause), provided a protocol for territories’ becoming states, acknowledged the land rights of Native Americans, established freedom of navigation on lakes and rivers, established the principle of public education (including universities), … . In fact no time was wasted: Ohio University was chartered in 1787; it is located in Athens (naturally) and today has over 30,000 students. The Ordinance was re-affirmed when the Constitution replaced the Articles of Confederation.

With all these successes in war and in making peace, what drove the Americans to abandon the proven formula of a confederation of tribes or provinces and seek to replace it? Again, mystère.

While the Articles of Confederation were successful when it came to waging war and providing for new states, it was economic policy and domestic strife that made the case for a stronger central government.

Under the Articles of Confederation, the power to tax stayed with the individual states; in 1781 and again in 1786, serious efforts were made to amend the Articles so that the Confederation Congress itself could levy taxes; both efforts failed leaving the Congress without control over its own finances. During and after the war, both the Congress and individual states printed money, money that soon was “not worth a continental.”

In 1785 and 1786, a rebellion broke out in Western Massachusetts, in the area around Springfield; the leader who emerged was Daniel Shays, a farm worker who had fought in the Revolution (Lexington and Concord, Saratoga, …) and who had been wounded – but Shays had never been paid for his service in the Continental Army and he was now being pursued by the courts for debts. He was not alone, and petitions by yeoman citizens for relief from debts and taxes were not being addressed by the State Legislature. The rebels shut down court houses and tried to seize the Federal Armory in Springfield; in this, they were thwarted only by an ad hoc militia raised with money from merchants in the east of the state. Afterwards, many rebels, including Shays, swiftly escaped to neighboring states such as New Hampshire and New York, out of reach of the Massachusetts militia.

Shays’ Rebellion shook the foundations of the new country and accelerated the process that led to the Constitutional Convention of 1787. It dramatically highlighted the shortcomings of such a decentralized system in matters of law and order and in matters economic; in contrast, with the new Constitution, Washington as President was able to lead troops to put down the Whiskey Rebellion and Hamilton as Secretary of the Treasury was able to re-organize the economy (national bank, assumption of states’ debts, protective tariffs, …). Click HERE for a picture of George back in the saddle; as for “Hamilton” tickets, just wait for the movie.

The Articles continued to be the law of the land into 1789: the third text of U.S. scripture, the U.S. Constitution, was ratified by the ninth state New Hampshire on June 21, 1788 and the Confederation Congress established March 4, 1789 as the date for the country to begin operating under the new Constitution.
How did that work out? More to come. Affaire à suivre.

The Declaration – U.S. Scripture I

There are three founding texts for Americans, texts treated like sacred scripture. The first is the Declaration of Independence, a stirring document both political and philosophical; in schools and elsewhere, it is read and recited with religious spirit. The second is the Articles of Confederation; a government based on this text was established by the Second Continental Congress; despite the new country’s success in waging the Revolutionary War and in reaching an advantageous peace treaty with the British Empire, this document is not venerated by Americans in quite the same way because this form of government was superseded after thirteen years by that of the third founding text, the Constitution. These three texts play the role of secular scripture in the United States; in particular, the Constitution, although only 4 pages long without amendments, is truly revered and quoted like “chapter and verse.”

Athena, the Greek goddess, is said to have sprung full grown from the head of Zeus (and in full armor, to boot); did these founding texts just emerge from the heads and pens of the founding fathers? In particular, there is the Declaration of Independence. Did it have a precursor? Was it part of the spirit of the times, of the Zeitgeist? Mystère.

Though still under British rule in 1775 when hostilities broke out at Lexington and Concord, the colonies had had 159 years of self-government at the local level: they elected law makers, named judges, ran courts and collected taxes. Forward looking government took root early in the colonies. Already in 1619, in Virginia, the House of Burgesses was set up, the first legislative assembly of elected representatives in North America; in 1620, the Pilgrims drew up the Mayflower Compact before even landing; in 1683, the colonial assembly in New York passed the Charter of Liberties. A peculiar matrix evolved where there was slavery and indentured servitude on the one hand and progress in civil liberties on the other (one example: the trial of John Peter Zenger in New York City in 1735 and the establishment of Freedom of the Press).

In fact, the period from 1690 till 1763 is known as the “period of salutary neglect,” where the British pretty much left the colonies to fend for themselves – the phrase “salutary neglect” was coined by the British parliamentarian Edmund Burke, “the father of modern conservatism.” Salutary neglect was abandoned at the end of the French and Indian War (aka The Seven Years War) which was a “glorious victory” for the British but which left them with large war debts; their idea was to have the Americans “pay their fair share.”

During the run-up to the French and Indian War, at the Albany Convention in 1754, Benjamin Franklin proposed the Albany Plan, which was an early attempt to unify the colonies “under one government as far as might be necessary for defense and other general important purposes.” The main thrust was mutual defense and, of course, it would all be done under the authority of the British crown.

The Albany Plan was influenced by the Iroquois’ Great Law of Peace, a compact that long predated the arrival of Europeans in the Americas; this compact is also known as the Iroquois Constitution. This constitution provided the political basis for the Haudenosaunee, aka the Iroquois Confederation, a confederacy of six major tribes. The system was federal in nature and left each tribe largely responsible for its own affairs. Theirs was a very egalitarian society and for matters of group interest such as the common defense, a council of chiefs (who were designated by the senior women of their clans) had to reach a consensus. The Iroquois were the dominant Indian group in the northeast and stayed unified in their dealings with the French, British and Americans. In a letter to a colleague in 1751, Benjamin Franklin acknowledged his debt to the Iroquois with this amazing admixture of respect and condescension:

    “It would be a strange thing if six nations of ignorant savages should be capable of forming such a union, and yet it has subsisted for ages and appears indissolvable, and yet a like union should be impractical for 10 or a dozen English colonies.”

The Iroquois Constitution was also the subject of a groundbreaking ethnographic monograph. In 1724, the French Jesuit missionary Joseph-François Lafitau published a treatise on Iroquois society, Mœurs des sauvages amériquains comparées aux mœurs des premiers temps (Customs of the American Indians Compared with the Customs of Primitive Times), in which he describes the workings of the Iroquois system and compares it to the political systems of the ancient world in an attempt to establish a commonality shared by all human societies. Lafitau admired this egalitarian society where each Iroquois, he observed, views “others as masters of their own actions and of themselves” and each Iroquois lets others “conduct themselves as they wish and judges only himself.”

The pioneering American anthropologist Lewis Henry Morgan, who studied Iroquois society in depth, was also impressed with the democratic nature of their way of life, writing “Their whole civil policy was averse to the concentration of power in the hands of any single individual.” In turn, Morgan had a very strong influence on Friedrich Engels’ The Origin of the Family, Private Property, and the State: in the Light of the Researches of Lewis H. Morgan (1884). Apropos of the Iroquois Constitution, Engels (using “gentile” in its root Latin sense of “tribal”) waxed lyrical and exclaimed “This gentile constitution is wonderful.” Engels’ work, written after Karl Marx’s death, had for its starting point Marx’s notes on Morgan’s treatise Ancient Society (1877).

All of these European writers thought that Iroquois society was an intermediate stage in a progression subject to certain laws of sociology, a progression toward a society and way of life like their own. Of course, Marx and Engels did not think things would stop there.

At the end of the French and Indian War, the British prevented the colonists, who numbered around 2 million at this point, from pushing west over the Appalachians and Alleghenies with the Royal Proclamation of 1763; indeed, his Majesty George III proclaimed (using the royal “our”) that this interdiction was to apply “for the present, and until our further pleasure be known.” This proclamation was designed to pacify French settlers and traders in the area and to keep the peace with Native American tribes, in particular the Iroquois Confederation who, unlike nearly all other tribes, did side with the British in the French and Indian War. It particularly infuriated land investors such as Patrick Henry and George Washington – the latter, a surveyor by trade, founded the Mississippi Land Company in 1763 just before the Proclamation with the expectation of profits from investments in the Ohio River Valley, an expectation dashed by the Proclamation. Designs by Virginians on this region were not surprising given Virginia’s purported westward reach at that time (for a map, click HERE); even today, the Ohio River is the Western boundary of West Virginia. Washington recovered financially and at the time of his death was a very wealthy man.

Though the Royal Proclamation was flouted by colonists who continued to migrate west, it was the first in the series of proclamations and acts that finally outraged the colonists to the point of armed rebellion. Interestingly, in Canada the Royal Proclamation still forms the legal basis for the land rights of indigenous peoples. Doubtless, this has worked out better for both parties than the Discovery Doctrine, which has been in force in the U.S. since independence – for details, click HERE.

The Proclamation was soon followed by the hated Stamp Act, which was a tax directly levied by the British government on colonists, as opposed to a tax coming from the governing body of a colony. This led to the Stamp Act Congress which was held in New York City in October 1765. It was attended by representatives from 9 colonies and famously published its Declaration of Rights and Grievances which included the key point “There should be no taxation without representation.” This rallying cry of the Americans goes back to the English Bill of Rights of 1689 which asserts that taxes can only be enacted by elected representatives: “levying taxes without grant of Parliament is illegal.”

Rumblings of discontent continued as the British Parliament and King continued to alienate the colonists.

In the spring of 1774, Parliament abrogated the Massachusetts Charter of 1691, which gave people a considerable say in their government. In September, things boiled over not in Boston, but in the humble town of Worcester. Militiamen took over the courts and, in October, independence from Britain was declared at Town Meeting. The Committees of Correspondence assumed authority. (For a most useful guide to the pronunciation of “Worcester” and other Massachusetts place names, click HERE. By the way, the Worcester Art Museum is outstanding.)

From there, things moved quickly and not so quickly. While the push for independence was well advanced in Massachusetts, the delegates to the First Continental Congress in the fall of 1774 were not prepared to take that bold step: in a letter John Adams wrote “Absolute Independency … Startle[s] People here.”  Most delegates attending the Philadelphia gathering, he warned, were horrified by “The Proposal of Setting up a new Form of Government of our own.”

But acts of insurrection continued. For example, in December 1774 in New Hampshire, activists raided Fort William and Mary, seizing powder and weaponry. Things escalated leading to an outright battle at Lexington and Concord in April, 1775.

Two years on from the First Congress, the delegates of the Second Continental Congress (which had convened in Philadelphia on May 10, 1775) were ready to move in the direction of independence. In parallel, on June 12, 1776 at Williamsburg, the Virginia Constitutional Convention adopted the Virginia Declaration of Rights, which called for a break from the Crown and which famously begins with

    Section 1. That all men are by nature equally free and independent and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.

This document was authored principally by George Mason, a planter and friend of George Washington. Meanwhile back in Philadelphia, Thomas Jefferson (who would have been familiar with the Virginia Declaration) was charged by a committee with the task of putting together a statement presenting the views of the Second Continental Congress on the need for independence from the British. After some edits by Franklin and others, the committee brought forth the founding American document, The Declaration of Independence, the second paragraph of which begins with that resounding universalist sentence:

    We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

And it continues

    Governments are instituted among Men, deriving their just powers from the consent of the governed

The ideas expressed in the Virginia Declaration of Rights and the Declaration of Independence are radical and they had to have legitimacy. So, we are back to the original mystère: where did that legitimacy come from?

Clearly, phrases like “all men are by nature equally free and independent” and “all men are created equal” reverberate with an Iroquois, New World sensibility. Scholars also see here the hand of John Locke, most literally in the use of Locke’s  phrase “the pursuit of Happiness”; they also see the influence of Locke in the phrase “consent of the governed” – the idea of popular sovereignty being a concept closely associated with social contract philosophers of the European Enlightenment such as Locke and Rousseau.

Others also see the influence of the Scottish Enlightenment, that intellectual flowering with giants like David Hume and Adam Smith. The thinkers whose work most directly influenced the Americans include Thomas Reid (“self-evident” is drawn from the writings of this founder of the Common Sense School of Scottish Philosophy) and Francis Hutcheson (“unalienable rights” is drawn from his work on natural rights – Hutcheson is also known for his impact on his student Adam Smith, who later held Hutcheson‘s Chair of Moral Philosophy at Glasgow). Still others hear echoes of Thomas Paine and the recently published Common Sense – that most vocal voice for independence.

Jefferson would have been familiar with the Scottish Enlightenment; he also would have read the work of Locke and, of course, Paine’s pamphlet. The same most probably applies to George Mason as well. In any case, the Declaration of Independence went through multiple drafts, was read and edited by others of the Second Continental Congress and eventually was approved by the Congress by vote; so it must also have reflected a generally shared sense of political justice.

On July 2, 1776, the Second Continental Congress did take that bold step and voted in favor of Richard Henry Lee’s motion for independence from Great Britain. On July 4th, it officially adopted the Declaration of Independence.

Having declared independence from the British Crown, the representatives at the Second Continental Congress now had to come up with a scheme to unify a collection of fiercely independent political entities. Where could they have turned for inspiration and example in a political landscape dominated by powerful monarchies? Mystère. More to come.

Champagne

The Rule of St. Benedict goes back to the beginning of the Dark Ages. Born in Umbria in central Italy at the end of the 5th century, just as the Western Roman Empire was collapsing, Benedict is known as the Father of Western Monasticism; he laid down a social system for male religious communities that was the cornerstone of monastic life in Western Europe for a thousand years and one that endures to this day. It was a way of life built around prayer, manual work, reading and, of course, submission to authority. The Benedictine abbey was led by an abbot; under him were the priests, who were the ones to copy manuscripts and to sing Gregorian chant; then there were the brothers, who bore the brunt of much of the physical work, and lay people, who bore as much or more. The Rule of St. Benedict is followed not only by the Benedictines, the order he founded, but also by the Cistercians, the Trappists and others.

The monasteries are credited with preserving Western Civilization during the Dark Ages; copying manuscripts led to the marvelous decorative calligraphy of the Book of Kells and other masterpieces – the monks even introduced the symbol @ (the arobase, aka the “at sign”) to abbreviate the Latin preposition ad. They are also credited with sending out those fearless missionaries who brought literacy and Christianity to pagan Northern tribes, bringing new people into the orbit of Rome: among them were Winfrid (aka Boniface) from Wessex, who chopped down sacred oak trees to proselytize German tribes, and Willibrord from Northumbria, who braved the North Sea to convert the fearsome Frisians, destroying pagan sanctuaries and temples in the process.

The monasteries also accumulated vast land holdings (sometimes as donations from aging aristocrats who were more concerned with making a deal with God than with the future of their children and heirs). With land and discipline came wealth, and monasteries became the target of choice for Viking marauders. At the time of the Protestant Reformation, the monasteries fell victim to iconoclasts and plundering potentates. Henry VIII dissolved the Cistercian abbey at Rievaulx in Yorkshire and other sites, procuring jewelry for Anne Boleyn in the process. This was the kind of thing that provoked the crypto-Catholic poet Shakespeare to lament about the

Bare ruin’d choirs, where late the sweet birds sang

Even today, though in ruins, Rievaulx is magnificent as a visit to Yorkshire or a click HERE will confirm. It can also be noted that the monks at Rievaulx abandoned the Rule of St. Benedict in the 15th century and became rather materialistic; this probably made them all the more tempting a target for Henry VIII.

We also owe the great beers to the monks. Today, two of the most sought after Belgian beers in the world are Trappiste and Abbaye de Leffe. The city of Munich derives its name from the Benedictine monastery that stood in the center of town. Today’s Paulaner and Augustiner beers trace back to monasteries. But what was it that turned other-worldly monks into master brewers? Mystère.

The likely story is that it was the Lenten fast that drove the monks to secure a form of nourishment acceptable to the prying papal legates. Theirs was a liquid-only fast from Ash Wednesday through Holy Saturday, including Sundays. The beers they crafted passed muster as non-solid food and were rich in nutrients. There are also stories that the strong brews were considered undrinkable by Italian emissaries from Rome and so were authorized to be drunk during Lent as additional mortification of the flesh.

Indeed, the followers of the Rule of St. Benedict didn’t stop there. We also owe bubbly to the Benedictines. The first documented sparkling wine was made in 1531 at the Benedictine Abbey of St. Hilaire in the town of Limoux in the south of France – close to the Mediterranean, between Carcassonne and Perpignan – a mere 12 miles from Rennes-le-Château (of Da Vinci Code fame). Though wine-making went back centuries and occasionally a wine would have a certain effervescence, these churchmen discovered something new. What was this secret? Mystère.

These Benedictine monks were the first to come upon the key idea for bubbly – a second fermentation in the bottle. Their approach is what is now called the “ancestral method” – first making a white wine in a vat or barrel (“cuve” in French), interrupting the vinification process, putting the wine in bottles and adding yeast, grape sugar or some alcohol, corking it and letting it go through a second fermentation in the (hopefully strong and well-sealed) bottle. This is the technique used for the traditional Blanquette de Limoux; it is also used for the Clairette de Die. The classic Blanquette de Limoux was not strong in alcohol, around 6-8%, and it was rather doux (sweet) and not brut (dry). Today’s product comes in brut and doux versions and is 12.5% alcohol.

By the way, St. Hilaire himself was not a monk but rather a 4th century bishop and defender of the orthodoxy of the Nicene Creed (the one intoned in the Catholic and Anglican masses and other services); he is known as the “Athanasius of the West” which puts him in the company of a man with a creed all of his own – the Athanasian Creed forcefully affirms the doctrine of the Triune God and is read on Trinity Sunday.

The original ancestral method from Limoux made for a pleasant quaff, but not for the bubbly of today. Did the Benedictines come to the rescue once again? What devilish tricks did they come up with next? Or was it done by some other actors entirely? Mystère.

This time the breakthrough to modern bubbly took place in the Champagne region of France. This region is at the point where Northern and Southern Europe meet. In the late middle ages, it was one of the more prosperous places in Europe, a commercial crossroads with important fairs at cities like Troyes and Rheims. The cathedral at Rheims is one of the most stunning in Europe and the French kings were crowned there from Louis the Pious in 816 to Charles X in 1825. In fact, Rheims has been a city known to the English speaking world for so long that its name in French (Reims) has diverged from its older spelling which we still use in English. It is also where English Catholics in the Elizabethan and Jacobean periods published the Douay-Rheims translation of the Latin Vulgate.

Enter Pierre Pérignon, son of a prosperous bourgeois family, who in 1668 joined the Benedictine monastery at St. Pierre de Hautvillers. The order by that time deigned to accept non-noble commoners as monks and would dub them dom, a title drawn from the Latin word for lord, dominus, to make up for their lack of a traditional aristocratic handle.

Dom Pérignon was put in charge of wine making and wine storage, both critical to the survival of the monastery which had fallen on hard times with only a few monks left and things in a sorry state. The way the story is told in France, it was during a pilgrimage south and a stay with fellow Benedictines at the monastery of St. Hilaire in Limoux that he learned the ancestral method of making sparkling wine. However he learned of their techniques, he spent the rest of his life developing and perfecting the idea. By the time of his death in 1715, champagne had become the preferred wine at the court of Louis XIV and the wine of choice of the fashionable rich in London. Technically, Dom Pérignon was a master at choosing the right grapes to blend to make the initial wine; then he abandoned the ancestral technique of interrupting the fermentation of the wine in the cuve or vat and let the wine complete its fermentation; next, to deal with the problem of dregs caused by the second fermentation in the bottle, he developed the elaborate practice of rotating each bottle by a quarter turn, a step repeated every day for two or more months and known as the remuage (click HERE for an illustration); it is said that professionals can do 40,000 bottles a day.

To all that, one must add that he found a solution to the “exploding bottle problem”; as the pressure of the CO2 that creates the bubbles builds up during the fermentation in the bottle, bottles can explode spontaneously and even set off a chain reaction. To deal with this, Dom Pérignon turned to bottle makers in London who could make bottles that could withstand the build-up of all that pressure. Also, that indentation in the bottom of the bottle (the punt or kick-up in English, cul in French) was modified; better corks from Portugal too entered into it.

Putting all this together yielded the champagne method. Naturally, wine makers in other parts of France have applied this process to their own wines, making for some excellent bubbly; these wines used to carry the label “Méthode Champenoise” or “Champagne Method.” While protection of the right to the term Champagne itself is even included in the Treaty of Versailles, more recently, a restriction was made to the effect that only wines from the Champagne region could even be labeled “Champagne Method.” So the other wines produced this way are now called crémants (a generic term for sparkling wine). Thus we have the Crémant d’Alsace, the Crémant de Bourgogne, the Crémant de Touraine and even the Crémant de Limoux. All in all, these crémants constitute the best value in French wines on the market. Other countries have followed the French lead and do not use the label “Champagne” or “Champagne Method” for sparkling wines; even the USA now follows the international protocol (although wines labeled Champagne prior to 2006 are exempt).

Admittedly, champagne is a marvelous beverage. It has great range and goes well with steak and with oysters. The sound of the pop of a champagne cork means that the party is about to begin. Of course, one key thing is that champagne provides a very nice “high”; and it does that without resorting to high levels of alcoholic content. Drolly, at wine tastings and similar events, the quality of the “high” provided by the wine is never discussed directly – instead they skirt around it, talking about legs and color and character and what not, while the key thing is the quality of the “high” and the percentage of alcohol in the wine. So how do you discuss this point in French itself? Mystère. In effect, there is no way to deal with all this in French, la Langue de Voltaire. The closest you can come to “high” is “ivresse” (drunkenness), but that has a negative connotation; you might force the situation and try something like “douce ivresse” (a gentle tipsiness), but that doesn’t work either. Tis a mystère without a solution, then. But the “high” is the main difference between a $60 bottle of Burgundy and a $15 bottle of Pinot Noir – do the experiment.

There are some revisionist historians who try to diminish the importance of Dom Pérignon in all this. But he has been elevated to star status as an icon for the Champagne industry. As an example, click HERE for a statue in his honor erected by Moët-Chandon; click HERE for an example of an advertisement they actually use to further sales.

So the secret to the ancestral method and the champagne method is that second fermentation in the bottle. But now we have popular alternatives to champagnes and crémants in the form of Prosecco and Cava. If these very sparkling wines are not made with the champagne method, how are they made then? Mystère.

In fact, the method used for these effervescent wines does not require that second fermentation in the bottle, a simplification made possible by the advent of steel pressure tanks. This newer method carries out the secondary fermentation in closed stainless steel tanks that are kept under pressure. This simplifies the entire production process and makes the bubbly much less expensive to produce. The process was first invented by Federico Martinotti in Italy, then improved by Eugène Charmat in France – all this in the 1890’s. So it is known as the metodo italiano in Italy and the Charmat method almost everywhere else. In the 1960’s, things were improved further to allow for less doux, more brut wines. They are now extremely popular and considered a good value by the general wine-loving public.

Finally, there is the simplest method of all to make sparkling wine or cider – inject carbon dioxide directly into the finished wine or cider, much like making seltzer water from plain water with a SodaStream device. In fact, this is the method used for commercial ciders. Please don’t try this at home with a California chardonnay if you don’t have strong bottles, corks and protective wear.

Indeed, beer and wine have played an important role in Western Civilization; wine is central to rituals of Judaism and Christianity; the ancient Greeks and Romans even had a god of wine. In fact, beer and wine go back to the earliest stages of civilization. When hunter gatherers metamorphosed into farmers during the Neolithic Revolution and began civilization as we know it, they traded a life-style where they were taller, healthier and longer-lived for one in which they were shorter, less healthy, had a shorter life span and had to endure the hierarchy of a social structure with priests, nobles, chiefs and later kings.  On the other hand, archaeological evidence often points to the fact that one of the first things these farmers did do was to make beer by fermenting grains. Though the PhD thesis hasn’t been written yet, it is perhaps not unsafe to conclude that fermentation made this transition bearable and might even have been the start of it.

North America III

When conditions allowed, humans migrated across the Bering Land Bridge moving from Eurasia to North America thousands of years before the voyages of discovery of Columbus and other European navigators. That raises the question whether there were Native Americans who encountered Europeans before Columbus. If so, were these encounters of the first, second or third kind? Mystère.
For movement west by Europeans before Columbus, first we have the Irish legend of the voyage of St. Brendan the Navigator. Brendan and his 16 mates (St. Malo among them) sailed in a currach – an Irish fisherman’s boat with a wooden frame over which are stretched animal skins (nowadays they use canvas and sometimes anglicize the name to curragh). These seafarers reportedly reached lands far to the West, even Iceland and Newfoundland in the years 512-530 A.D. All this was presented as fact by nuns in parochial schools of yore, but, begorrah, there is archaeological evidence of the presence of Irish visitors on Iceland before any Viking settlements there. Moreover, in the 1970’s the voyage of St. Brendan was reproduced by the adventurer Tim Severin and his crew, which led to a best-selling book The Brendan Voyage, lending further credence to the nuns’ version of history. However, there is no account in the legends of any contact with new people; for contact with a mermaid, click HERE.
In the late 9th century, Viking men accompanied by (not so willing) Celtic women reached Iceland and established a settlement there (conventionally dated to 874 A.D.). Out of Iceland came the Vinland Saga of the adventures of Leif Erikson (who converted to Christianity and so gained favor with the later saga compilers) and of his fearless sister Freydis Eriksdottir (who also led voyages to Vinland but who stayed true to her pagan roots). It has been established that the Norse did indeed reach Newfoundland and start to found a colony there; the site at L’Anse aux Meadows has yielded abundant archaeological evidence of this – all taking place around 1000 A.D. The Norse of the sagas called the indigenous people they encountered in Vinland the Skraeling (which can be translated as “wretched people”). These people were not easily intimidated; there were skirmishes and more between the Skraeling (who did have bows and arrows) and the Norse (who did not have guns). In one episode, Freydis grabs a fallen Viking’s sword and drives the Skraeling attackers off on her own – click HERE for a portrait of Freydis holding the sword to her breast.
Generally speaking, the war-like nature of the Skraeling is credited with keeping the Vikings from establishing a permanent beachhead in North America. So these were the first Native Americans to encounter Europeans, solving one mystery. But exactly which Native American group were the Skraeling? What is their migration story? Again mystère.
The proto-Inuit or Thule people, ancestors of the modern Inuit, emerged in Alaska around 1000 B.C. Led by their sled dogs, they "quickly" made their way east across Arctic Canada, then down into Labrador and Newfoundland. The proto-Inuit even made their way across Baffin Bay to Greenland. What evidence there is supports the idea that these people were the fierce Skraeling of the sagas.
As part of the movement west, again according to the sagas, Norse settlers came to Greenland around 1000 A.D. led by the notorious Erik the Red, father of both Leif and Freydis. Settlers from Norway as well as Iceland joined the Greenland colony and it became a full-blown member of medieval Christendom, what with churches, a monastery, a convent and a bishop. In relatively modern times, around 1200 A.D., the proto-Inuit reached far enough south in Greenland to encounter the Europeans there. Score one more for these intrepid people. These are the only two pre-Columbian encounters between Europeans and Native Americans that are well established.
In the end, though, the climate change of the Little Ice Age (which began around 1300 A.D.) and the Europeans’ impact on the environment proved too much and the Greenland colony died out sometime in the 1400’s. The proto-Inuit population with their sled dogs stayed the course (though not without difficulty) and survived. As a further example of the impact of the climate on Western Civilization, the Little Ice Age practically put an end to wine making in the British Isles; the crafty and thirsty Scots then made alcohol from grains such as barley and Scotch whisky was born.
The success of the proto-Inuit in the Arctic regions was based largely on their skill with the magnificent sled dogs that their ancestors had brought with them from Asia. The same can be said for the Aleut people and their Alaskan Malamute Dog. Both these wolf-like marvels make one want to read or reread The Call of the Wild; for a Malamute picture, click HERE.
We know the Clovis people and other populations in the U.S. and Mexico also had dogs that had come from Siberia. But today in the U.S. we are overwhelmed with dogs of European origin – the English Sheepdog, the Portuguese Water Dog, the Scottish Terrier, the Irish Setter, the French Poodle, the Cocker Spaniel, and on and on. What happened to the other dogs of the Native Americans? Are they still around? Mystère.
The simple answer is that, for the most part, in the region south of the Arctic, the native dogs were simply replaced by the European dogs. However, for the lower 48, it has been established recently that the Carolina Dog, a free range dingo, is Asian in origin. In Mexico, the Chihuahua is the only surviving Asian breed; the Xoloitzcuintli (aka the Mexican Hairless Dog) is thought to be a hybrid of an original Asian dog and a European breed.  (Some additional Asian breeds have survived in South America.)
Still it is surprising that the native North Americans south of the Arctic regions switched over so quickly and so completely to the new dogs. A first question that comes to mind is whether or not these two kinds of dogs were species of the same animal; but this question can't be answered since dogs and wolves are already hard to distinguish – they can still interbreed and their DNA split goes back at most 30,000 years. A more tractable formulation would be whether or not the Asian dogs and the European dogs are the issue of the same domestication event or of different domestication events. However, "it's complicated." One sure thing we know comes from the marvelous cave at Chauvet-Pont-d'Arc in southern France, where footprints of a young child walking side by side with a dog or wolf have been found which date back some 26,000 years. This would point to a European domestication event; however, genetic evidence tends to support an Asian origin for the dog. Yet another theory backed by logic and some evidence is that both events took place but that subsequently the Asian dogs replaced the original European dogs in Western Eurasia.
For one thing, the dogs that came with the Native Americans from Siberia were much closer to the dogs of an original Asian self-domestication that took place in China or Mongolia (according to PBS). In any case, they would not have gone through as intense and specialized a breeding process as dogs in populous Europe did, a process that made the European dogs more useful to humans south of the Arctic and more compatible with domestic animals such as chickens, horses and sheep. Until the arrival of Europeans, the Native Americans did not have such domesticated livestock and did not have resistance to the poxes associated with them.
The role of dogs in the lives of people grows ever more important and dogs continue to take on new work roles – service dogs, search and rescue dogs, guide dogs, etc. People with dogs reportedly get more exercise, get real emotional support from their pets and live longer. And North America has given the world the wonderful Labrador Retriever. The "Lab" is a descendant of the St John's Water Dog which was bred in the Province of Newfoundland and Labrador; this is the only Canadian province to bear a Portuguese name – most probably for the explorer João Fernandes Lavrador who claimed the area for the King of Portugal in 1499, an area purportedly already well known to Portuguese, Basque and other fearless fishermen who were working the Grand Banks before Columbus (but we have no evidence of encounters of such fishermen with the Native Americans). At one point later in its development, the Lab was bred in Scotland by the Duke of Buccleuch, whose wife the Duchess was Mistress of the Robes for Queen Victoria (played by Diana Rigg in the PBS series Victoria – for the actual duchess, click HERE). From Labrador and Newfoundland, the Lab has spread all over North America and is now the most popular dog breed in the U.S. and Canada – and in the U.K. as well.

North America II

At various times in history, the continents of Eurasia and North America have been connected by the Bering Land Bridge, which forms when sea levels drop and the shallow floor of the Bering Sea is exposed (click HERE for a dynamic map that shows changes in sea level over the last 21,000 years).

When conditions allowed, humans (with their dogs) migrated across the Bering Land Bridge moving from Eurasia to North America. It is not certain exactly when this human migration began and when it ended but a typical estimated range is from 20,000 years ago to 10,000 years ago. It has also been put forth that some of these people resorted to boats to ferry them across this challenging, changing area.

DNA analysis has verified that these new arrivals came from Siberia. It has also refuted Thor Heyerdahl's "Kon Tiki Hypothesis" that Polynesia was settled by rafters from Peru – Polynesian DNA and American DNA do not overlap at all. On the other hand, there is DNA evidence that some of the trekkers from Siberia had cousins who lit out in the opposite direction and became the aboriginal people of New Guinea and of Australia!

The Los Angeles area is famous for its many attractions. Among them is the site of the La Brea Tar Pits; click HERE for a classic image. This is the resting place of countless animals who were sucked into the primeval tar ooze here over a period of thousands and thousands of years. What is most striking is that so many of them have gone extinct, especially large animals such as the camel, the horse, the dire wolf, the giant sloth, the American lion, the sabre-toothed tiger, … . In fact, apart from survivors such as the jaguar, the musk ox, the moose, the caribou and the bison, most of the large mammals of North America had disappeared by 8,000 years ago. Humans arrived in North America not so many years before and we know they were successful hunters of large mammals in Eurasia. So, as in radio days, the $64 question is: did humans cause the extinction of these magnificent animals in North America? Mystère.

The last Ice Age lasted from 110,000 years ago to 11,700 years ago (approximately, of course). During this period, glaciers covered Canada, Greenland and states from Montana to Massachusetts. In fact, the Bering Land Bridge was created when sea levels dropped because of the massive accumulation of frozen water locked in the glaciers. The glaciers didn't just sit still; rather they moved south, receded to the north and repeated the cycle multiple times. In the process, they created the Great Lakes, piled up the moraines that form Long Island, and amassed that mound of rubble known as the Catskill Mountains – technically the Catskills are not mountains but rather a mass of monadnocks. In contrast, the Rockies and Appalachians are true mountains, created by mighty tectonic forces that thrust them upward. In any case, south of the glaciers, North America was home to many large mammal species.

As the Ice Age ended, much of North America was populated by the hunting group known as the Clovis People. Sites with their characteristic artifacts have been excavated all over the lower 48 and northern Mexico. For a picture of their characteristic spear head, click HERE. The term Clovis refers to the town of Clovis NM, scene of the first archaeological dig that uncovered this culture; luckily this dig preceded any work at the nearby town of Truth Or Consequences NM.

The Clovis people, who dominated North America south of the Arctic at the time of these mass extinctions, were indeed hunters of large and small mammals and that has given rise to the “Clovis overkill hypothesis” – that it was the Clovis people who hunted the horse and other large species to extinction. The hypothesis is supported by the fact that Clovis sites have been found with all kinds of animal remains – mammoths, horses, camels, sloths, tapirs and other species.

For one thing, these animals did not co-evolve with humans and had not honed defensive mechanisms against them – unlike, say, the zebra in Africa (a descendant of the original North American horse), which is aggressive toward humans; zebras have neither been hunted to extinction nor domesticated. In fact, the sociology of grazing animals such as the horse can work against them when pursued by hunters. The herd provides important defense mechanisms – mobbing, confusing, charging, stampeding, plus the presence of bulls. But since these animals employ a harem system and the herd has many cows per bull, the solitary males or the males in very small groups who are not part of the herd are natural targets for hunting. Even solitary males who use speed of flight as their first line of defense can fall victim to persistence hunting, a tactic known to have been used by humans – pursuing prey relentlessly until it is immobile with exhaustion: a horse can run the 10 furlongs of the Kentucky Derby faster than a human in any weather, but on a hot day a sweating human can beat a panting horse in a marathon. An example of persistence hunting in more modern times is stag hunting on horseback with yelping dogs – the hunt ends when the stag is exhausted and immobile and the coup de grace is administered by the Master of the Hunt. As for dogs, they were new to the North American mammals and certainly would have aided the Clovis people in the hunt.

Also there is a domino effect as an animal is hunted to extinction: the predator animals that depend on it are also in danger. By way of example, it is thought that the sabre-toothed tiger disappeared because its prey the mammoth went extinct.

Given all this, how did the bison and caribou survive? In fact, for the bison, it was quite the opposite: there was a bison population explosion. Given that horses and mammoths have the same diet as bison, some scientists postulate that competition with the overly successful bison drove the others out. Another factor is that bison live in huge herds while animals like horses live in small bands. It is theorized that the caribou, who also travel in massive herds, survived by pushing ever further north into the arctic regions where ecological conditions were less hostile for them and less hospitable to humans and others.

However, all this was happening at a dramatic period, the end of the Ice Age. So warming trends, the end of glaciation and other environmental changes could have contributed to this mass extinction: open spaces were replaced by forests, reducing habitat; heavy coats of fur became a burden; … In fact, the end of the last Ice Age is also the end of the much longer Pleistocene period; this was followed by the much warmer Holocene period, which is the one we are still in today. So the Ice Age and the movement of the glaciers suddenly ended; this was global warming to a degree that would not be seen again until the present time. The warming that followed the Ice Age would also have changed the ecology of insects, arachnids, viruses et al, with a potentially lethal impact on plant life and on mega fauna. Today we are witnessing a crisis among moose caused by the increase of the winter tick population which is no longer kept in check by cold winters. We are also seeing insects unleashed to attack trees. Along the East Coast, it is the southern pine beetle which has now reached New England – on its Shermanesque march north, this beetle has destroyed forests, woods, groves, woodlands, copses, thickets and stands of once proud pine trees. It is able to move north because the minimum cold temperature in the Mid-Atlantic states has warmed by about 7 degrees Fahrenheit over the last 50 years. In Montana and other western states it is the mountain pine beetle and the endless fires that are destroying forests.

Clearly, the rapidly occurring and dramatic transformations at the end of the Ice Age could have disrupted things to the point of causing widespread extinctions – evolution did not have time to adjust.

And then there is this example where the overkill hypothesis is known not to apply. The most recent extinction of mammoths took place on uninhabited Wrangel Island in the Arctic Ocean off the Chukchi Peninsula in Siberia, only 4,000 years ago, and that event is not attributed to humans in any way. The principal causes cited are environmental factors and genetic meltdown – the accumulation of bad traits due to the small size of the breeding population.

In sum, scientists make the case that climate change and environmental factors were the driving forces behind these extinctions and that is the current consensus.

So it seems that the overkill hypothesis is an example of the logical fallacy “post hoc ergo propter hoc” – A happened after B, therefore B caused A. By the way, this fallacy is the title of the 2nd episode of West Wing where Martin Sheen as POTUS aggressively invokes it to deconstruct his staff’s analysis of his electoral loss in Texas; in the same episode, Rob Lowe shines in a subplot involving a call-girl who is working her way through law school! Still, the circumstantial evidence is there – humans, proven hunters of mammoths and other large fauna, arrive and multiple large mammals disappear.

The situation for the surviving large mammals in North America is mixed at best. The bison are threatened by a genetic bottleneck (a depleted gene pool caused by the Buffalo Bill era slaughter to make the West safe for railroads), the moose by climate change and tick borne diseases, the musk ox’s habitat is reduced to arctic Canada, the polar bear and the caribou have been declared vulnerable species, the brown bear and its subspecies the grizzly bear also range over habitats that have shrunk. The fear is that human involvement in climate change is moving things along so quickly that future historians will be analyzing the current era in terms of a new overkill hypothesis.

North America I

Once upon a time, there was a single great continent – click HERE – called Pangaea. It formed 335 million years ago, was surrounded by the vast Panthalassic Ocean and only began to break apart about 175 million years ago. North America dislodged itself from Pangaea and started drifting west; this went on until its plate rammed into the Pacific plate and the movement dissipated but not before the Rocky Mountains swelled up and reached great heights.

As the North American piece broke off, it carried flora and fauna with it. But today we know that many land species here did not come on that voyage from Pangaea; even the iconic American bison (now the National Mammal) did not originate here. How did they get to North America? Something of a mystère.

Today the Bering Strait separates Alaska from the Chukchi Peninsula in Russia but, over the millennia, there were periods when North America and Eurasia were connected by a formation known as the Bering Land Bridge, aka Beringia. Rises and falls in sea level due to glaciation, rather than continental drift, are what create and destroy the bridge. When the land bridge resurfaces, animals make their way from one continent to the other in both directions.

Among the large mammals who came from Eurasia to North America by means of the Bering Land Bridge were the musk ox (click HERE), the steppe mammoth, the steppe bison (ancestor of our bison), the moose, the cave lion (ancestor of the American lion) and the jaguar. The steppe mammoth and the American lion are extinct today.

Among the large mammals native to North America were the Columbian mammoth, the sabre-toothed tiger, and the dire wolf. All three are now extinct; for an image of the three frolicking together in sunny Southern California, click HERE .

The dire wolf lives on, however, in the TV series Game of Thrones and this wolf is the sigil of the House of Stark; so it must also have migrated from North America to the continent of Westeros – who knew !

Also among the large mammals native to North America are the short-faced bear, the tapir, the brown bear, and the caribou (more precisely, this last is native to Beringia). The first two have sadly vanished from North America – the short-faced bear is extinct outright, while the tapir survives only further south.

In school we all learned how the Spanish conquistadors brought horses to North America, how the Aztecs and Incas had never seen mounted troops before and how swiftly their empires fell to much smaller forces as a result. An irony here is that the North American plains were the homeland of the species Equus and horses thrived there until some 10,000 years ago. Indeed, the horse and the camel are among the relatively few animals to go from North America to the more competitive Eurasia; these odd-toed and even-toed ungulates prospered there and in Africa – even Zebras are descended from the horses that made that crossing.  The caribou also crossed to Eurasia where they are known as reindeer.

Similarly, South America split off from Pangaea and drifted west creating the magnificent range of the Andes Mountains before stopping.

The two New World continents were not connected until volcanic activity and plate tectonics created the Isthmus of Panama about 2.8 million years ago – some say earlier (click HERE). The movement of animals between the American continents via the Isthmus of Panama is called the Great American Interchange. Among the mammals who came from South America to North America was the mighty ground sloth (click HERE).

This sloth is now extinct but some extant smaller South American mammals such as the cougar, armadillo, porcupine and opossum also made the crossing. The opossum is a marsupial; before the Great American Interchange, there were no marsupials in North America, just as there are none in Eurasia or Africa.

The camel, the jaguar, the tapir, the short-faced bear and the dire wolf made their way across the Isthmus of Panama to South America. The camel is the ancestor of today’s llamas, vicuñas, alpacas and guanacos. The jaguar and tapir have found the place to their liking, the short-faced bear has evolved into the spectacled bear but the dire wolf is not found there today; it is not known if it has survived on the fictional continent of Essos.

The impact of the movement of humans and dogs into North America is a subject that needs extensive treatment and must be left for another day (post North America III). But one interesting side-effect of the arrival of humans has been the movement of flora in and out of North America. So grains such as wheat, barley and rye have been brought here from Europe, the soybean from Asia, etc. In the other direction, pumpkins, pecans, and cranberries have made their way from North America to places all over the planet. Two very popular vegetables that originated here and that have their own stories are corn and the sweet potato.

North America gave the world maize. This word came into English in the 1600s from the Spanish maiz which in turn was based on an Arawak word from Haiti. The Europeans all say “maize” but why do Americans call it “corn”? Mystère.

In classical English, corn was a generic term for the locally grown grain – wheat or barley, or rye … . Surely Shakespeare was not thinking of maize when he included this early version of Little Boy Blue in King Lear where Edgar, masquerading as Mad Tom, recites

Sleepest or wakest thou, jolly shepherd?

Thy sheep be in the corn.

And for one blast of thy minikin mouth,

Thy sheep shall take no harm.

So naturally the English colonists called maize "Indian corn" and the "Indian" was eventually dropped for maize in general – although Indian corn is still in widespread use for a maize with multicolored kernels, aka flint corn. If you want to brush up on your Shakespeare, minikin is an archaic word that came into English from the Dutch minneken and that means small.

That other culinary gift of North America whose name has a touch of mystère is the sweet potato. In grocery stores, one sees more tubers labeled yam than one does sweet potato. However, with the rare exception of imported yams, all these are actually varieties of the sweet potato. The yam is a different thing entirely – it is a perennial herbaceous vine, while a sweet potato is in the morning glory family (and is the more nutritious vegetable). The yam is native to Africa and the term yam was probably brought here by West Africans.

The sweet potato has still another language problem: it was the original vegetable that was called the batata and brought back to Spain early in the 1500’s; later the name was usurped by that Andean spud which was also called a batata, and our original potato had to add “sweet” to its name making it a retronym (like “acoustic guitar,” “landline phone” and “First World War”).

Gold Coin to Bit Coin

The myth of Midas is about the power of money – it magically transforms everything in its path and turns it into something denominated by money itself. The Midas story comes from the ancient lands of Phrygia and Lydia, in western modern day Turkey, close to the Island of Lesbos and to the Ionian Greek city Smyrna where Homer was born. It was the land of a people who fought Persians and Egyptians, who had an Indo-European language, and who were pioneering miners and metal workers. It was the land of Croesus, the King of Sardis, the richest man in the world whose wealth gave rise to the expression “as rich as Croesus.” Click HERE for a map.
Croesus lived in the 6th century B.C., a century dominated by powerful monarchs such as Cyrus the Great of Persia (3rd blog appearance), Tarquin the Proud of Rome, Nebuchadnezzar II of Babylon, Astyages of the Medes, Zedekiah the King of Judah, the Pharaoh Amasis II of Egypt, and Hamilcar I of Carthage. How could the richest man in the world at that time be a king from a backwater place such as Sardis in Lydia? Mystère.
In the time of Achilles and Agamemnon in the Eastern Mediterranean as in the rest of the world, goods and services were exchanged by barter. Shells, beads, cocoa beans and other means were also used to facilitate exchange, in particular to round things out when the goods bartered didn’t quite match up in value.
So here is the simple answer to our mystère: The Lydians introduced coinage to the world, a first in history, and thus invented money and its magic – likely sometime in the 7th century B.C. Technically, the system they created is known as commodity money, one where a valuable commodity such as gold or silver is minted into a standardized form.
Money has two financial functions: exchange (buying and selling) and storage (holding on to your wealth until you are ready to spend it or give it to someone else). So it has to be easy to exchange and it has to hold its value over time. The first Lydian coins were made from an alloy of gold and silver that was found in the area; later technology advances meant that by Croesus' time, coins of pure gold or pure silver could be struck and traded in the marketplace. All this facilitated commerce and led to an economic surge which made Croesus the enduring personification of wealth and riches. Money has since made its way into every nook and cranny of the world, restructuring work and society and creating some of the world's newest professions along the way; money has evolved dramatically from the era of King Croesus' gold coin to the present time and Satoshi Nakamoto's Bitcoin.
The Lydian invention was adopted by the city states of Greece. Athens and other cities created societies built around the agora, the market-place, as opposed to the imperial societies organized around the palace with economies based on in-kind tribute – taxes paid in sacks of grain, not coin. These Greek cities issued their own coins and created new forms of political organization such as democratic government. This is a civilization far removed from the warrior world of the Homeric poems. The agora model spread to the Greek cities of Southern Italy such as Crotone in Calabria known for Pythagoras and his Theorem, such as Syracuse in Sicily renowned for Archimedes and his “eureka moment.” Further north in Rome, coinage was introduced to facilitate trade with Magna Graecia as the Romans called the southern region of Italy.
The fall of Rome came about in part because the economy collapsed under the weight of maintaining an overextended and bloated military. Western Europe then fell into feudalism which was only ended with the growth of towns and cities in the late middle ages: new trading networks arose such as the Hanseatic League, the Fairs of the Champagne region (this is before the invention of bubbly), and the revival of banking in Italy that fueled the Renaissance. The banks used bills of exchange backed by gold. So this made the bill of exchange a kind of paper money, but it only circulated within the banking community.
Banks make money by charging interest on loans. The Italian banks did not technically charge interest because back then charging interest was the sin of usury in the eyes of the Catholic Church – as it still is in Sharia Law. Rather they side-stepped this prohibition with the bills of exchange and took advantage of exchange rates between local currencies – their network could transfer funds from London to Lyons, from Antwerp to Madrid, from Marseilles to Florence, … . The Christian prohibition against interest actually goes back to the Hebrew Bible, but Jewish law itself only prohibits charging interest on loans made to other Jews. Although the reformers Luther and Calvin both condemned charging interest, the Protestant world as well as the Church of Rome eventually made peace with this practice.
In China also, paper exchange notes appeared during the Tang Dynasty (618 A.D. -907 A.D.) and news of this wonder was brought back to Europe by Marco Polo; perhaps that is where the Italian bankers got the idea of replacing gold with paper. Interestingly, the Chinese abandoned paper entirely in the mid 15th century during a bout of inflation and only re-introduced it in recent times.
Paper money that is redeemable by a set quantity of a precious metal is an example of representative currency. In Europe, the first true representative paper currency was introduced in Sweden in 1661 by the Stockholm Bank. Sweden waged major military campaigns during the Thirty Years War which lasted until 1648 and then went on to a series of wars with Poland, Lithuania, Russia and Denmark that only ended in 1658. The Stockholm bank’s mission was to reset the economy after all these wars but it failed after a few years simply because it printed much more paper money than it could back up with gold.
The Bank of England was created in 1694 in order to deal with war debts. It issued paper notes for the British pound backed by gold and silver. The British Pound became the world’s leading exchange currency until the disastrous wars of the 20th century.
The trouble with gold and silver is that supply is limited. The Spanish had the greatest supply of gold and silver and so the Spanish peso coin was the most widespread and the most used. This was especially the case in the American colonies and out in the Pacific. In the English speaking colonies and ex-colonies, pesos were called “pieces of 8” since a peso was equivalent to 8 bits, the bit being a Spanish and Mexican coin that also circulated widely. The story of the term dollar itself begins in the Joachimsthal region of Bohemia where the coin called the thaler (Joachim being the father of Mary, thal being German for valley) was first minted in 1517 by the Hapsburg Empire; it was then imitated elsewhere in Europe including in Holland where the daler was introduced and in Scotland where a similar coin took the name dollar. The daler was the currency of New Amsterdam and was used in the colonies. The Spanish peso, for its part, became known as “the Spanish dollar.” After independence, in the U.S. the dollar became the official currency and dollar coins were minted – paper money (except for the nearly worthless continentals of the Revolutionary War) would not be introduced in the U.S. until the Civil War. En passant, it can be noted that the early U.S. dollar was still thought of as being worth 8 bits and so “2 bits” became a term for a quarter dollar. (Love those fractions, 2/8 = 1/4). The Spanish dollar also circulated in the Pacific region and the dollar became the name of the currencies of Hong Kong, Australia, New Zealand, … .
During the Civil War, the Lincoln government introduced paper currency – the "greenbacks" – to help finance the war. These notes were not redeemable in gold, and as the cost of the war mounted they traded at a steep discount to gold coin, debasing the dollar in the process.
The money supply is limited by the amount of gold a government has in its vaults; this can have the effect of obstructing commerce. In the period leading up to World War I, in order to get more money into the economy, farmers in the American mid-west and west militated for silver to back the dollar at the ratio of 16 ounces of silver to 1 ounce of gold. The Wizard of Oz has been read as an allegory of this movement, in which the Cowardly Lion is really William Jennings Bryan, the Scarecrow is the American farmer, the Tin Man is the American worker, and the Wizard is Mark Hanna, the Republican political kingmaker; Dorothy wears magic silver shoes, not ruby slippers, in the book. Unlike the book, in the real world, the free silver movement failed to achieve its objective.
Admittedly, this is a long story – but soldier on, Bitcoin is coming up soon.
Needless to say, World War I had a terrible effect on currencies, especially in Europe where the German mark succumbed to hyper-inflation and money was worth less than the cost of printing it. This would have tragic consequences.
In the U.S. during the Depression, the Roosevelt government made some very strong moves. It split the dollar into two – the domestic dollar and the international dollar; the domestic dollar was disconnected from gold completely while the dollar for international payments was still backed by gold but at $35 an ounce (up from $20.67), a significant devaluation. A paper currency, like Roosevelt's domestic dollar, which is not backed by a commodity is called a fiat currency – meaning, in effect, that the currency's value is declared by the issuer; prices then go up and down according to the supply of the currency and the demand for the currency. To increase the U.S. treasury's supply of gold, as part of this financial stratagem, the government ordered the confiscation of all privately held gold (bullion, coin, jewelry, … ) with some small exceptions for collectors and dealers. Today, it is hard to imagine people being called upon to bring in the gold in their possession, have it weighed and then be reimbursed in paper money by the ounce.
Naturally, World War II also wreaked havoc with currencies around the world. The Bretton Woods agreement made the post-war U.S. dollar the currency of reference and other currencies were evaluated vis-à-vis the international (gold backed) U.S. dollar. So even the venerable British Pound became a fiat currency in 1946.
The impact of war on currency was felt once again in 1971 when Richard Nixon, with the U.S. reeling from the cost of the war in Vietnam, disconnected the dollar completely from gold making the almighty dollar a full fiat currency.
Soapbox Moment: The impact of war on a nation and the nation’s money is a recurring theme. Even Croesus lost his kingdom when he was killed in a battle with the forces of Cyrus the Great. Joan Baez and Marlene Dietrich both sang “When will they ever learn?” beautifully in English and even more plaintively in German, but men simply do not learn. Louis XIV said it all on his death bed: the Sun King lamented how he had wrecked the French economy and declared “I loved war too much.” Maybe we should all read or re-read Lysistrata by Aristophanes, the caustic playwright of 5th century Athens.
Since World War II, the world of money has seen many innovations, notably credit cards, electronic bank transfers and the relegation of cash to a somewhat marginal role as the currency of the poor and, alas, the criminal. Coupons and airline miles are examples of another popular form of currency, known as virtual currency; this form of currency has actually been around for a long time – C.W. Post distributed a one-cent coupon with each purchase of Grape Nuts flakes as far back as 1895.
The most recent development in the history of money has been digital currency which is completely detached from coin, paper or even government – its most celebrated implementation being Bitcoin. A bitcoin has no intrinsic value; it is not like gold or silver or even paper notes backed by a precious metal. It is like a fiat currency but one without a central bank to tell us what it is worth. Logically, it should be worthless. But a bitcoin sells for thousands of dollars right now; it trades on markets much like mined gold. Why? Mystère.
A bitcoin’s value is determined by the marketplace: its worth is its value as a medium of exchange and its value as a storage medium for wealth. But Bitcoin has some powerful, innovative features that make it very useful both as a medium of exchange and as a medium of storage; its implementation is an impressive technological tour de force.
In 2008, a pdf file entitled "Bitcoin: A Peer-to-Peer Electronic Cash System," authored by one Satoshi Nakamoto, was published on-line; click HERE for a copy. "Satoshi Nakamoto" then is the handle of the founder or group of co-founders of the Bitcoin system (abbreviated BTC), which was launched in January 2009. BTC has four special features:
• Unlike the Federal Reserve System, unlike the Bank of England, it is decentralized. There is no central computer server or authority overseeing it. It employs the technology that Napster made famous, peer-to-peer networking; individual computers on the network communicate directly to one another without passing through a central post office; Bitcoin electronic transfers are not instantaneous but they are very, very fast compared to traditional bank transfers – SWIFT and all that.
• BTC guarantees a key property of money: the same bitcoin cannot be in two different accounts and an account cannot transfer the same bitcoin twice – this is the electronic version of "you can't spend the same dollar twice." This also makes it virtually impossible to counterfeit a bitcoin. This is achieved by means of a technical innovation called the blockchain, which is a concise and efficient way of keeping track of bitcoins' movements over time ("Bob sent Alice 100 bitcoins at noon GMT on 01/31/2018 … "); it is a distributed, public account book – a "ledger" as accountants like to say. A cryptographic fingerprinting technique called hashing is employed to chain the blocks together and keep the ledger tamper-evident (see the toy sketch just after this list). Blockchain technology itself has since been adopted by tech companies such as IBM and One Network Enterprises.
• BTC protects ownership and privacy. Transactions are actually public on the blockchain, but the parties appear only as addresses derived from their public keys, and only the holder of the matching private key can authorize a transfer. For this, in addition to hashing, BTC uses sophisticated public-key cryptography – the same family of techniques that distinguishes "https" from "http" in URL addresses and makes a site safe. Public-key cryptography is based on an interesting mathematical idea – a solvable problem that cannot be solved in our lifetimes even with the best algorithms running on the fastest computers; this is an example of the phenomenon mathematicians call "combinatorial explosion." The key holder has set the problem up himself, so his private key gives him a shortcut that no eavesdropper has. This use of cryptography is what makes Bitcoin a crypto-currency. The pseudonymity clearly makes the system attractive to parties with a need for privacy and makes it abhorrent to tax collectors and regulators.
• New bitcoins are created by a process called mining that in some ways emulates mining gold. A new bitcoin is born when a computer manages to be the first to solve a terribly boring mathematical problem at the expense of a great deal of time, computer cycles and electricity; in the process of mining, as a side-effect, the BTC miners perform some of the grunt work of verifying that a new transaction can be added safely to the blockchain. Also, in analogy with gold, there is a limit of 21 million on the number of unit bitcoins that can be mined. This limit is projected to be reached around the year 2140; as is to be expected, this schedule is based on a clever strategy, one that reduces the rewards for mining over time.
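None of this requires exotic machinery to appreciate. Below is a toy sketch in Python – emphatically not the real Bitcoin protocol or its wire format – of the two ideas flagged above: blocks chained together by hashes, and "mining" as a brute-force search for a hash that meets a difficulty target. The function names (hash_block, mine_block) and the JSON block layout are inventions for illustration only; real Bitcoin hashes a compact binary block header twice with SHA-256 and adjusts the target network-wide.

    import hashlib, json

    def hash_block(block):
        # Fingerprint the block's contents with SHA-256; any change to an old
        # transaction changes this fingerprint and breaks the chain downstream.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def mine_block(transactions, prev_hash, difficulty=4):
        # "Mining" here is just trying nonces until the hash starts with
        # `difficulty` hex zeros -- a stand-in for Bitcoin's numeric target.
        nonce = 0
        while True:
            block = {"tx": transactions, "prev": prev_hash, "nonce": nonce}
            h = hash_block(block)
            if h.startswith("0" * difficulty):
                return block, h
            nonce += 1

    # Each block records the hash of its predecessor, which is what makes the
    # ledger tamper-evident: edit the genesis transaction and every later
    # "prev" field stops matching.
    genesis, genesis_hash = mine_block(["coinbase -> Alice 50"], prev_hash="0" * 64)
    block1, block1_hash = mine_block(["Alice -> Bob 10"], prev_hash=genesis_hash)
    print(genesis_hash)
    print(block1_hash)

With the difficulty set to 4 the search takes a fraction of a second; the real network tunes its target so that, worldwide, a block is found only about every ten minutes, which is a big part of why mining burns so much electricity.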
The bitcoin can be divided into a very small unit called the satoshi (one hundred-millionth of a bitcoin). This means that $5 purchases, say, can be made. For example, using Gyft or eGifter or another such system, one can use bitcoin for purchases in participating stores or even meals in restaurants. In the end, it is supply and demand that infuse bitcoins with value, the demand created by usefulness to people. It is easy enough to get into the game; for example, you can click HERE for one of many sites that support BTC banking and the like.
The future of Bitcoin itself is not easy to predict. However, digital currency is here to stay; there are already many digital currency competitors (e.g. Ethereum, Ripple) and even governments are working on ways to use this technology for their own national currencies. For your part, you can download the Satoshi Nakamoto paper, slog your way through it, rent a garage and start your own company.

Louisiana

At the very beginning of the 1600's, explorers from England (Gosnold), Holland (Hudson and Block) and France (Champlain) nosed around Cape Cod and other places on the east coast of North America. Within a very short time, New England, New Netherland and New France were founded along with the English colony in Virginia; New Sweden followed soon after. Unlike the early Spanish conquests and settlements in the Americas, which were under the aegis of the King of Spain and legitimized by a papal bull, these new settlements were undertaken by private companies – the Massachusetts Bay Company, the Dutch West India Company, the Compagnie des Cent-Associés de la Nouvelle France, the Virginia Company, the South Sweden Company.

New Sweden was short lived and was taken over by New Netherland; New Netherland in turn was taken over by the Duke of York for the English crown.

In terms of sheer area, New France was the most impressive. To get an idea of its range, consider taking a motor trip through the U.S.A. going from French named city to French named city, a veritable Tour de France:

Presque Isle ME -> Montpelier VT -> Lake Placid NY -> Duquesne PA -> Vincennes IN -> Terre Haute IN -> Louisville KY -> Mobile AL -> New Orleans LA -> Petite Roche (now Little Rock) AR -> Laramie WY -> Coeur d’Alène ID -> Pierre SD -> St Paul MN -> Des Moines IA -> Joliet IL -> Detroit MI

That covers most of the U.S. and you have to add in Canada. It is interesting to note that even after the English takeover of New Netherland (NY, NJ, PA, DE) in 1664, the English territories on the North American mainland still basically came down to the original 13 colonies of the U.S.

The first question is how did the area known as the Louisiana Territory get carved out of New France? Mystère.

To look into this mystery, one must go back to the French and Indian War which in Europe is known as the Seven Years War. This war, which started in 1756, was a true world war in the modern sense of the term with fronts on five continents and with many countries involved. This was the war in which Washington and other Americans learned from the French and their Indian allies how to fight against the British army – avoid open field battles above all. This was the war which left England positioned to take control of India, this was the war that ended New France in North America: with the Treaty of Paris of 1763, all that was left to France in the new world was Haiti and two islands in the Caribbean (Guadeloupe and Martinique) and a pair of islands off the Grand Banks (St. Pierre and Miquelon). The English took control of Canada and all of New France east of the Mississippi. Wait a second – what happened to New France west of the Mississippi? Here the French resorted to a device they would use again to determine the fate of this territory – the secret treaty.

In 1762, realizing all was lost in North America but still in control of the western part of New France, the French king, Louis XV (as in the furniture), transferred sovereignty over the Louisiana Territory to Spain in a secret pact, the Treaty of Fontainebleau. (For a map, click HERE) The British were not informed of this arrangement when signing the Treaty of Paris in 1763, apparently believing that the area would remain under French control. On the other hand, in this 1763 treaty, the Spanish ceded the Florida Territory to the British. This was imperial real estate wheeling-and-dealing at an unparalleled scale; but to the Europeans the key element of this war was the Austrian attempt to recover Silesia from the Prussians (it failed); today Silesia is part of Poland.

How did the Louisiana Territory get from being part of Spain to being part of the U.S.? Again mystère.

The Spanish period in the Louisiana Territory was marked by struggles over Native American slavery and African slavery. With the Treaty of Paris of 1783, which ended the American War of Independence, the Florida Territory, which included the southern ends of Alabama and Mississippi, was returned to Spain by the British. For relations between the U.S. and Spain, the important issue became free navigation on the Mississippi River. Claims and counterclaims were made for decades. Eventually the Americans secured the right of navigation down the Mississippi. So goods could be freely shipped on the Father of Waters on barges and river boats and the cargo could still pass through New Orleans before being moved to ships for transport to further destinations. This arrangement was formalized by a treaty in 1795, known as Pinckney's Treaty, but one often honored by the Spanish governor in the breach.

The plot thickened in 1800 when France and Spain signed another secret treaty, the Third Treaty of San Ildefonso. This transferred control of the Louisiana Territory back to the French, i.e. to Napoleon.

Switching to the historical present for a paragraph, Napoleon's goal is to re-establish New France in New Orleans and the rest of the Louisiana Territory. This ambition so frightens President Thomas Jefferson that, in a letter to Robert Livingston, the ambassador to France, he expresses the fear that the U.S. will have to seek British protection if Napoleon does in fact take over New Orleans:

“The day that France takes possession of New Orleans…we must marry ourselves to the British fleet and nation.”

This from the author of the Declaration of Independence!! So he instructs Livingston to try to purchase New Orleans and the surrounding area. This letter is dated April 18, 1802. Soon he sends James Monroe, a former ambassador to France who has just finished his term as Governor of Virginia, to work with Livingston on the negotiations.

The staging area for Napoleon's scheme was to be Haiti. However, Haiti was the scene of a successful rebellion against French rule in the 1790's, led by Toussaint Louverture, which brought about the abolition of slavery in Haiti and on the entire island of Hispaniola by 1801. Napoleon's response was to send a force of 31,000 men to retake control. At first, this army managed to defeat the rebels under Louverture, to take him prisoner, and to re-establish slavery. Soon, however, the army was out-maneuvered by the skillful military tactics of the Haitians and it was decimated by yellow fever; finally, at the Battle of Vertières in 1803, the French force was defeated by an army under Jean-Jacques Dessalines, Louverture's principal lieutenant.

With the defeat of the French in Haiti at the hands of an army of people of color, the negotiations in Paris over transportation in New Orleans turned suddenly into a deal for the whole of the Louisiana Territory – for $15 million. The Americans moved swiftly, possibly illegally and unconstitutionally, secured additional funding from the Barings Bank in London and overcame loud protests at home. The Louisiana Territory was formally ceded to the U.S. on Dec. 20, 1803.

Some numbers: the price of the Louisiana Purchase comes to less than 3 cents an acre; adjusted for inflation, this is $58 an acre today – a good investment indeed made by a man, Thomas Jefferson, whose own finances were always in disarray.

There is something here that is reminiscent of the novel Catch 22 and the machinations of the character Milo Minderbinder (Jon Voight in the movie): Barings Bank profited from this large transaction by providing funds for the Napoleonic regime at a point in time when England was once more at war with France! What makes the Barings Bank stunt more egregious is that Napoleon was planning to use the money for an invasion of England (which never did take place). But, war or no war, money was being made.

The story doesn't quite end there. The British were not happy with these secret treaties and the American purchase of the Louisiana Territory, but they were too occupied by the Napoleonic Wars to act. However, their hand was forced with the outbreak of the War of 1812. At their most ambitious, the British war aims were to restore the part of New France in the U.S. that is east of the Mississippi to Canada and to gain control of the Louisiana Territory to the west of the Mississippi. To gain control of the area in question east of the Mississippi, forces from Canada joined with Tecumseh and other Native Americans; this strategy failed. With Napoleon's exile to Elba, a British force was sent to attack New Orleans in December 1814 and to gain control of the Louisiana Territory. This led to the famous Battle of New Orleans, to the victory which made Andrew Jackson a national figure, and to that popular song by Johnny Horton. So this strategy failed too. It is only at this point that American sovereignty over the Louisiana Territory became unquestioned. It can be pointed out that the Treaty of Ghent to end this war had been signed before the battle; however, it was not ratified by the Senate until a full month after the battle, and who knows what a vengeful and batty George III might have done had the battle gone in his favor. It can be said that it was only with the conclusion of this war that the very existence of the U.S. and its sovereignty over vast territories were no longer threatened by European powers. The Monroe Doctrine soon followed.

The Haitians emerge as the heroes of this story. Their skill and valor forced the French to offer the entire Louisiana Territory to the Americans at a bargain price and theirs was the second nation in the Americas to declare its independence from its European overlords – January 1, 1804. However, when Haiti declared its independence, Jefferson and the Congress refused to recognize their fellow republic and imposed a trade embargo, because they feared the Haitian example could lead to a slave revolt here. Since then, French and American interference in the nation’s political life have occurred repeatedly, rarely with benign intentions. And the treatment of Haitian immigrants in the U.S. today hardly reflects any debt of gratitude this nation might have.

The Haitian struggle against the French is the stuff of a Hollywood movie, what with heroic figures like Louverture, Dessalines and others, political intrigues, guerrilla campaigns, open-field battles, defeats and victories, and finally a new nation. Hollywood has never taken this on (although Danny Glover appears to be working on a project), but in the last decade there have been a French TV mini-series (since repackaged as a feature film) and other TV shows about this period in Haitian history.

The Barings Bank continued its financially successful ways. At one point, the Duc de Richelieu called it "the sixth great European power"; at another point, it actually helped the Americans carry out deals in Europe during the War of 1812, again proving that banks can be above laws and scruples. However, its comeuppance finally came in 1995. It was then the oldest investment bank in the City of London and banker to the Queen, but the wildly speculative trades of a star trader in its Singapore office forced the bank into failure; it was sold off to the Dutch bank ING for £1. The villain of the piece, Nick Leeson, was played by Ewan McGregor in the movie Rogue Trader.

In the end, Napoleon overplayed his hand in dealing with Spain. In 1808, he forced the abdication of the Spanish king Carlos IV and installed his own brother as "King of Spain and the Indies" in Madrid. This led to a long guerrilla war in Spain which relentlessly wore down the troops of the Grande Armée and which historians consider to have been the beginning of the end for the Little Corporal.

 

 

 

Babylon

The modern world uses a number system built around 10, the decimal system. Things are counted and measured in tens and powers of ten. Thus we have 10 years in a decade, 100 years in a century and 1000 years in a millennium. On the other hand, the number 60 pops up in some interesting places; most notably, there are 60 minutes in an hour and 60 seconds in a minute.  The only challenge to this setup came during the decimal-crazed French Revolution which introduced a system with 10 hours in the day, 100 minutes in the hour and 100 seconds in the minute. Their decimal metric system (meters and kilos) prevailed but the decimal time system, despite its elegance, was soon abandoned. But why have there been 60 minutes in an hour and 60 seconds in a minute in a numerical world dominated by the number 10? And that for thousands of years. Mystère?

The short answer is Babylon. In the ancient world, Babylon was renowned as a center of learning, culture, religion, commerce and riches. It gave us the code of Hammurabi and the phrase “an eye for an eye”; it was conquered by Cyrus the Great and Alexander the Great. The Israelites endured captivity there and two books of the Hebrew Bible (Ezekiel and Daniel) were written there. The Christian Bible warns us of the temptations of the Whore of Babylon. It was the Babylonian system of telling time that spread west and became the standard in the Mediterranean world: 24 hours in the day, 60 minutes in the hour, 60 seconds in the minute. So far, so good; but still why did the Babylonians have 60 minutes in an hour?  Mystère within a mystère.

Here the answer is “fractions,” a very difficult topic that throws a lot of kids in school but one that will not throw the intrepid sleuths getting to the solution of this mystery. The good news is that one-half of ten is 5 and that one-fifth of ten is 2; the bad news is that one-fourth of ten, one-third of ten and one-sixth of ten are all improper fractions: 5/2, 10/3, 5/3; same for three-fourths, two-thirds and five-sixths. A number system based on 60 is called a “sexagesimal” system. If you have a sexagesimal number system, you need different notations for the numbers from 1 through 59 rather than just 1 to 9, but it makes fractions much easier to work with – the numbers 1,2,3,4,5,6 are all divisors of 60 and so one-half, one-quarter, one-third, one-fifth, and one-sixth of 60 are whole quantities, nothing improper about them. This also applies to two-thirds, three-quarters and other common fractions.
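For the skeptical, the divisor counting is easy to check. Here is a quick sketch in Python (the helper divisors is an invention for this illustration, nothing standard):

    from fractions import Fraction

    def divisors(n):
        # All whole-number divisors of n.
        return [d for d in range(1, n + 1) if n % d == 0]

    for base in (10, 60):
        # Which of 1/2, 1/3, 1/4, 1/5, 1/6 of the base come out whole?
        whole = {k: base // k for k in range(2, 7) if Fraction(base, k).denominator == 1}
        print(base, divisors(base), whole)

    # base 10 -> only 1/2 and 1/5 give whole numbers (5 and 2)
    # base 60 -> 1/2, 1/3, 1/4, 1/5 and 1/6 all give whole numbers (30, 20, 15, 12, 10)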

This practice of using a base different from 10 for a number system is alive-and-well in the computer era. For example, the base 16 hexadecimal system is used for the addresses of memory locations rather than the verbose binary number system that computers use for numerical computation. The hexadecimal system which uses 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F as its sixteen “digits,” is also used to describe colors on web-pages and you might come across something like <body bgcolor = “#FF0000”> that is instructing the page to make the background color red.
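If you want to convince yourself of what that color code means, a short check in Python will do it (nothing here is special to any library; it is just base-16 arithmetic):

    # "#FF0000" packs three channels, two hex digits each, each pair running 00..FF (0..255).
    red, green, blue = (int("FF0000"[i:i + 2], 16) for i in (0, 2, 4))
    print(red, green, blue)         # 255 0 0  -> full red, no green, no blue
    print(int("FF", 16), hex(255))  # 255 0xff -> the same number in both notations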

Having a good system for fractions is especially important if you are measuring quantities of products or land area: thus a quarter of a pound of ham, two-thirds of an acre, … . The Babylonians of the period when the Book of Daniel was written did not invent the sexagesimal system out of whole cloth; rather they inherited it from the great Sumerian civilizations that preceded them. At the birth of modern civilization in Mesopotamia, Sumerian scribes introduced cuneiform writing (wedges and clay tablets) and then sexagesimal numbers for keeping track of accounts, fractions being important in transactions between merchants and buyers. At first, the notations for these systems would differ somewhat from city to city and also would differ depending on the thing being quantified. In the city of Uruk, 6000 years ago, there were at least twelve different sexagesimal number systems in use with differing names for the numbers from 1 to 60, each for working with different items: barley, malt, land, wheat, slaves and animals, beer, … . It is as though they were using French for one item and German for another; thus "cinq goats" and "fünf tomatoes." What this illustrates is that it requires an insight to realize that the five in "cinq goats" is the same as the five in "fünf tomatoes." Eventually, these systems became standardized and by Babylonian times only one system was in use.

The Sumerians gave us the saga The Epic of Gilgamesh, whose account of the great flood is a brilliant forerunner of the version in Genesis. In fact, so renowned were the Sumerian cities that the Hebrew Bible tells us the patriarch Abraham (Richard Harris in the TV movie Abraham) came from the city of Ur, a city also known as Ur of the Chaldees; at God's bidding, he left Ur and its pagan gods and moved west with his family and retinue to the Land of Canaan. The link to Ur serves as an homage on the part of the Israelite scribes to the Sumerians, one designed to ascribe illustrious origins to the founder of the three Abrahamic religions.

In addition to being pioneers in literature, agriculture, time-telling, accounting, etc., the Sumerians pushed beyond the boundaries of arithmetic into more abstract mathematics. They developed methods we still use today for solving quadratic equations (x² + bx = c) and their methods and techniques were imported by mathematicians of the Greco-Roman world. Moreover, very early on, the Sumerian scribes were familiar with the mighty Pythagorean Theorem: an ancient clay tablet, known as "YBC 7289," shows that these scribes knew this famous and fundamental theorem at least two thousand years before Pythagoras. For a picture of the tablet, click HERE. The image shows a square with each side of length 1 and with a diagonal of length equal to the square root of 2, written in sexagesimal with cuneiform wedges (so this satisfies the Pythagorean formula a² + b² = c² in its most simple form 1 + 1 = 2). In his Elements, Euclid gives a proof of the Pythagorean Theorem based on axioms and postulates. We do not know whether the Sumerians thought of this remarkable discovery as a kind of theorem or as an empirical observation or as something intuitively clear or just a clever aperçu or something else entirely. This tablet was on exhibit in NYC at the Institute for the Study of the Ancient World in late 2010; for the mathematically sensitive, viewing the tablet is an epiphany.
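The sexagesimal digits usually read off YBC 7289 for the diagonal are 1;24,51,10 – that is, 1 + 24/60 + 51/60² + 10/60³. A quick check in Python shows just how good that value is (the digits are the standard published reading of the tablet, not something recorded in this post):

    import math

    digits = [1, 24, 51, 10]                 # 1;24,51,10 in sexagesimal
    approx = sum(d / 60 ** i for i, d in enumerate(digits))
    print(approx)                            # 1.4142129629...
    print(math.sqrt(2))                      # 1.4142135623...
    print(abs(approx - math.sqrt(2)))        # off by about 6 parts in 10 million

Not bad for wedges pressed into wet clay.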

The Sumerians were also great astronomers – they invented the constellations we use today and the images associated with them – and their observations and techniques were used by geometers and astronomers from the Greco-Roman world such as Eratosthenes and Ptolemy. Indeed, the Babylonian practice of using sexagesimal numbers has persisted in geography and astronomy; so to this day, latitude and longitude are measured in degrees, minutes and seconds: thus a degree of north latitude is divided into 60 minutes and each minute is divided into 60 seconds. James Polk was elected the 11th president of the United States on the platform “Fifty-Four Forty or Fight,” meaning that he would take on the British and push the Oregon territory border with Canada up to 54°40′. (Polk wisely settled for 49°00′.)
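Since the sexagesimal habit survives in exactly this corner of life, a two-line conversion in Python (with Polk’s slogan as the test case) shows how degrees and minutes reduce to a decimal figure:

    # Convert a base-60 angle (degrees, minutes, seconds) to decimal degrees.
    def dms_to_decimal(degrees, minutes=0, seconds=0):
        return degrees + minutes / 60 + seconds / 3600

    print(dms_to_decimal(54, 40))   # 54.666...  ("Fifty-Four Forty")
    print(dms_to_decimal(49, 0))    # 49.0       (the border Polk settled for)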

The Sumerians were also great astrologers and soothsayers and it was they who invented the Zodiac that we still use today. If we think of the earth as the center of the universe, then of course it takes one year for the sun to revolve around the earth; as it revolves it follows a path across the constellations, each of which sits in an area called its “house.” As it leaves the constellation Leo and rises in front of the constellation Virgo, the sun is entering the house of Virgo. According to the horoscopes in this morning’s newspaper, the sun is in the house of Virgo from Aug 23 until Sept 22.

Recently, though, NASA scientists have noted that the Sumerian Zodiac we employ is based on observations of the relative movements of the sun, earth, planets and stars made a few thousand years ago. Things have changed since – the tilt of the earth’s axis is not the same, measurements have improved, calendars have been updated, etc.; the ad hoc Sumerian solution to keeping the number of signs the same as the number of months in the year no longer quite works. So the constellations are just not in the locations in the heavens prescribed by the ancients on the same days as in the current calendar; the sun might just not be in the house you think it’s in – if you were a Capricorn born in early January, you are now a Sagittarius. So far, the psychic world has pretty much ignored the implications of all this – people are just too attached to their signs.

It must be admitted that the NASA people did get carried away with the numbers and the science. They stipulated that there should actually be 13 signs, the new one being Ophiuchus, the Serpent Bearer (cf. Asclepius, the Greek god of medicine, and his snake-entwined staff); this is a constellation that was known to the Sumerians and Babylonians but one which they finessed out of the Zodiac to keep the number of signs at 12. Click HERE for a picture. However, it is a sign of the Zodiac of Vedic Astrology and, according to contemporary astrologer Tali Edut, “It’s a pretty sexy sign to be!”

But why insist on 12 months and 12 signs, you may ask. Again mystère. This time the solution lies in ancient Egypt. The Egyptians started with a calendar based on 12 lunar months of 28 days each, then moved to 12 months of 30 days each with 5 extra days inserted at the end of the year. This sidestepped the problem, encountered world-wide, of reconciling the solar and lunar years (the leap year was later added). And this 12 month year took root. We also owe the 24 hour day to the Egyptians, who divided the day into 12 day hours and 12 night hours; at the outset, the length of an hour would vary with the season and the time of the day so as to ensure that there were 12 hours of daylight and 12 hours of nighttime. The need for simplicity eventually prevailed and each hour became one-twenty-fourth of a day.

The number 12 is also handy when it comes to fractions and to this day it is the basis for many measuring systems: 12 donuts in a dozen, 12 dozen in a gross, 12 inches in a foot, 12 troy ounces in a troy pound. One that recently bit the dust, though, is 12 pence in a shilling; maybe Brexit will bring it back and make England Great Again.

Battle Creek

Battle Creek is a city of some 50,000 inhabitants in southwestern Michigan situated at the point where the Battle Creek River flows into the Kalamazoo River. The name Battle Creek traces back, according to local lore, to a skirmish between U.S. soldiers and Native Americans in the winter of 1823-1824.
At the beginning of the 20th century, Battle Creek was the Silicon Valley of the time: entrepreneurs, investors and workers poured in, companies were started, fortunes were made. As the local Daily Moon observed in 1902, “Today Battle Creek is more like a boom town in the oil region than a staid respectable milling center.” A new industry had taken over the town: “There are … large establishments some running day and night” and all were churning out the new product that launched this gold rush.
Even before the boom, Battle Creek was something of a manufacturing center producing such things as agricultural equipment and newspaper printing presses. But this was different. Battle Creek was now known as “Health City.” So what was this new miracle product? Mystère.
Well, it was dry breakfast cereal, corn flakes and all that. By 1902, more than 40 companies were making granulated or flaked cereal products with names like Vim, Korn-Krisp, Zest, X-Cel-O, Per-Fo, Flak-ota, Corn-O-Plenty, Malt-Too; each labeled as the perfect food.
And how did Battle Creek come to be the center of this cereal boom? Again mystère.
For this, things turn Biblical and one has to go back to the Book of Daniel and the Book of Revelation; the first is a prophetic book of the Hebrew Bible, the second a prophetic book of the New Testament. Together, the books offer clues as to the year when the Second Coming of Christ will take place. Belief in the imminence of this time is known as adventism. No less a figure than Isaac Newton devoted years to working on this. Newton came to the conclusion that it would be in the year 2060, although sometimes he thought it might be 2034 instead; superscientist though he was, Newton could have made a mistake – he also invested heavily in the South Sea Bubble.
The First Great Awakening was a Christian revival movement that swept England and the colonies in the 1700’s; among its leaders were John Wesley (of Methodism) and Jonathan Edwards (of “Sinners in the Hands of an Angry God”). During the Second Great Awakening in the first part of the 19th century, in the U.S. the adventist William Miller declared the year of the Second Coming to be 1844. When that passed uneventfully (known as the Great Disappointment), the Millerite movement splintered but some regrouped and soldiered on. One such group became the Jehovah’s Witnesses. Another became the Seventh Day Adventists.
The “Seventh Day” refers to the fact that these Adventists moved the Sabbath back to its original day, the last day of the week, the day on which the Lord rested after the Creation, the day observed by the Biblical Israelites and by Jews today, viz., Saturday. The early Gentile Christians, after breaking off from their Jewish roots, moved their Sabbath to Sunday because that was the day of rest for the non-Jewish population of the Roman Empire; they likely also wanted to distance themselves from Jews after the revolts of 66-73 AD and 132-136 AD, uprisings against the power of Rome.
The Seventh Day Adventists do not claim to know the year of the Second Coming but do live in anticipation of it. Article 24 of their statement of faith concludes with
  “The almost complete fulfillment of most lines of prophecy, together with the present condition of the world, indicates that Christ’s coming is imminent. The time of that event has not been revealed, and we are therefore exhorted to be ready at all times.”
In 1863, in Battle Creek, the church was formally established and had 3,500 members at the outset. So the plot thickens: “Battle Creek” did you say?
Their system of beliefs and practices extends beyond adventism. They are basically pacifists and eschew combat. Unlike the Quakers, they leave the decision to serve in the Armed Forces to the individual and, typically, those who are in the military serve as medics or in other non-combatant roles. They are also vegetarians; they proscribe alcohol and tobacco – coffee and tea as well; they consider the health of the body to be necessary for the health of the spirit.
It is their interest in health that leads to a solution to our mystery. As early as 1863, Adventist prophetess Ellen White had visions about health and diet. She envisaged a “water cure and vegetation institution where a properly balanced, God-fearing course of treatments could be made available not only to Adventists, but to the public generally.” Among her supporters in this endeavor were her husband James White and John and Ann Kellogg. The plot thickens again: “Kellogg” did you say? The Kelloggs were Adventists to the point that they did not believe that their son John Harvey or their other children needed a formal education because of the imminence of the Second Coming. In 1866, the Western Health Reform Institute was opened in Battle Creek realizing Ellen White’s vision. By then the Whites had taken an interest in the self-taught John Harvey Kellogg which eventually led to their sending him for medical training with a view to having a medical doctor at the Institute. He finished his training at the NYU Medical College at Bellevue Hospital in New York City in 1875. In 1876, John Harvey Kellogg became the director of the Institute and he would lead this institution until his death in 1943. In 1878, he was joined by his younger brother Will Keith Kellogg who worked on the business end of things.
John Harvey Kellogg threw himself into his work. He quickly changed the name to the Battle Creek Medical Surgical Sanitarium; he coined the term sanitarium to distinguish it from sanatorium and to describe a place where one would learn to stay healthy. He described the Sanitarium’s system as “a composite physiologic method comprising hydrotherapy, phototherapy, thermotherapy, electrotherapy, mechanotherapy, dietetics, physical culture, cold-air cure, and health training.” Physical exercise was thus an important component of the system; somewhat inconsistently, sexual abstinence was also strongly encouraged as part of the program.
Kellogg’s methods could be daring, if not extreme; what web sites most remember him for is his enema machine that involved yogurt as well as water and ingestion through the mouth as well as through the anus.
Through all this, vegetarianism remained a principal component of the program. The Kelloggs continually experimented with ways of making vegetarian foods more palatable and more effective in achieving the goals of the Sanitarium. In 1894, serendipity struck: they were working to produce sheets of wheat when they left some cooked wheat unattended; when they came back they continued processing it but produced flakes of wheat instead of sheets – these flakes could be toasted and served to guests at the Sanitarium. They filed for a patent in 1895 and it was issued in 1896. For an advertisement for Corn Flakes from 1919, click HERE.
John Harvey Kellogg showed this new process to patients at the Sanitarium. One guest, C. W. Post, grasped its commercial potential and started his own company, the company that became General Foods – the Apple Computer of the processed food industry, with an industrial name modeled after General Electric and General Motors. (The name is gone today; the company eventually merged with Kraft.) See what we mean by Silicon Valley level rewards. Post’s first cereal product, in 1897, was Grape-Nuts – a name in which neither word is accurate, as the ingredients are wheat and barley; his corn flakes, launched as Elijah’s Manna, were later rechristened Post Toasties. But the Gold Rush was on.
In 1906, Will Keith Kellogg founded the Battle Creek Toasted Corn Flake Company; this company was later named Kellogg’s and, to this day, it is headquartered in Battle Creek and continues to bless the world with its corn flakes and other dry breakfast cereals.
Through all this, the Sanitarium continued on and, in fact, prospered. Its renown spread very far and very wide and a remarkable set of patients spent time there. This list includes William Howard Taft, George Bernard Shaw, Roald Amundsen and Sojourner Truth.
Perhaps the simplest proof of the efficacy of the Kelloggs’ methods is that both brothers lived past 90. For another proof, let us go from the 19th and 20th centuries to the 21st century and let us move from Battle Creek, Michigan to Loma Linda, California.
Loma Linda is the only place in the U.S. that made it onto the list of Blue Zones – places in the world where people have exceptionally long life spans. (The other places are in Sardinia, Okinawa, Greece and Costa Rica). The reason is that this California area has a large population of Seventh Day Adventists and they live a decade longer than the rest of us. They follow the principles of their early coreligionists: a vegetarian diet, physical exercise, no alcohol, no tobacco – to which is added that sense of community and purpose in life that their shared special beliefs bring to them.
Playful Postscript:
For the unserious among us, much of the goings on at the Sanitarium could be the stuff of high comedy. In fact, it inspired the author T. Coraghessan Boyle to write a somewhat zany novel The Road to Wellville which was later made into a movie. The characters have fictitious names except for Dr. John Harvey Kellogg (played by Anthony Hopkins in the movie). The title itself comes from the booklet written by C.W. Post which used to be given out with each box of Grape-Nuts Flakes.
Tragic Postscript:
In the 1930’s a group broke off from the Seventh Day Adventist church and eventually became known as the Davidian Seventh Day Adventists. In turn, in the 1950’s there was a split among them and a new group, the Branch Davidians, was formed; so we are at two removes from the church founded in Battle Creek. In 1982, Vernon Wayne Howell, then 22 years old, moved to Waco, Texas, and joined the Branch Davidians there; he subsequently changed his name to David Koresh: Koresh is the Hebrew ( כֹּרֶשׁ ) for Cyrus (the Great), the Persian king who is referenced in the Book of Daniel and elsewhere in the Hebrew Bible; Cyrus is the only non-Jew to be named a Messiah (a person chosen by God for a mission) in the Hebrew Bible (Isaiah 45:1) and it was he who liberated the Jews from the Babylonian Captivity. As David Koresh, Howell eventually took over the Waco Branch Davidian compound and turned it into a nightmarish cult. There followed the horrifying assault in 1993 and the deaths of so many. To make an unimaginable horror yet worse, this assault has become a rallying cry for paranoid militia people – its second anniversary was a motivation for Timothy McVeigh and Terry Nichols when they planned the Oklahoma City bombing.

Hemingway

Ernest Hemingway famously favored Anglo-Saxon words and phrases over Latin or French ones: thus “tell” and not “inform.”  Scholars, critics and Nobel Prize committees have analyzed passages such as
“What do you have to eat?” the boy asked.
“A pot of yellow rice with fish. Do you want some.”
“No, I will eat at home; do you want me to make the fire?”
“No, I will make it later on, or I may eat the rice cold.”
Hardly a word from French or Latin; not a single subordinate clause, no indirect discourse, no adverbs, … .
Some trace this aspect of his style to Hemingway’s association with Gertrude Stein, Ezra Pound, Djuna Barnes and modernist writers. Others trace it to his first job as a cub reporter for the Kansas City Star and the newspaper’s style guide:  “Use short sentences. Use short first paragraphs. Use vigorous English. Be positive, not negative.”
However that may be, Hemingway was true to his code and he set a standard for American writing. Still, it is often impossible to say whether a word or phrase is Anglo-Saxon or not. For example, the word sound has four distinct meanings, each derived from a different language: “sound” as in “Long Island Sound” (Norse), as in “of sound mind” (German), as in “sound the depths of the sea” (French), as in “sound of my voice” (Latin). To add to the confusion, the fell in “one fell swoop” is from the Norman French (same root as felon) though nothing sounds more Anglo-Saxon than fell. Among the synonyms pigeon and dove, it is the former that is French and the latter that is Anglo-Saxon. Nothing sounds more Germanic than skiff but it comes from the French esquif. So where can you find 100% Anglo-Saxon words lurking about? Mystère.
Trades are a good source of Anglo-Saxon words — baker, miller, driver, smith, shoe maker, sawyer, wainwright, wheelwright, millwright, shipwright; playwright doesn’t count, it’s a playful coinage from the early 17th century introduced by Ben Jonson. Barnyard animals also tend to have old English origins  — cow, horse, sheep, goat, chicken, lamb, … Body parts too — foot, arm, leg, eye, ear, nose, throat, head, … .
Professions tend to have Latin or French names — doctor, dentist, professor, scientist, accountant, … ; teacher, lawyer, writer and singer are exceptions though.
Military terms are not a good source at all; they are relentlessly French — general, colonel, lieutenant, sergeant, corporal, private, magazine, platoon, regiment, bivouac, caisson, soldier, army, admiral, ensign, marine and on and on.
However, there is a rich trove of Anglo-Saxon words to be found in the calendars of the Anglican and Catholic Churches. The 40 days of Lent begin on Ash Wednesday and the last day before Lent is Mardi Gras. Lent and Ash Wednesday are Anglo-Saxon in origin but Mardi Gras is a French term. Literally, Mardi Gras means Fat Tuesday, and the Tuesday before Lent is now often called Fat Tuesday in the U.S. But there is a legitimate, traditional Anglo-Saxon name for it, namely Shrove Tuesday. Here shrove refers to the sacrament of Confession and the need to have a clean slate going into Lent; the word shrove is derived from the verb shrive which means “to hear confession.” The expression “short shrift” is also derived from this root: a short confession was given to prisoners about to be executed. Another genuinely English version of Mardi Gras is Pancake Tuesday, which, like Mardi Gras, captures the fact that the faithful need to fatten up before the fasting of Lent. Raised Episcopalian and one with a strong attraction to the ritual and pageantry of Catholicism, Hemingway was in fact listed as a Catholic for his 2nd marriage, the one to Pauline Pfeiffer. Still Hemingway never got to use the high church Anglo-Saxon term Shrove Tuesday (or even Pancake Tuesday) in his writings. On the other hand, Hemingway always wrote “Holy Ghost” and never would have cottoned to the recent shift to the Latinate “Holy Spirit.”
Staying with high church Christianity — Lent goes on for forty days until Easter Sunday; the period of Eastertide begins with Palm Sunday which celebrates Christ’s triumphant entry into Jerusalem for the week of Passover. The Last Supper was a Passover Seder dinner. Thus, in Italian, for example, the word for Passover and the word for Easter are the same; if the context is not clear, one can distinguish them as “Pasqua” and “Pasqua Ebraica.” Something similar applies in French and Spanish. So how did English usage come to be so different? Mystère.
Simply put Easter Sunday is named for a Pagan goddess. In the Middle Ages, the author of the first history of the English people, the Venerable Bede, wrote that the Christians in England adopted the name the local Pagans were giving to a holiday in honor of their goddess of the spring Ēastre – you cannot get more Anglo-Saxon than that.
In addition to Eastertide, the Anglo-Saxon root tide is also used for other Christian holiday periods – Whitsuntide (Pentecost), Yuletide (Christmas), … . This venerable meaning of tide as a period of time is also the one that figures in the expression
    Neither tide nor time waits for no man.
Nothing to do with maritime tides whether high, low or neap. Hemingway would likely have avoided this phrase, though, because of its awkward, archaic double negative.
Sometimes French words serve to protect us from the brutal frankness of the Anglo-Saxon. Classic examples of this are beef and pork which are derived from the French boeuf and porc. When studying a German menu, it is always disconcerting to have to choose between “cow flesh” and “pig flesh.” Even Hemingway would have to agree.
An area where Hemingway is on more solid ground is that of grammar. The structure of English is basically Germanic. The Norman period introduced a large vocabulary of French and Latin words, but French had very little influence on English grammar. The basic reason for this is that the Norman rulers used Latin as the language for the administration of the country; thus the Domesday Book and the Magna Carta were written in Latin. Since French was the language of the court, however, English legal vocabulary to this day employs multiple French words and phrases such as the splendid “Oyez, Oyez.”
While the grammar of English can be classified as Germanic, there are some key structural elements that are Celtic in origin. An important example is the “useless do” as in “do you have a pen?” English is one of the few languages that inserts an extra verb, in this case do, to formulate a question; typically in other languages, one says “have you a pen?” or “you have a pen?” with a questioning tone of voice. Another Celtic import is the progressive tense as in “I am going to the store,” which can express a mild future tense or a description of current activity. This progressive tense is an especial challenge for students in ESL courses.
Danish invaders such as the Jutes have also contributed to English structure – for example, there is the remarkably simple way English has of conjugating verbs: contrast the monotony of  “I love, you love, it loves, we love, you love, they love” with other languages.
When languages collide like this, one almost invariably emerges as the “winner” in terms of structure with the others’ contributing varying amounts of vocabulary and with their speakers’ influencing the pronunciation and music of the language. The English language that emerged finally at the time of Chaucer was at its base Anglo-Saxon but it had structural adaptations from Celtic and Norse languages as well as a vast vocabulary imported whole from Latin and French. This new language appeared suddenly on the scene in the sense that during all this time it wasn’t a language with a literature like the other languages of Britain – Old English, Latin, French, Welsh, Cornish, … ; so it just simmered for centuries but eventually the actual spoken language of the diverse population forced its way to the surface.
Hemingway himself spent many years in places where romance languages are spoken – Paris, Madrid, Havana, … . Maybe this helped insulate him from the galloping changes in American and English speech and writing and let him ply his craft in relative peace. How else could he have ended a tragic wartime love story with a sentence so perfect but so matter-of-fact as “After a while I went out and left the hospital and walked back to the hotel in the rain.”

Ireland

Ireland is known as the Land of Saints and Scholars, as the Emerald Isle, as the Old Sod … . For all its faults, it is the only place in Europe that has never invaded a neighboring or distant land militarily. That doesn’t mean that the Irish weren’t always making war on one another. Still, Ireland itself has been invaded multiple times. But by whom and why? Mystère.
Part I: From Romans to Normans
First we must discuss the invasion that failed to take place, invasion by the Romans. Julius Caesar invaded Britain (Britannia to the Romans, Land of the Celts) not once but twice (55 B.C. and 54 B.C.). However, though the last Roman legion didn’t leave Britain until 404 A.D., the Romans never undertook an invasion of Ireland. Why not? Mystère. Well, their name for it, Hibernia, meant “land of winter” which was perhaps reason enough to stay away; cf. the verb “hibernate.” This meant that at the time of the fall of the Roman Empire, Ireland had not been Romanized and Hellenized the way Gaul and other Celtic lands west of the Rhine or south of the Danube had been.
The first invasion from the East was by Christian missionaries from Britain who came with the purpose of converting the Irish – no mystery there – and by the mid 4th century, the Church had a foothold in Ireland. A major contribution of the Church was the introduction of Latin, the lingua franca of Europe, and even Greek, tools which would at last bring the literature and learning of the Greco-Roman world to Ireland.
In fact, a notable figure in Church History emerged at this time: Pelagius hailed from Ireland (what St. Jerome thought) or Britain (what others think); he was very well educated in both Latin and Greek and, toward the end of the 4th century, went off to Rome to ply the trade of theologian. Pelagius opposed St. Augustine and his grim views on the fall of mankind and predestination (cf. Calvin); instead Pelagius rejected the doctrine of original sin and preached the need for the individual freely to find his own way to God and salvation – quite a modern viewpoint, really. Unfortunately, Augustine won out and became a saint and a city in Florida, while Pelagius was branded a heretic. However, Pelagius’ writings continued to be cited in the Irish Church during the Middle Ages, which points to a spirit of independence and which raises questions about the Church’s orthodoxy during that period.
In the 5th century, Rome took an interest in Ireland and a Gaul named Palladius was sent by the pope to be the first bishop of the Irish in 431. Patricius, a Romano-Briton, the man known to us as St. Patrick, was sent the following year. It is tempting to think (and some do) that Palladius and Patricius were sent to combat the Pelagian heresy and shore up orthodoxy in the Irish church, an issue that will come up yet again. Though this version of events does detract somewhat from the glory of St. Patrick, it in no way suggests that he did not drive the snakes out of Ireland.
While the rest of Europe including Britain was plunged into the dark ages and subject to barbarian invasions, the relatively peaceful land of Ireland became a center of monasticism and learning.
By the year 800, however, Viking invaders began to arrive on Irish shores. One of their goals was to plunder monasteries, something they proved uncommonly skilled at; another more constructive goal was to set up settlements in Ireland. They founded towns that still bear Nordic names, such as Wexford and Waterford of crystal fame. These groups eventually merged with the native population and are known to historians as the Norse-Irish. In fact, some classic Irish surnames trace back to the Viking invasions, e.g. MacAuliffe (Son of Olaf) and MacManus (Son of Magnus).
The year 1066 is remembered for the Norman invasion of England under William of Normandy (Nick Brimble in The Conquerors TV series). The year 1171 is remembered for the Norman invasion of Ireland under Henry II (Peter O’Toole in both The Lion in Winter and Becket). The Normans declared Ireland to be under the rule of the English king, established a feudal system of fiefdoms, built castles, signed treaties, broke treaties, etc. Actually, Henry II already ruled over vast holdings; as Roi d’Angleterre, Comte d’Anjou and Duc de Normandie, he inherited England and large areas in France. And then thanks to his marriage to the most powerful woman in Europe, Eléanor d’Aquitaine (Katharine Hepburn in The Lion in Winter, Pamela Brown in Becket), his domains were expanded to include southwestern France. All this was called the Empire Angevin. So, land rich already, why did Henry need to launch this invasion? Mystère.
For one thing, Henry was instructed by Pope Adrian IV by means of a formal papal edict Laudabiliter to invade and govern Ireland; the goal was to reinforce papal authority over the too autonomous Irish Church (nothing new there). Adrian was the only Englishman ever to be Supreme Pontiff; his being English does suggest that his motives in this affair were “complex.” For another thing, the deposed King of the Irish Province of Leinster, Diarmait Mac Murchada, had come to England in 1166 to seek Henry’s help in winning back his realm and, surely, solidarity among kings was motivation enough! Perhaps though, it was simply to give Henry’s barons more land to lord over – remember it was his son, King John, who would be forced to sign the Magna Carta by angry barons at Runnymede in 1215. In any case, by the late 15th century the area under English control was the region around Dublin known as the English Pale. The term pale is derived from an old French word for stake and basically means a stockade; by extension it means any defended delimited area. The expression “beyond the pale” comes from the fact that it was dangerous for the English to venture beyond that area.
Interestingly, a side-effect of the Norman invasions of Ireland was the spread of the patronymic “Fitz,” which derives from the French “fils de” meaning “son of” (“Mac” in Gaelic). Thus we have the Irish names FitzGerald, FitzSimmons, FitzPatrick, etc. (The name FitzRoy, meaning “natural son of the king,” is not Irish as such but rather Anglo-Norman in origin.)
One more thing: in the 14th century, there was the brief and unsuccessful incursion into Ireland by Edward, the brother of Robert the Bruce (Angus Macfayden in Braveheart, Sandy Welch in The Bruce). This was a Scottish attempt to create a second front in their own struggle against the Anglo-Normans by attacking the English Pale.

 

Part II: From Tudors to Modern Times
The Tudor invasions of Ireland under Henry VIII and Elizabeth I broke through the English Pale and brought the entire island under English rule, but not necessarily under English control as rebellions constantly broke out. However, the process of extirpation of Gaelic culture, language and laws was begun and the system of plantations and settlements took more and more land away from the Irish population. This was now colonialism in the full modern sense of the term. But why was this so important to the British crown – mystère?
For one thing, the English gained control over the oak forests (once dear to the Druids) that covered the island; the deforestation of Ireland provided the British merchant fleet and the Royal Navy with the timber needed for Britannia to rule the waves. The conquest of Ireland was the first step in the creation of the British Empire, the settlements in Virginia and New England being next steps that followed quickly.
This is not a “feel good” story. Rebellion and brutal repression are a constant theme. Catholicism became nationalism for the Irish and became a wedge used by the British to isolate and subjugate the population. In the early 17th century, King James I introduced laws and regulations that flagrantly favored Protestants and penalized Catholics. Things only worsened with the infamously brutal military campaign waged in Ireland by Oliver Cromwell in mid-century.
Interestingly, during the wars in Ireland in the 17th century, Irish earls and soldiers retreated to France to serve in the French army and continue the struggle against the British. But that is how the great Bordeaux wine, Chateau Lynch-Bages, the fine cognac, Hennessy, and the stately Avenue MacMahon in Paris all got those Irish names – another mystery solved en passant.
With the overthrow of the last Catholic English King James II and the installation of William and Mary on the throne in 1689, there began the period of the Protestant Ascendancy, a ruling clique of the right kind of Protestant (no Presbyterians and, of course, no Jews) that was in control of Ireland for over two hundred years. The system lasted into the 20th century. We pass over in silence the malevolence that this regime and Robert Peel’s government in London showed toward the population of Ireland during the Great Famine of the 1840’s.
There was a long-standing literary tradition in Ireland going back to the pre-Christian era and the prose epic Táin Bó Cúailnge (The Tain), click HERE . As English took root, Irish prose and poetry followed, eventually leading to eminent writers both Protestant (Oscar Wilde, Bram Stoker) and Catholic (James Joyce). In fact, for writers like Joyce it was an imperative to show that Irish authors could bring the English language to new heights.
After WWI, Ireland was partitioned into the independent Irish Republic and the British province of Northern Ireland, part of the U.K. After the decades of “The Troubles,” the combination of the Good Friday accords of 1998 and common membership in the European Union brought about peace. The planned Brexit move could well jeopardize this delicate balance.
Though it has played a role in world history, Hibernia is not that large a land. To put things in perspective, the population was about 3 million in 1800 and it is just over 5 million today.
During the Protestant Ascendancy and until recently, millions of Irish emigrated to the United States and to places throughout the British Empire, including England and Scotland themselves. Their descendants and the Irish in Ireland have found a way of coping in an Anglo-Saxon world, as full-fledged citizens of the countries they live in. So in the diaspora, we have John Lennon and Paul McCartney, we have Georgia O’Keeffe and Margaret Sanger née Higgins, … . The Irish Republic itself has emerged as a very modern European state and a force in industry, culture and politics – separation of Church and State, hi-tech operations, musician/activist Paul David Hewson (aka Bono) and so on. Ulster can claim Nobel Prize winning poet Seamus Heaney and John Stewart Bell, the physicist and author of the revolutionary “Bell’s Theorem” for which he would most certainly have received a Nobel Prize had he not died unexpectedly in 1990.
But, as sociologists have noted, once a population is conquered and suppressed, they are blamed for their own history and a prejudice against them can linger on, however subtly. For example, in the English language today, almost all the commonly used terms that are Irish in origin have a negative connotation of mischief or worse: blarney, malarkey, shenanigans, paddy wagon, hooligan, limerick, donnybrook, leprechaun, shillelagh, phony, banshee. The only exceptions that come to mind are colleen and shamrock. However, sociologists have also noted that in the U.S. it is an advantage to have an Irish name when running for political office – thereby providing one solution to the mystery of what’s in a name.

America

Countries are sometimes named for tribes (France, England, Poland), sometimes for rivers (India, Niger, Congo, Zambia), sometimes for a crop (Malta, honey), sometimes for a city in Italy (Venezuela, Venice), sometimes for the name Marco Polo brought back from China (Japan), sometimes for a geographic feature (Montenegro), sometimes with a portmanteau word (Tanzania = Tanganyika + Zanzibar).
On the other hand, there are countries named for an actual historical personage, some twenty-six at last count, and many of these trace back to the European voyages of discovery.
Several island nations are thus named for Christian saints: São Tomé e Príncipe, St Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, the Dominican Republic.
Voyages out in the Indian Ocean and the Western Pacific Ocean led to other countries being named for Europeans. The island nations of Mauritius and the Seychelles are named for a Dutch political figure (Prince Maurice van Nassau) and a French Minister of Finance (Jean Moreau de Séchelles), respectively. The archipelago of the Philippines is named for King Philip II of Spain.
In the Americas, there are four mainland countries named as a result of those voyages and named for actual people: Bolivia (Simón Bolívar), Colombia (Christopher Columbus), El Salvador (Jesus the Savior) and, of course, the United States of America (Amerigo Vespucci).
Amerigo Vespucci – who dat? How did it come to pass that the two continents of the New World are named for a Florentine intellectual and not for Cristoforo Colombo, the hard-working, dead-reckoning, God-fearing mariner from Genoa? Mystère.
To start, Columbus believed that he had reached the Indies, islands off the east coast of Asia. In elementary school, we all learn that Columbus believed the world was round and that gave him the courage to strike out west in search of a route to the Indies. But Columbus wasn’t alone in this belief; it was widespread among mariners at the time and even went back to the ancient Greeks. The mathematician Eratosthenes of Alexandria, known as “the Father of Geography,” came up with an ingenious way of measuring the earth’s circumference and gave a remarkably good estimate. Later, in the middle ages, scholars in Baghdad improved on Eratosthenes’ result; that work was included in a treatise by the geographer Alfraganus that dates from 833 A.D. and this estimate is referenced in Imago Mundi a Latin text Columbus had access to. However, a kind of Murphy’s Law intervened; at some point in the chain of texts and translations, due presumably to a clash between the longer Arabic mile (7091 ft.) and the shorter Roman mile (4846 ft.), confusion arose. The self-taught Columbus was thus led to believe that the world was much smaller than it really was; in 1492 he was truly convinced that he had reached the outskirts of Asia. In fact, scholars argue that Columbus would never have undertaken his voyage west had he not thought the route to Asia was much shorter than it is. It is likely that, to the end, Columbus held firm that he had reached the Indies; in any case, he certainly didn’t say otherwise until it was too late.
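To see how much damage that mix-up could do, here is a back-of-the-envelope sketch using only the two figures just quoted (nothing here comes from Columbus’ own notebooks):

    # If a distance quoted in the longer Arabic mile is read as if it were in the
    # shorter Roman mile, every leg of the journey shrinks by the same factor.
    ARABIC_MILE_FT = 7091
    ROMAN_MILE_FT = 4846

    shrink_factor = ROMAN_MILE_FT / ARABIC_MILE_FT
    print(round(shrink_factor, 2))   # ~0.68 -- the globe seems roughly a third smaller,
                                     #          and Asia correspondingly closer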
Enter a young, well-connected Florentine. Amerigo Vespucci was working in the 1490’s in Spain for the Medici banking empire. After news of Columbus’ first voyage reached Europe, many navigators sailed west from Europe to report back to crowns and banks on the possibility of new riches. Among them was John Cabot (aka Giovanni Caboto), himself an agent of the Medici bank in England, who captained a voyage in 1497 that reached the Canadian mainland. Vespucci, for his part, participated in several of these voyages of discovery out of Portugal and Spain. During this time, he sent a letter to his onetime classmate Lorenzo di Pierfrancesco de’ Medici in Florence detailing some of his adventures. This letter was translated into Latin, the lingua franca of Europe, and given the title Mundus Novus. It was published in Florence in 1502 (or early 1503); the letter went viral and was translated and reprinted throughout Europe. It states plainly that Vespucci had seen a New World:
 “… in those southern parts I have found a continent more densely peopled and abounding in animals than our Europe or Asia or Africa …”
He describes encounters with indigenous peoples along the east coast of today’s South America and recounts his travels all the way down to Argentina. Vespucci’s proclamation is to be contrasted with Columbus’ assurance that he had reached the Indies themselves. A second text, known as Lettera al Soderini was published shortly after in Italian and it too was translated and read all over Europe. Scholars continue to debate, however, whether the published texts, especially this second one, were the actual letters of Vespucci or exaggerated accounts written up by others based on his letters.
Enter a young, brilliant German mapmaker, Martin Waldseemüller, who held a position at the cartography school founded by the Duke of Lorraine at Saint-Dié-des-Vosges in the Lorraine area of Eastern France. Waldseemüller headed up a team to produce a map of the world that would take the latest discoveries into account. Inspired by Vespucci’s letters, he and his Alsatian colleague Matthias Ringmann boldly named most of what is now South America in honor of Vespucci, labeling it America. To that end, they took the Latin form of Amerigo which is Americus and made it feminine to accord with the Latin feminine nouns Europa, Africa and Asia.
Click HERE for the map, published in 1507, that introduced America to the world. Just think though, the name Amerigo is not Latin in origin but is derived from the Gothic name Heinrich. Waldseemüller and Ringmann were both German speakers. So had Waldseemüller and Ringmann resorted to their native German rather than Latin, we would be living in the United States of Heinrich-Land. That was close, wasn’t it?
Waldseemüller was less bold in the maps he made a bit later in 1513, labeling the area he had called “America” simply as “Terra Incognita,” as he had likely been criticized for the bold stroke of 1507. He also added the information that the new discoveries were due to Columbus of Genoa on behalf of the monarchs of Spain. The myth of Queen Isabella pawning her jewels to finance Columbus serves to establish the primacy of the monarchs in underwriting the early voyages of discovery, when in fact it was largely the Medici and other banks that funded the explorations; in Columbus’ case it was the financier Luis de Santángel – a converso, by which is meant a Jew who converted to Christianity (this was the time of the Inquisition). Later voyages were financed by companies such as the Dutch East India Company and the Massachusetts Bay Colony.
Vespucci, for his part, went on to an important career in Spain where he was appointed to the position of Pilot Major of the Indies, in charge of voyages of discovery (piloto mayor de Indias).
Enter a young, brilliant Flemish mapmaker with a mathematical orientation. Gerardus Mercator’s world map of 1538 depicted the two new world continents as distinct from Asia and labeled them North America and South America. This stuck. Later in 1569, he introduced the Mercator Projection which was a major boon to navigation. The disadvantage of this projection is that land masses towards the poles look too big, e.g. Greenland. The great advantage is that it gets compass directions and angles right. In sailor-speak, a straight line on the projection is a “rhumb line” at sea. So if the flat map says sail directly North West, use your compass to find which direction is North West and you’ll be pointed the right way. This is an example of craft anticipating science. Mathematicians had to work hard to get a proper understanding of the geometric insights underlying the Mercator Projection; in math-speak, this kind of projection is called a “conformal mapping.” Conformal mappings have important applications still today in fields such as the General Theory of Relativity – something about “flattening out space-time.” The upshot is that Mercator’s authority helped establish North America and South America as the names of the newly discovered continents.
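For the mathematically curious, here is a minimal sketch of the projection in modern notation (an illustration of the idea, not Mercator’s own construction): longitude passes straight through to the horizontal coordinate, while latitude is stretched just enough that a course of constant compass bearing plots as a straight line.

    import math

    # Minimal Mercator projection on a unit sphere (angles given in degrees).
    def mercator(lat_deg, lon_deg):
        lat = math.radians(lat_deg)
        lon = math.radians(lon_deg)
        x = lon                                          # longitude is unchanged
        y = math.log(math.tan(math.pi / 4 + lat / 2))    # latitude is stretched
        return x, y

    print(mercator(0, 0))    # ~(0, 0): the equator is undistorted
    print(mercator(60, 0))   # y ~ 1.32: the stretching grows fast toward the poles,
                             #           which is why Greenland looks so big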

Continents

The world is divided into innumerable islands and seven continents. These seven have Latinate names which were created in various ways and which then were adopted and perpetuated by European mapmakers.

So, to start, how did these names of the three continents known to the ancient Western World come down to us: Europe, Africa and Asia? Are they autochthonous, pre-Hellenic goddesses, as Athena is to Athens? Mystère.

To begin with Europe itself, according to myth Europa was a Phoenician princess, the daughter of the King of Sidon. Disguised as a white bull, Zeus managed to whisk her off to Crete where he seduced her and then made her Queen of the island. Her name was first used as the name of the island and then came to designate the Greek speaking world and eventually the whole of Europe. She had three sons by Zeus. One was the Minos of the Labyrinth and Minotaur fame (the bull motif being important in the Minoan culture of Crete); one was Sarpedon whose descendant of the same name is a heroic warrior in the Iliad who is slain by Patroclus; the third was Rhadamanthus who became a judge of the dead, in charge (according to Vergil) of punishing the unworthy.

The origin of the name Africa is subject to scholarly debate. The simplest theory is that it came from the name of a North African people known to the Carthaginians, themselves colonists from Phoenicia; from Carthage the name made its way into the Greco-Roman world.

The name Asia comes from Herodotus, the early Greek traveler and historian who in turn took it from the Hittite name Assuwa which simply designated the east bank of the Aegean Sea; in Herodotus’ usage, it meant the land to the east of Greece and Egypt, notably Persia. It eventually came to mean the eastern part of the Eurasian land mass.

By the way, autochthonous was the winning word at the 2004 National Spelling Bee.

The four remaining continents were not known to the ancients but were charted and named as a result of voyages of discovery and mapmaking from the 1490’s to the 1890’s, four hundred long years.

First, how is it that North America and South America are not named for Christopher Columbus? Truly a mystère.

The continents of North and South America take their names from the feminine form America of the Latin version Americus of the first name of Amerigo Vespucci. Vespucci famously wrote “I have found a continent” in a letter that was published in 1502 as a Latin tract entitled Mundus Novus. The appellation for South America was used as early as 1507 by the mapmaker Martin Waldseemüller and was later employed by cartographers for both continents, notably by Gerardus Mercator (of Mercator Projection fame).

This leads us to the question of how that next magnificent continent unknown to the European Old World got its name, viz. Australia. Was this the name the indigenous people gave to the land? Mystère.

In the ancient world, there was a widespread belief that there must be masses of land south of the equator to balance all that land north of the equator. If you think of the world as flat, this makes sense – something has to keep it from tipping over. So in some sense these lands were “known” to the Greeks, Romans and later Europeans. In Latin, borealis means northern and we have the aurora borealis (aka the Northern Lights); in Latin, australis means southern and in the Southern Hemisphere, we have the aurora australis (aka the Southern Lights). This belief in undiscovered southern lands persisted into the modern era and was promulgated by eminent mapmakers such as Mercator: in his famous globes and projections, the Southern Pacific contains a land mass labeled Terra Australis Incognita. Here is a link to a sample map of the era with a huge land mass in the south.

That there was, in fact, “new” land down under in the southern part of the Pacific was known to Western mariners and explorers from the 1500’s on. And, of course, the aboriginal Australians had been there for some 50,000 years making their culture the longest continuously lived culture in the world, by a lot. Portuguese explorers most likely reached Australia but it was on the Spanish side of the Line of Demarcation, that imaginary line drawn by the Pope in 1493 (and revised in 1494) that divided the world between the two pioneering imperialist powers, Spain and Portugal. In the early 1600’s, the Dutch explorer Willem Janszoon encountered indigenous people in the Northwest; later in the 1640’s, working for the Dutch East India Company, Abel Tasman probed the western and southern coasts of Australia, named the region New Holland, discovered Tasmania, then missed the left turn at Hobart and sailed on to New Zealand. The Dutch didn’t follow up on the charting of New Holland, presumably because the area lacked readily obtainable riches such as gold or spices and did not lend itself to European style agriculture and settlement.

Enter Captain Cook and his famous voyages to the South Seas with reports of albatrosses and other creatures, adventures that, in particular, inspired The Rime of the Ancient Mariner – although the grim myth of the albatross is of Coleridge’s own making. In 1770, Cook explored the southeast coast of Australia, landed at Botany Bay just south of today’s Sydney Harbour, and named the area New South Wales. The colonization began in 1788.

It was in 1803 that the British navigator Matthew Flinders (who had sailed with Captain Bligh but luckily on the Providence, not on the Bounty) became the first European to circumnavigate the island continent and to establish that New Holland and New South Wales were both part of a single island land mass. He used the name Terra Australis on his charts and in his book, A Voyage to Terra Australis. And this was later simplified to Australia; as Flinders put it:

“Had I permitted myself any innovation upon the original term, it would have been to convert it into AUSTRALIA; as being more agreeable to the ear, and an assimilation to the names of the other great portions of the earth.”

Just think, if Flinders had used his native English instead of Latin, Down Under would be called South-Land today!

And just think, if Flinders had sailed on the Bounty, Marlon Brando and Clark Gable would have played Mr. Flinders instead of Mr. Christian in the movie versions of Mutiny on the Bounty!

Last but not least (in area it is bigger than either Europe or Australia), there is the new continent of Antarctica. Did some mapmaker just coin the name or does it have a history? Mystère.

The term Antarctica, meaning “the opposite of the Arctic,” does have a history going back at least to Aristotle. In maps and in literature, it was often used to designate a given “land to the South.” It was also used by Roman and medieval writers and mapmakers; even Chaucer wrote about the “antarctic pol” in a technical treatise he authored. Throughout the 19th century, expeditions, sealers and whalers from Russia, the United States and Great Britain probed ever further south; the felicitously named Mercator Cooper, who sailed out of Sag Harbor NY, is credited as the first to reach the Antarctic land mass in 1854. The first map known to use Antarctica as the name of the continent itself dates to 1890 and was published by the Scottish mapmaker John George Bartholomew. Bartholomew held an appointment from the crown and so used the title “Cartographer to the King”; this doubtless emboldened him to invoke cartographer’s privilege and name a continent, harking back to Waldseeműller and Mercator.

Brooklyn

The Brooklyn NY subway map shows a predilection for heroes of the American War of Independence: 13 subway stops in all. This total is to be contrasted with Manhattan’s paltry 2, Boston’s measly 2 and Philadelphia’s disgraceful 0. This expression of patriotism in Brooklyn has its roots in the way streets and avenues were named back before 1898 when the Brooklyn Eagle had writers like Walt Whitman and when Brooklyn was still a proud city with a big league baseball team of its own and not a mere “outer borough.”
In Brooklyn, two signers of the Declaration of Independence are so honored: Benjamin Franklin with 3 stops and Charles Carroll with 1 stop; in addition, three other founding fathers are also so honored: John Jay with 1 stop, George Washington with 2 stops, and Alexander Hamilton with 3 stops (as Fort Hamilton).
But then there are the idealistic European aristocrats from three different countries who fought valiantly along with the Americans – a Frenchman (Lafayette), a German (DeKalb) and a Pole (Kosciuszko) – and who are also honored with eponymous subway stops; in fact, DeKalb has 2 stops in his own name and Kosciuszko also has a bridge named for him.
The Marquis de Lafayette served as Washington’s aide-de-camp and later, as a field officer, he played a key role in blocking forces led by Cornwallis until American and French forces could position themselves for the war-ending siege at Yorktown, VA. In 1917, upon arriving in France, Gen. Pershing, the head of the American Expeditionary Force, is famously credited with the words “Lafayette, we are here” (actually spoken at Lafayette’s tomb by his aide, Col. Charles Stanton).
But who are the other two heroes with subway stations in Brooklyn? Mystère.
The Baron Jean DeKalb  hailed from Bavaria and had a long career in the Bavarian Regiment of the French Army; for an image click HERE .  Before the French Revolution and the introduction of the citizens’ army, the kings and princes of Europe relied on mercenaries (e.g. the Hessians at Trenton) and foreign regiments to supplement their standing armies; so DeKalb’s career path was not all that unusual for the day. In 1763, he was ennobled with the rank of Baron for his valor on the field of battle, then married well in France and installed his family in a chateau that still stands not far from Paris at Milon-La-Chapelle.
DeKalb first came to the colonies in 1768 on a spying mission for Louis XV’s government and then came back again in 1777 to join Washington’s army with the rank of Major General. He served at Valley Forge and in 1780 he led his division south to the Carolinas to join the force under General Horatio Gates, a hero of the Battle of Saratoga. Gates faced Cornwallis at the Battle of Camden in South Carolina on Aug 16, 1780 and suffered a disastrous defeat. DeKalb died a few days later from multiple wounds received during the battle; his epitaph at the Bethesda Presbyterian Church graveyard in Camden reads “Here lie the remains of Baron DeKalb – A German by birth, but in principle, a citizen of the world.”
Tadeusz Kosciuszko was a Polish nobleman, born at a time when Poland was being partitioned by encroaching foreign powers; for an image, click HERE .  He was a brilliant military engineer, a hero of the Battle of Saratoga who was responsible for some key decisions that led to victory. Subsequently, he was entrusted with the task of fortifying West Point. It was his plans for the fortifications that Benedict Arnold (yet another hero of the Battle of Saratoga) tried to sell to the British.
But Kosciuszko and Lafayette both survived the war, went back to Europe and lived well into the 19th century. Did they retire to their estates or did they carry the torch of liberty back with them? Mystère.
In truth, both men did play significant roles in revolutions to come – but revolutions that did not quite have the “happy ending” of the American Revolution. Both men were jailed for their activism but both men kept the faith to the end.
Upon returning to Poland, Kosciuszko became involved in the struggle with Russia to keep part of Poland independent. He led an uprising there in 1794 against the occupiers, only to be defeated and imprisoned by the army of Catherine the Great. At that point in time, Poland became completely partitioned among the Prussians, Austrians and Russians and would not re-emerge as an independent country until 1919, the 13th of Woodrow Wilson’s 14 points.
Kosciuszko was freed by Catherine’s son and successor, the czar Paul I. He then came back to the United States and renewed a friendship with Thomas Jefferson. During the American Revolution, Kosciuszko had taken a stand for the abolition of slavery and back in Poland, he called for the liberation of the serfs. In America, in 1798 he put together a will which placed Jefferson in charge of the American estate he had from Congress as a war hero; Jefferson was to use the funds from the estate to buy freedom for slaves and to provide for their education. Kosciuszko eventually went back to Europe and lived in Switzerland until his death in 1817. Before his death, he wrote to Jefferson urging him to carry out the terms of his will. But, Jefferson delegated others to take this on and the will was never executed as planned though the struggle over it reached the U.S. Supreme Court three times. As with so many things Jeffersonian, there is a debate about his role in this matter: on the one hand, the historian Annette Gordon-Reed called the will “a litigation disaster waiting to happen”; on the other hand, the biographer Christopher Hitchens wrote that Jefferson “coldly declined to carry out his friend’s dying wish.”
After the American War of Independence, Lafayette went back to France and soon became involved in the events that led to the French Revolution of 1789. After the fall of the Bastille on July 14, he was put in command of the Revolution’s National Guard, its security force. Lafayette was a co-author (aided, in particular, by Jefferson) of the seminal Declaration of the Rights of Man and of the Citizen and it was he who presented it to the National Assembly in August of 1789. But in short order he fell afoul of the radical revolutionaries and fled France in 1792, only to be captured and jailed by the Austrians. He was later liberated at Napoleon’s behest, came back to France, but would not participate in Napoleon’s imperial government. After the latter’s fall and the restoration of the Bourbon monarchy (first under Louis XVIII, then Charles X), he served as a liberal member of the Chamber of Deputies, the new parliament. Charles X became increasingly autocratic and when the king moved to dissolve the Chamber of Deputies in July 1830, the Parisians cried “aux barricades” and launched the July Revolution. Lafayette took a leadership role and was once again named head of the National Guard.
However, Lafayette used his influence not to create a new republic but to bring to the throne a liberal monarch in the person of Louis-Philippe, a man who had spent time in the USA and who Lafayette believed shared his democratic views. Lafayette and his fellow citizens were soon disillusioned. Things came to a head in June 1832; after Lafayette’s oration at the funeral of an opponent of Louis-Philippe, angry Parisians once again erected barricades. Despite Lafayette’s call for calm, what is known as the June Revolution led to armed and bloody confrontation with the forces of the king.  It is this June Revolution that is the background for the Broadway show Les Miz. The musical is based on the novel Les Misérables by Victor Hugo who was an actual witness to the events of this revolution.
Lafayette was outraged by the bloody suppression of the June Revolution of 1832 and other acts of brutality by the state; at his death in 1834, Lafayette was still struggling for the rights of man.
By the way, Louis-Philippe, the last King of France, was finally dethroned by the Revolution of 1848.

Inventors: TV and FM

Who invented the light bulb? Answer: Edison. Who invented the cotton gin? Answer: Eli Whitney. Who invented radio? Answer: Marconi. Who invented television? Mystère.
By television, we mean the black-and-white “boob tube” of the late 1940s and 1950s – the medium Marshall McLuhan wrote about, not the color-rich flat-screen marvel of today. In those early days, one struggled with test patterns and shaky images, oriented antennas on rooftops, and tuned the vertical and horizontal hold with surgical precision; but one never had the problems with the sound that one had with AM radio, which often fell victim to static and interference. The reason the TV sound was so good is that it used FM radio technology. So who invented FM radio? Another mystère.
And why were we all listening to Superman and the Hit Parade on AM radio if FM was the better medium – that too is a mystère.
It is strange that something as pervasive as TV or FM doesn’t have a heroic story of some determined young engineer struggling against all odds to go where only he or she thinks they can go – an origin myth. Even relatively recent Silicon Valley innovations such as the Hewlett Packard oscillator and the Apple personal computer have that “started in a garage” story.  It turns out that for TV and FM, each does have its hero inventor story, but these stories are complicated by patent battles, international competition, corporate intrigue and in the end personal tragedy – no happy endings, which is probably why there has never been a Hollywood bio-pic for either hero inventor.
In the 1920s and 1930s, television systems were being developed in the US, the UK, Germany, and elsewhere. In the UK, the Scottish inventor John Logie Baird developed an electromechanical system. In Germany, Manfred von Ardenne pioneered a system based on the cathode-ray tube (the picture tube, a key element in electronic television), and the 1936 Berlin Olympics were broadcast on TV to public viewing rooms in Germany; however, this system did not have a modern TV camera but used a more primitive scanning technique.
Meanwhile back in the USA, Philo Farnsworth, a young Mormon engineer who was born in a log cabin in Beaver, Utah and who grew up on a ranch in Rigby, Idaho, was tackling the subatomic physics underlying television. From his lab in San Francisco, Farnsworth filed patents as early as 1927. He gave the first public demonstration of an all-electronic TV system with live camera at the Franklin Institute in Philadelphia on August 25, 1934; this is our TV of 1950. So television does have a classical origin story with Philo Farnsworth the hero of the piece. However, from the time of his earliest patents, Farnsworth encountered fierce opposition from the Radio Corporation of America (RCA). This company, originally known as American Marconi, was founded in 1899 as a subsidiary of British Marconi; after World War I, at the behest of the military, company ownership was transferred to American firms and the company was rechristened. By this time, RCA had launched NBC, the first AM radio network, and was keen to control the development of radio and the emerging technology of television. Vladimir Zworykin was a Russian émigré who had studied in St. Petersburg with Prof. Boris Rosing, an early television visionary. In 1923 Zworykin filed a patent while working at Westinghouse in Pittsburgh and this patent had a certain overlap with Farnsworth’s work; Zworykin then moved to RCA and RCA used this patent, its financial clout and its powerful legal teams to bludgeon Farnsworth for years, tying him up in endless court battles.
Finally though, RCA was forced to recognize Farnsworth’s rights and peace of a sort was made when RCA licensed Farnsworth’s technology and later demonstrated television at the 1939 World’s Fair to great acclaim. However, World War II intervened and commercial television was put on a back burner until after the war. But by then, Farnsworth’s patents were about to expire and RCA simply waited them out. Although he continued working on challenging projects, in the end Farnsworth was depressed and drinking heavily and died in debt in 1971.
The story of FM radio and its inventor, Edwin Howard Armstrong, has a similar arc to it. Armstrong was born in 1890 in New York City and grew up in suburban Yonkers. He graduated with an engineering degree from Columbia University and eventually had his own lab there – but with an arrangement that left him ownership of his patents. Armstrong did important work on vacuum tube technology that eliminated the need to wear headphones to listen to a radio – this work also led to ferocious patent battles with Lee De Forest, the inventor of the original vacuum tube. Continuing to work to improve radio, Armstrong was granted patents for FM in 1933.
Working with RCA, Armstrong demonstrated the viability of this new technology by broadcasting from the Empire State Building. However, RCA had its existing network of AM stations, and this new technology was not compatible with RCA’s AM radios and would require that listeners have a new kind of receiver. So, although Armstrong did set up an FM broadcasting operation, RCA let commercial FM radio just sit there; then WWII came and everything new that did not contribute directly to the war effort was delayed. After the war, legal skirmishes with RCA continued until Armstrong’s patents expired in 1950 – this has an eerily familiar ring to it. In fact, it gets worse: RCA convinced the FCC to re-standardize the FM frequency allocations and this had the planned effect of disrupting the successful FM network that Armstrong had established on the old band, further delaying the spread of the FM medium. Tragically, confronted by growing financial problems and exhausted by legal battles, Armstrong committed suicide in 1954.
RCA went on to play a role in color television and to enjoy a reign as a leading television manufacturer. But it did have its comeuppance and it self-destructed in the 1970’s – thrashing around trying to find new ways to increase revenue and even cooking the books. The company as such was ultimately disbanded in 1986, though the brand name is still used by Sony, Voxx and others.
Even with FM receivers becoming more widespread, something which continued to keep listeners tuned to AM radio throughout the 1950s was the fact that AM and FM programming were not decoupled: wealthier stations would buy FM licenses and simply simulcast the same programming on AM and FM. There were station identifications such as this one
“This is Bob Hope; you are tuned to the call letters of the stars – WMGM, AM and FM in New York”
and WMGM would broadcast the same shows on both media. In the 1960’s the FCC limited this practice and it was the newly liberated FM that introduced the nation to the Motown sound and to the marvelous folk-inspired music of Bob Dylan, Joni Mitchell, Joan Baez, Leonard Cohen and so many others.
===========
COMMENT
What about sound in cinema? It developed at the same time and used some of the radio technology. What ’30s-era film did not have an acknowledgement of RCA? Think of big clumsy cameras that only move forward and back and could pivot but not move sideways, with a sound band on the film that needed to be added, and the early cameras made so much noise they had to be enclosed (back when the sound was direct), and imagine trying to track Fred and Ginger. One famous three-minute shot without splicing needed 47 takes, and Ginger’s feet were bleeding in her shoes at the end. Remember the RKO globe topped by a radio tower?
Jim Talin

Jean-Louis

 

Why is it that Frenchmen all seem to have these double names: Jean-Louis, Jean-Jacques, Jean-Claude, Jean-Francois, Pierre-Marie, and more. Likewise on the distaff side, we have Marie-Claude, Marie-Therese, Marie-Antoinette, Marie-Paul, Anne-Marie, and more. Why all the hyphenated names? Why the dually gendered names? Aren’t there enough simple names to go around? Mystère.

Well, it turns out that until very recently there simply were not that many names to go around to meet the needs of 50 million Frenchmen (to use Cole Porter’s statistics).

This story begins at the time of the French Revolution. The Revolution brought forth many things – the metric system, the military draft and the citizen army, the Marseillaise, … . There was also the end of slavery in the French colonies (until undone under Napoleon) and there was a new calendar. This calendar set the Year I to begin during our 1792; it divided the year into twelve months, corresponding more or less to the signs of the Zodiac. These replaced the standard Roman months (6 pagan gods and rituals, 2 dictators and 4 numbers) with seasonal names; thus Aries became Germinal (when seeds sprout) and Scorpio became Brumaire (when the fog rolls in). The system lasted until the 10th of Nivôse (Capricorn, when snow falls) of the Year XIV, after which the Gregorian calendar was restored on January 1, 1806.

In France, the day 18 Brumaire has a sinister ring (much like the Ides of March); it is, in fact, a synonym for coup d’état because this is the day on which Napoleon (in the Year VIII) staged the military coup that ended the Revolution and led to the ill-fated empire. It is in his essay The Eighteenth Brumaire of Louis Bonaparte that Karl Marx quotably says history repeats itself, “the first time as tragedy, the second as farce.”

During the period when Napoleon was First Consul but not yet Emperor, the government of France was the Consulat. This regime succeeded the Directoire, which itself followed the Convention (and la Terreur), which replaced the Assemblée Nationale of the short-lived constitutional monarchy, which had ended the absolute monarchy of the Ancien Régime. The Consulat, apparently to put a stop to the new practice of giving children first names inspired by the Revolution itself, enacted the Law of the 11th of Germinal of the Year XI: henceforth only a name from a religious or other official calendar or a name from ancient history could be used. The registrar (officier de l’état civil) could refuse any name he or she considered unacceptable.

So for Catholic France, this meant that only the names of saints who had a feast day on the liturgical calendar were acceptable. All this was inconsistent with the aggressive anti-clericalism of the Revolution but very consistent with the traditionally top-heavy structure of French governance.

In 1800, the population of France was about 30 million and growing; in contrast, the U.S. population was only about 5 million, though growing at a faster rate. Even with a smaller population than France until about 1880, throughout the 19th century Protestant Americans used names from the Hebrew Bible (Abraham, Ahab, Rebecca, Rachel) to supplement the supply of Anglo-Norman names (William, Alice, …) and Anglo-Saxon names (Edward, Maud, …). This elegant solution was not available to the French.

While the Revolution swept across France, there were areas of resistance to it, among them Brittany. This region was always independent in spirit – what with its own Celtic language and music and very traditional Catholicism. The royalist Chouans fought a bloody civil war against the Revolution, protesting military conscription, the secularization of the clergy and, doubtless, the new calendar. All this is recounted in Balzac’s novel Les Chouans as well as in historical romances, movies and TV shows. In the present day, in the world of fashion there is the stylish chapeau chouan, inspired by the impressive big-brimmed hats the Chouans wore. By the way, Balzac, a Taurus, was born on the 1st of Prairial in the Year VII at the very end of the Eighteenth Century.

Despite coups d’état, more revolutions, the disastrous defeat in the Franco-Prussian War, the Paris Commune, the Pyrrhic victory of WWI and the ignominy of WWII, that law going back to the Revolution stayed in effect.

So even after WWII, for a child to “exist” in the eyes of the French government and to benefit from schooling, the national health service, etc., his or her name still had to be acceptable to that local official according to the Law of the 11th of Germinal of the Year XI. However, trouble was brewing; the Goareng family in Brittany had twelve children and gave them all old Breton names; but only six of these were acceptable to the French registrars; so in the eyes of the government, six of these twelve children did not exist. In the tradition of the Chouans, the family took on the centralized French state and fought back through the French courts and United Nations courts and finally won satisfaction at the European Court in the Hague in 1964. Following that, the French government agreed in 1966 to extend the list of acceptable first names to include traditional regional names (Breton names, Basque names, Alsatian names, … ).

This satisfied the Goareng family but the struggle had so weakened the position of the French government that the law was modified again in the year CLXXXIX (1981); finally the dam broke in the year CCI (1993) when this whole business was ended. In the time since, many once unacceptable names have been registered. By way of example, Nolan is now a popular boy’s name and Jade is a top girl’s name.
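For readers who want to check the Republican-calendar arithmetic, here is a small illustrative sketch in Python. The function names are my own, and the sketch uses the simplification that Year I began on 22 September 1792 and that every later year also begins on 22 September, ignoring the calendar’s actual equinox-based New Year and its leap days; under that assumption, a date early in 1981 falls in the year CLXXXIX and 8 January 1993 falls in the year CCI.

from datetime import date

def republican_year(d: date) -> int:
    # Approximate French Republican year for a Gregorian date.
    # Simplification: Year I is taken to start on 22 Sept 1792 and every
    # later year on 22 Sept as well; the real calendar pegged New Year to
    # the autumnal equinox, so results near late September may be off a bit.
    year = d.year - 1792 + 1
    if d < date(d.year, 9, 22):
        year -= 1  # dates before 22 Sept belong to the previous Republican year
    return year

def to_roman(n: int) -> str:
    # Convert a positive integer to Roman numerals.
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
            (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
            (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for v, s in vals:
        while n >= v:
            out.append(s)
            n -= v
    return "".join(out)

print(to_roman(republican_year(date(1993, 1, 8))))   # CCI
print(to_roman(republican_year(date(1981, 6, 1))))   # CLXXXIX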

 

 

Joshua and Jesus

In the Hebrew Bible Joshua succeeds Moses as the leader of the Israelites and leads the invasion of the Land of Canaan. Most spectacularly, in the Book of Joshua, with the aid of trumpets and the Lord’s angels, he conquers the walled city of Jericho – an event recounted in the wonderful spiritual “Joshua Fit the Battle of Jericho.”
In Greek speaking Alexandria, there was a large Jewish community and around 250 B.C. a translation into Greek of the first books of the Hebrew Bible was made by Jewish scholars. This translation is called the Septuagint (meaning 70) because seventy different scholars translated the text independently; according to the Babylonian Talmud and other sources, when their translations were compared, they were all identical down to the last iota.
In Hebrew the name Joshua is יֵשׁוּעַ (Yeshu’a); in the Greek of the Septuagint, it becomes Ιησους, roughly pronounced as “ee-ay-soos.” In Hebrew the word for Messiah is מָשִׁיחַ (Mashiach); in the Greek of the Septuagint, it becomes Χριστός.
The gospels of the New Testament were written in Greek in the latter part of the first century A.D. In the Greek text, the name of Jesus is Ιησους; but this is the same as the Septuagint’s name for Joshua. Is Jesus’ real name “Joshua”? Should the Greek Χριστός be rendered as “Messiah” in English? Mystères.
The short answer is Yes; Jesus and Joshua have the same Hebrew name. Had the New Testament Gospels been written in Hebrew or Aramaic, Jesus Christ would have been called Yeshua Hamashiach by his disciples or, in English, Joshua the Messiah.
That Jesus and Joshua shared the same Hebrew name is not new news. In fact, Moishe Rosen, the founder of Jews for Jesus, authored Y’ESHUA, the Jewish Way to Say Jesus, published by the Moody Bible Institute in 1982. Also in the 1980s, there appeared the “Joshua” novels of Joseph F. Girzone, which are about a Christ-like figure, a carpenter, who touches people’s lives with his example, his teachings and his miracles; the author chose the name Joshua exactly because it is an alternative reading of the name Jesus in the Gospels.
But, it’s complicated. In moving names from one language to another, typically a letter is replaced by its closest relative in the target language and adjustments are made for the sake of grammar or sound; that said, the two words might not sound at all alike or look at all alike. Because the Greek name Ιησους is sounded out as (something like) ee-ay-soos (close to the Spanish pronunciation), it became Jesus in Latin. One more thing: if you capitalize the first three letters of Ιησους, you have I H S for iota, eta, sigma; these are the letters that form the Christogram that traditionally adorns vestments and altar cloths in Roman Catholic churches. Another popular Christogram is Xmas, where the Χ is the Greek chi which is a symbol for Christ (Χριστός). These Christograms and the Kyrie are last links back to the early Greek Christian church.
The good news of the gospels reached Rome from the Greek-speaking eastern part of the Mediterranean. The first Christians in the Latin-speaking part of the Roman Empire did not translate “Ιησους Χριστός” from the Greek, much less from a Hebrew or Aramaic original source; instead they imported the Greek name all of a piece. This had the effect of making the name “Jesus Christ” special. Otherwise, the name would have been shared with others: the hero of the Book of Joshua bears the very same name, and others in the Hebrew Bible are called the Messiah – even the king of Persia, Cyrus the Great, is called Messiah in Isaiah 45:1.
When one is raised in a Christian religion in an English speaking country, the name “Jesus Christ” is magical and absolutely unique, a name that no one else has had nor will ever have. Would saying “Joshua the Messiah” have made the Son of Man simply too human, simply one Joshua among many? And would that have interfered with our understanding of the Mystery of the Incarnation and of the Mystery of the Holy Trinity? Or would it have enhanced our understanding?
So the distinction between the names of Jesus and Joshua began in early Western Christianity. And the distinction has persisted. In St. Jerome’s translation of the Bible into Latin, the Old Testament name for Joshua is rendered as “Josue” and, of course, the New Testament name of Jesus is “Jesus.” St. Jerome had access both to the Hebrew text and to the Septuagint, so he was likely aware of this “inconsistency.” His translation, written at the end of the 4th century, was the standard one in Western Christendom until the Reformation. Similarly, Jerome did not render the Greek Χριστός as “the Messiah” or even “the anointed one.” Indeed, “Christus” had been the standard rendering of Χριστός in Latin since the very beginning; this is also attested to by the way the pagan Roman writer Tacitus referred to the Christians and Christus at the beginning of the 2nd Century in his Annals: he recounts how Nero blamed them for setting the fire that burned Rome in 64 A.D. and then ordered a persecution. Not at all fair of Nero, but it is a testament to the very rapid spread of Christianity across the Roman Empire.
There was a point in time when the name Jesus could have been replaced with the name Joshua in English (or vice-versa for that matter but no one has yet suggested replacing Joshua with Jesus in English translations of the Old Testament). The Anglican authors of the King James Bible had access to the Hebrew Bible, to the Septuagint, to Saint Jerome’s work, and to the Greek New Testament; they, like St. Jerome, kept the name Jesus because, presumably, it was what people knew and loved. The same can be said of Χριστός which they also did not translate from the Greek, but rather they followed St. Jerome’s example.
But the plot thickens. Is Jesus the only one to have the name Ιησους in the New Testament? After all, there are multiple people named Mary or John or James. Again, mystère.
But here the answer depends on which edition of the gospels you are reading. In the latest edition of the New Revised Standard Version (Oxford University Press, 2010), Matthew 27:16–17 is rendered as follows:
     At that time they had a notorious prisoner whose name was Jesus Barabbas. So after they had gathered, Pilate asked them, “Whom do you want me to release to you: Jesus Barabbas, or Jesus who is called the Messiah?”
Note that Barabbas has the first name Jesus in this text. In St. Jerome’s translation and in the original King James from 1611 A.D., “Jesus Barabbas” is simply “Barabbas.” In other words, Barabbas does not have a first name in these classic translations and the name Jesus is reserved exclusively for Jesus the Messiah. This practice of dropping Barabbas’ first name goes back, at least, to the third century: the church father Origen (d. 254 A.D.) declared that Jesus must have been inserted in front of the name Barabbas by a heretic. Origen was certainly not alone in this; until recent scholarship went back to original texts on this point, only Jesus the Messiah was named Jesus in the New Testament.
Of course, the crowd then cries “Give us Barabbas” – the most inflammatory passage in the gospels for Christian-Jewish relations. This account of events appears in all four gospels but Saint Matthew adds “His blood be upon us and our children.” To some, all this feels staged; could it have been included to deflect guilt from the Romans, placing the blame for the Crucifixion on the Jews?
After all, Christianity at the time of the gospels was becoming dominated by Gentiles and Hellenized Jews of the Diaspora (like St. Paul himself); the empire they lived in was the Roman Empire, and Jewish rebellions in the Holy Land were repeatedly being put down by those same Romans; crucifixion was employed by the Romans as punishment for crimes against the state and not to settle squabbles among coreligionists of a conquered population – so if the ministries of John the Baptist and later Jesus did have a social-political dimension that was problematic for the Romans, Christians could side-step this by re-positioning the Crucifixion as an intra-Jewish affair.
Or was it to prove to Christians that the Jews had broken their covenant with the Lord and that this covenant now belonged to the Christians?
Addendum:
This second interpretation is the standard Christian one and historically was the basis of the Catholic Church’s position that those who practiced Judaism would not enter Paradise; this position was only revised after WWII and the Holocaust. Pope Benedict XVI added that the Jewish people are not responsible for the death of Christ. Other Christian Churches have also updated their thinking. These are important steps. The roots of anti-Semitism in the Christian world are complex and do not come down simply to these Gospel passages; official Nazi anti-Semitism itself was not Christian in origin but came with a package of pagan race-based beliefs. Anti-Semitism in Western Europe today is Muslim in origin. Jews in the U.S. themselves say that America, which has never had an established religion, has been an exceptionally good place for them. The terrible outburst of anti-Semitism that recently killed 11 people in a synagogue in Pittsburgh tells us how far we all still have to go despite the post-War efforts by Christian churches; the rot is deeper.

California

California is called the Golden State and lives up to its billing. It is truly a magical place – the coastline, the rivers and bays, the sierras, the deserts, the redwoods, the gold rush, the marvelous climate and on and on. Its name could be Spanish, but it is very unlike the names of other states that were once part of New Spain and have Spanish names: Colorado, Florida, Nevada and Montana (montaña) are all recognizable Spanish words, while California is quite different. Maybe it is Latin like Hibernia and Britannia (Ireland and Britain). Mystère.

What is marvelous is that California is named for a mythical island of Amazons created in a fantasy adventure novel The Adventures of Esplandian, written by Garci Rodriguez de Montalvo and published in Seville in 1510. How the author Montalvo came up with the name for this island is a matter of serious scholarly debate. In any case, early Spanish explorers thought California (Baja and Alta) to be an island and named it for the island of the novel. The island in the novel is as golden as California itself:

… there exists an island called California very close to a side of the Earthly Paradise; and it was populated by black women, without any man existing there, because they lived in the way of the Amazons. … Their weapons were golden and so were the harnesses of the wild beasts that they were accustomed to taming so that they could be ridden, because there was no other metal in the island than gold.

The beasts they tamed were griffins (half-lion, half-eagle) and the griffins were unleashed on the enemy in battle. The novel pits the Christian hero against Muslim Turkish foes. The queen of California is the beautiful Calafia (aka Califia), who leads her Amazons into battle on the side of the Turks. Needless to say, she is defeated by Esplandian, but then she is whisked off to Constantinople (which somehow is still in Christian hands), where she converts to Christianity and marries one of the Christian knights. For the period, this was a happy ending.

Cervantes makes The Adventures of Esplandian one of the books on chivalry that contributed to Don Quixote’s madness and, in fact, it is the very first of Don Quixote’s books chosen to be burned by the priest and the barber who are trying to “cure” him.

It is fitting that the land of California with its dream factories of Hollywood and Silicon Valley should be named for a magical island of fantasy. Field research confirms that Californians are blissfully unaware of all this; there are exceptions, though.  There is a mural from 1926 of Queen Califia and her Amazon warriors at the Mark Hopkins Hotel in San Francisco

that is worth visiting. Much more recently, the late French-American artist Niki de Saint Phalle created a sculpture garden in Escondido in honor of California’s queen.

 

https://upload.wikimedia.org/wikipedia/commons/7/70/TheDons_Detail.jpg

Plymouth and Cape Cod

Massachusetts is notorious for its peculiar pronunciations, especially of place names. For example, there are Gloucester (GLOSS-TAH) and Worcester (WOO-STAH). And there are confounding inconsistencies:  on Cape Cod, there are opposing pronunciation rules for Chatham (CHATUM) and Eastham (EAST-HAM), two towns within minutes of each other; closer to Boston, the president John Quincy (QUIN-SEE) Adams was born in Quincy (QUINZ-EE). No threat of “a foolish consistency” here.

However, we all know (thanks to the Thanksgiving holiday) not to pronounce Plymouth as PLY-MOUTH but rather to say PLIM-UTH (/plɪməθ/ in the dictionary). But, a short hop from Plymouth, just over the Sagamore Bridge, there are the Cape Cod towns of Yarmouth and Falmouth. How should we pronounce these town names? Does the Plymouth pattern apply? One would think so, since all the towns are close to one another, but then there is the example of Chatham and Eastham. So should these Cape Cod towns be pronounced YAH-MUTH and FAL-MUTH – or YAR-MOUTH and (perish the thought) FOUL-MOUTH?

In point of fact, the Plymouth rule does apply to both names: MUTH wins out and MOUTH loses again. But how can it be that these three towns have such special names, all ending in “mouth”? It is hardly likely that this happened by pure chance. Mystère.

Since all these towns are on the coast, a first guess is that “mouth” could refer to the place where a mighty river reaches the sea; however, none of these MUTH towns qualify. Towns in New England are typically named for a town or city in Olde England and this turns out to be the case here. An internet search and the mystery is solved: back in England all three namesake towns are at the mouths of rivers. And the names of those rivers?  The Yare, the Fal and (this is the tricky one) the Plym. The Yare flows into the North Sea; the Fal and the Plym go to the English Channel.

Field research has shown that Cape Codders from Yarmouth and Falmouth both are blissfully unaware of all this. For Plymouth, more research is required.

In the end, Falmouth and Yarmouth might have all those beautiful beaches, but Plymouth has that rock.