Femmes Fatales of Yesteryear, Part II

In his classic poem The Ballad of Women of Time Gone By, François Villon rhapsodizes over the snows of yesteryear and the femmes fatales of yesteryear; naturally, he selects his heroines most carefully.
In the first stanza, he singles out two renowned courtesans of the ancient world.
There is Thais who followed her lover, one of Alexander’s generals, on the Macedonian march of conquest; after Alexander’s death, her paramour became Ptolemy I of Egypt – launching the dynasty of the Ptolemies that only ended with Cleopatra.
And there is Flora, the Roman beauty – so prosperous in her chosen profession and so magnanimous of spirit that, according to legend, she financed the first Floralia ceremonies in Rome: springtime flower festivals and lusty happenings, annual six-day events that lasted long into the Christian era. With the growth of Roman power, these exuberant ceremonies quickly spread throughout the empire – quite understandably, since the Floralia “were much appreciated by conquered peoples for their licentious nature,” to translate from a prudish French source.
Things get a bit comic, though, in this first stanza when Villon references Alcibiades (Archipiades in the text), who was, in fact, a man. Alcibiades was known in his day as the most beautiful youth in Periclean Athens – apparently Villon and his contemporaries took him to be a woman, so universal were the paeans to his beauty in classical writings. The less easily befuddled among us today hold Alcibiades more to account for his role in the disastrous siege of Syracuse during the Peloponnesian War between Sparta and Athens.
But then Villon leaps forward to the late Middle Ages, invoking Héloïse and Marguerite de Bourgogne – both most worthy of the poet’s attention. The actual story of Marguerite and the scandal of The Tower of Nesle involves other dazzling women – among them the two other daughters-in-law of King Philippe Le Bel, Jeanne de Bourgogne and Blanche de Bourgogne, both of whom figured in the actual historical events but were mostly left in peace by the legends and literature that followed. However, the story of The Tower of Nesle also involves Isabelle de France, King Philippe’s daughter, who was the one who aroused her father’s suspicions about the future queen Marguerite’s extra-regal activities. And this is the Isabelle known to history as the She-Wolf of France (la Louve de France)! Should she not be there among the femmes fatales of Villon’s poem?
Well, here is her story: daughter of the King of France, she was married off at the tender age of 12 to Edward II, the King of England. This was a dynastic marriage arranged to keep the peace between England and France – as Duke of Aquitaine and Gascogne, the Plantagenet Edward II controlled a large part of France but was, in feudal terms, a vassal of the King of France. It is also interesting that although the English Court at the time of these scandals was very much French, the French Salic Law never became part of English law: that law, inspired by Marguerite’s story, prevented a queen from being the reigning monarch; in England, by contrast, there were the impressive reigns of Queen Elizabeth I, Queen Anne and Queen Victoria.

But the plot thickens: the same Isabelle who had denounced Marguerite herself led a successful rebellion against her own husband Edward II, aided by her lover Roger Mortimer, baron of Wigmore and descendant of Normans who came over with William the Conqueror. Edward II was thus forced to abdicate in favor of Edward III, his 14-year-old son with Isabelle; the deposed king died imprisoned at Berkeley Castle not long after, either of natural causes or on the orders of Mortimer – historians differ. Edward III being only 14 years of age when made king, Isabelle served as Queen Regent and ruled the country – and did well, making peace with Robert the Bruce and the unruly Scots, for one thing. When Edward III did take control at age 17, he promptly saw to it that Mortimer was executed – but Isabelle, though kept away from Court, lived out her days playing the model grandmother in a style befitting the daughter, wife and mother of a king.
As with Marguerite de Bourgogne and The Tower of Nesle, the story of Isabelle de France is too good to have been passed up by the world of letters, Villon notwithstanding. And this time it was Christopher Marlowe himself who seized the occasion.
    BTW, Marlowe’s star continues to rise; the New Oxford Shakespeare now lists him as co-author of all three of the Henry VI plays – an attribution made using a sophisticated Artificial Intelligence program that determines authorship by matching phrasings against other works by the writer in question – thus pretty well settling at least one question involving Marlowe’s contributions to Shakespeare’s work.
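For the curious, the gist of such computational attribution can be sketched in a few lines of Python: authors tend to use common “function words” (the, and, of, …) at characteristic rates, so a disputed text can be attributed to whichever candidate author’s word-frequency profile it most resembles. The snippet below is a toy illustration of this principle only – the texts, the tiny word list and the function names are placeholders of our own invention, not the actual corpora or methods of the New Oxford Shakespeare team:

```python
from collections import Counter
import math

# A handful of common function words; real stylometry uses hundreds.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "with", "for"]

def profile(text):
    """Relative frequencies of the function words in a text."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(disputed, candidates):
    """Pick the author whose reference text best matches the disputed one.

    candidates: a dict mapping author name -> reference text.
    """
    p = profile(disputed)
    return max(candidates, key=lambda a: cosine(p, profile(candidates[a])))
```

With real corpora one would use much larger samples and more careful statistics (Burrows’ Delta is a standard technique), but the underlying idea – matching a disputed text’s verbal habits against each candidate’s – is the same.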
Marlowe, it seems, had a predilection for plots involving close ties between men and, true to form, in his 1592 play Edward II, he develops the story around the close and controversial relationship Edward had with his favorite, Piers Gaveston. Like Dumas’ play The Tower of Nesle, Marlowe’s play too has been made into films – most recently the 1991 film Edward II by British filmmaker Derek Jarman: here it is Edward’s relationship with Gaveston that triggers Isabelle’s alienation – although historians tend to think that it was Edward’s dalliance with his next favorite, Hugh Despenser the Younger, that drove Isabelle to open rebellion – and, indeed, in the end Isabelle did have Hugh Despenser dispatched in a most ghastly way.
Books too continue to be written on this dramatic chapter of British history; already in this century we have Paul Doherty’s Isabella and the Strange Death of Edward II (2003).
And the world of art has been there since the beginning. For a medieval image of Isabelle de France, taken from Froissart’s Chronicles, the late 14th Century history of the 100 Years War, click HERE. Admittedly, Isabelle would have been better served by a master painter of the Renaissance, but that would only have been possible some 100 or more years later. For a print from Froissart’s Chronicles of the first steps in the execution of Hugh Despenser, click HERE.
Given all this, Isabelle de France clearly deserves her place in the pantheon of femmes fatales of yesteryear. Did François Villon only overlook her because she was Queen of England and not Queen of France? We will never know, hélas. But a simple way to give her her due is to recreate the lines that are manifestly missing from Villon’s poem, inserting them into the middle of the stanza devoted to Héloïse and Marguerite; after all, the poem is dedicated to dangerous women of the past, and she certainly qualifies – fitting in so well with the other two.
There must be a circle in Hell reserved for those who tamper with great poetry (the crime of lèse-poésie – or is it lèse-poète?), but for Isabelle’s sake a poetic sacrilege is justified here, and so we propose that lines be added both to Dante Gabriel Rossetti’s Victorian translation and to Villon’s original poem.
We note with pride that the quatrains below follow Rossetti’s and Villon’s rhyme. Also like these poets, we have recourse to the language of yesteryear: in this case the arcane word mariticide which denotes the murder of a husband.
Following Rossetti:
And where is the Isabelle so intelligent
That she drove her lover to regicide
Thus becoming the Queen Regent
Thanks to her little mariticide
Following Villon:
Où est cette Isabelle si brillante
Qui poussa son amant au régicide
Ce qui fit d’elle la reine régente
Grâce à son petit mariticide
For the full Rossetti text, click HERE; for that of Villon, click HERE.
One more treat: for Villon’s ballad sung with classical syllabication by the great French chanteur Georges Brassens, click HERE.

Femmes Fatales of Yesteryear

François Villon is the 15th Century French poet who is most famous in the English speaking world today for the plaintive line
    Where are the snows of yesteryear?   (Où sont les neiges d’antan?)
which serves as a refrain in his poem Ballade des Dames du Temps Jadis. The translation of “antan” as “yesteryear” is due to the Victorian poet and pre-Raphaelite painter Dante Gabriel Rossetti, who did much to popularize Villon’s poetry in 19th Century England. The title of the poem itself also presents a challenge to a translator. Most Victorians preferred The Ballad of Women of Yore, which sounds too folksy today but is still decidedly better than Rossetti’s own The Ballad of Dead Ladies. Today, to keep it simple, it is commonly rendered in English as The Ballad of Women of Time Gone By.
The poem tells of dynamic women of history, starting with Flora and Thais, grand courtesans of antiquity. The second stanza leaps forward from the ancient world to the Middle Ages and the stories of two pairs of star-crossed lovers: Héloïse and Abelard; Marguerite de Bourgogne and Jean Buridan.
Here is the second stanza as written by Villon in what is now archaic French:
Où est la très sage Heloïs,
Pour qui fut chastré et puis moyne
Pierre Esbaillart à Sainct-Denys?
Pour son amour eut cest essoyne.
Semblablement, où est la royne
Qui commanda que Buridan
Fust jetté en ung sac en Seine?
Mais où sont les neiges d’antan!
Here is Rossetti’s version which does stay true to the rhyme, meter and feel of Villon’s poetry with archaisms of its own: ween = think, dule = agony, teen = misery:
Where’s Héloise, the learned nun,
For whose sake Abeillard, I ween,
Lost manhood and put priesthood on?
(From Love he won such dule and teen!)
And where, I pray you, is the Queen
Who willed that Buridan should steer
Sewed in a sack’s mouth down the Seine?
But where are the snows of yester-year?
But “Lost manhood” is so lame compared to Villon’s vivid “chastré” (castrated), and the important visual reference to the Benedictine abbey at St. Denis (Sainct-Denys) is just dropped – hmm, high school English teachers used to insist that poetry should always strive to be concrete and that it is prose that delights in the abstract, a point apparently missed by poet Rossetti.
Finally here is a contemporary literal English translation – from a website named the Bureau of Public Secrets:
Where is Héloïse, so wise, for whom
Pierre Abelard was first unmanned
then cloistered up at Saint Denis?
For her love he bore these trials.
And where now can one find that queen
by whose command was Buridan
thrown in a sack into the Seine?
Where are the snows of yesteryear?
In depicting them the way he does as women of intrigue, Villon casts Héloïse and Marguerite as femmes fatales much in the style of a 1940s film noir, say Gloria Grahame and Jane Greer. In fact, although we tend to think of Héloïse as a sweet young thing who was seduced by her tutor, hers is the complex story of an accomplished and ambitious woman; in Villon’s poem, she is “sage Heloïs,” but Villon’s word “sage,” which in today’s French means “obedient,” is translated into modern French by words like “savante” and into English as “learned” (Rossetti’s choice) or “wise” (the Bureau of Public Secrets’ choice). Indeed, Héloïse went on to have a brilliant career in the Church, reaching the rank of abbess and being named a territorial prelate (prelate nullius), a “bishop without a diocese” who reports directly to the Pope – the highest clerical position a nun can attain. Abelard, once her tutor and lover, suffered castration at the hands of members of Héloïse’s family, but he did manage to carry on with his theological and philosophical work, which has earned him a place among the deep thinkers of the Middle Ages.
And then there are the queen and Buridan who finish out this stanza. Jean Buridan, like Abelard, was one of the great intellectuals of the late Middle Ages – logician, philosopher, theologian, physicist, professor at the University of Paris. (There is more on him in the post on the pre-history of AI – click HERE – including the most clever proof of the existence of God of the age of Scholasticism, pace Anselm and Thomas.)
But who was the Queen who is purported to have had Buridan placed in a sack and drowned in the Seine like a cat? Historians tell us that she was Marguerite de Bourgogne, Queen of Navarre, wife of the dauphin Louis Le Hutin (the Quarrelsome) and daughter-in-law of one of the most powerful of French kings, Philippe Le Bel (the Handsome) – the king who destroyed the Knights Templar of Da Vinci Code fame, the king who successfully contested the power of the papacy, which resulted in moving the popes from Rome to Avignon for nearly 70 years. Next in line to be Queen of France and renowned for her beauty, Marguerite brought youth and gaiety to the Court. However, in today’s idiom, a tabloid headline of springtime 1314 would have read: “Marguerite de Bourgogne caught in flagrante delicto,” and the text would go on to recount how Marguerite was having trysts with young knights in a tower across the Seine from the Palace of the Louvre – La Tour de Nesle (The Tower of Nesle).
Marguerite was outed by her sister-in-law, Isabelle de France, the daughter of Philippe Le Bel, wife of the English King Edward II and Queen of England – herself renowned for her wit and her beauty but also known as the She-Wolf of France (la Louve de France). Isabelle’s suspicions about her vivacious sister-in-law were aroused when she observed that two handsome young Norman knights visiting the English Court were sporting especially stylish aumônières (alms purses) that she herself had given as gifts to Marguerite – regifters beware! For a picture of an elegant aumônière of that era, click HERE.
Marguerite was then found out by the agents of her father-in-law, Philippe Le Bel. The knights, in fact two brothers, were tortured, forced to name Marguerite and then sadistically executed. Marguerite herself was imprisoned; while she was in prison, Philippe Le Bel died and she became Queen of France when her quarrelsome husband assumed the throne as Louis X. She died not long after, either from tuberculosis or by the hand of an assassin delegated by Louis X – the latter theory bolstered by the fact that there was no reigning pope at the time to dissolve the royal marriage (the cardinals were busy in a conclave in Lyon trying to elect a French pope to be installed at Avignon, eventually John XXII): so Marguerite’s timely demise left the king free to remarry, which he promptly did, this time to Clémence de Hongrie. Not surprisingly, there is no crypt for Marguerite at the royal necropolis in the Basilica of St. Denis, just outside Paris; however, the Benedictine abbey to which Abelard was confined still stands there, despite being omitted from Rossetti’s all-too-unfaithful translation.
History and legend quickly merged and the polymath Jean Buridan, an ordained priest but a Parisian man-about-town, was soon “implicated” in the affair. The version of the story that Villon employs adds the twist that Marguerite would have her lovers, Buridan among them, tied up in sacks and thrown into the Seine as the parties fines (discreet French phrase for orgies) would wind down. In the tale, there is also a shift from young knights to university students of the Latin Quarter: Buridan gets involved when he realizes that his students are disappearing without explanation but Buridan thwarts his fate by having friends waiting in a boat to fish him out when he is thrown into the River Seine.
Villon is known as le poète maudit (the cursed poet) and much of his own turbulent life story is known only through police records: he was banished from Paris three times, the first time for killing a priest in a brawl – amazing how being banished from Paris was punishment enough, an attitude Parisians still share! So it is not all that surprising that he would show interest in a sinner like the Marguerite of these legends. But a baroque tale like this calls for the pen of an Alexandre Dumas and, indeed, with Frédéric Gaillardet, he did co-author the hit play La Tour de Nesle (1832), where the lovers’ trysts are cast as mysterious, murderous bacchanals and where infinitely more intrigue (including parricide and incest) is added to the connection between Marguerite and Buridan – the character of Marguerite is so strong that she is considered the model for the dazzling Milady de Winter of The Three Musketeers. Apropos, Dumas actually fought a duel (though with pistols) with Gaillardet over a dispute about authorship – both men survived unnerved but unscathed, and the two reconciled some years later in Paris.
The play has been made into several movies going back to the early silent era: the pioneering French director of full-length films Albert Capellani made his La Tour de Nesle in 1909 – two full years before his Les Misérables! But then over 200 movies have been made from Dumas’ work over the years – certainly a record for an author – and the most glamorous Hollywood stars have been cast as Milady: Lana Turner, Faye Dunaway, Milla Jovovich. And then there have been radio shows, TV versions, etc.; today movie trailers and full showings are available on YouTube. The story of Marguerite and Buridan has also inspired artists; for a painting of them together by the eminent 19th Century salon painter Frédéric Peyson, click HERE.
Medieval carryings-on aside, the 14th Century scandal of La Tour de Nesle had serious political consequences: the affair lent support to the thesis that women lacked the moral qualities to serve as reigning monarchs on their own. The issue came up right away with the sudden death in 1316 of Louis X, whose child with Clémence de Hongrie, Jean 1er Le Posthume, died in infancy – which made Marguerite and Louis X’s daughter Jeanne the logical heir to the throne; instead, after a proper bout of intrigue (and it didn’t help that the affair of La Tour de Nesle cast doubt on Jeanne’s legitimacy), the throne was accorded to Louis’ scheming brother, who became Philippe V le Long (the Tall). All these machinations became codified in the Salic Law of Succession (La Loi Salique), which postulated not only that a woman could not become monarch of France but that, per the Encyclopedia Britannica, “persons descended from a previous sovereign only through a woman were excluded from succession to the throne.” This issue led to constant conflicts over the succession to the throne of France – the longest of them being the 100 Years War (1337–1453). BTW, the last woman “of time gone by” that Villon alludes to in his poem is Joan of Arc herself, without whom that war would have lasted 200 years! Interestingly, the mischievous poet does not refer to her victories in battle but rather to how the perfidious English burned her at the stake:
Et Jehanne, la bonne Lorraine,
Qu’Anglois bruslèrent à Rouen
Or in the Bureau of Public Secrets’ literal translation:
and Joan, the good maiden of Lorraine
who was burned by the English at Rouen
One can see the not-so-subtle condescension shown by Villon toward Joan of Arc – rather like calling Golda Meir the “sweet girl from Milwaukee” rather than the “Iron Lady of Israeli politics.”
But getting back to the tale of La Tour de Nesle, Marguerite de Bourgogne is not the only femme fatale in the story. There is also Isabelle, the She-Wolf of France, who outed poor Marguerite; clearly the sobriquet “She-Wolf” implies that she was someone to reckon with and someone worthy of Villon’s attention. More to come. Affaire à suivre.

Post Scriptum

For Villon’s text of the complete poem, click HERE.
For Rossetti’s, click HERE.
For the Bureau’s, click HERE.

An American Back in Paris

Jet lagged beyond belief, we arrived at our daughter’s new place in the 16th arrondissement at 7 am where we exchanged hugs, then listened to instructions on how things work in the apartment until she ran off to the Ivory Coast for a week to work on an industrial insurance claim at a sugar plantation – something involving gas turbines! Because of Covid, it was our first visit there – a penthouse on the 10th floor of a modern building with two long balconies, one for the sunrise and one for the sunset (perfect for l’apéritif with a view of the Eiffel Tower as well). All this makes for simple but elegant dinner parties with friends.
Paris never disappoints. We settled in with ease and life quickly took shape: we had our caviste (wine merchant), our boucher (butcher), our poissonnier (fishmonger – with a discreet oyster bar where a Chardonnay from Burgundy was served), our fromager (cheese merchant), our boulanger (baker), our épicerie bio (organic food shop), a news stand (NY Times in the morning, Le Monde in the afternoon) and an all-purpose supermarket for everything else. Life was good. Missing, compared to the Paris we once knew, were the droguerie (the “penny store” of old with soaps, detergents, mops, brooms), the mercerie (notions store), the charcuterie (pork store), the crèmerie (milk, crème fraîche, beurre en motte aka tub butter), the horse butcher and the tripe butcher – times had changed. Another change – bicycles and electric scooters (known as trottinettes) everywhere, zipping down the new bike lanes with no regard for pedestrians or traffic lights for that matter. Yet another change – the Scandinavian au pair girls have given way to Filipina nannies.
As far as Covid was concerned, one felt reassured. People all wear masks in the streets, shops, metros and buses; to enter a restaurant, café, department store, museum or theater, the “passe sanitaire” (a proof of vaccination) is required – a QR code that a very pleasant, attractive person reads with a smartphone as you are welcomed into the establishment – Gallic charm at work. (We were able to get ours on-line from a French government web site before leaving!)
Paris life requires endless walking – up and down metro stairs, mile long treks to change trains at stops like Montparnasse-Bienvenue or Stalingrad, strolls along the Seine, hunts down narrow streets for specialty boutiques, museums, … .
Paris life is expensive but c’est la vie – €100 for apéritifs and servings of steak tartare for lunch for two at Aux Deux Magots, the café where Jean-Paul Sartre (“Hell is other people”) and Simone de Beauvoir (“One isn’t born a woman, one becomes a woman”) wrote their existentialist novels, plays and essays during the après-guerre. To boot, the café is across from the magnificent church of St Germain des Prés, the 11th Century augury of the age of Gothic cathedrals.
Paris life follows protocol. Thus Parisians are true to their fine-dining code – good restaurants are empty at 7:55 pm and packed at 8:05. Oysters as hors-d’oeuvres, still de rigueur. And as an honest citizen of Cape Cod, it behooves me to admit that the French oysters are marvelous – meatier, tastier, more briny than those of chez nous.
We made a day-trip pilgrimage to Giverny in the company of charming friends to visit the Lily Ponds and the new Musée des Impressionnismes with works by Monet, Caillebotte, Bonnard, … .
We made a weekend trip to Rheims, the heart of the Champagne country. (This has been an important town since pre-Roman times; Rheims is the old spelling, which has been kept in English, as in the Douay-Rheims Bible, but which has morphed into Reims in French.) In town, there is the magnificent cathedral where the kings of France were crowned and the caves where champagne is created through a laborious but rewarding process. The original plan was also to visit the nearby town of Troyes, a great center of commerce in the late Middle Ages – whence come avoirdupois weight and troy weight. Ironically, though, we had to reschedule that visit for our next trip because an important attraction, La Maison Rachi, the Jewish museum named for the great Troyen Talmudic scholar of the 11th Century, was not open for visits that weekend! In the following century, Chrétien de Troyes wrote early novels based on Arthurian legends such as Perceval ou le Conte du Graal, the tale of the quest for the Holy Grail – the source for Wagner’s Parsifal. For more on Troyes, Rheims and the Fairs of the Champagne Region in the late Middle Ages, there is Fernand Braudel’s magistral History of Capitalism; for this investigator’s (short) post on bubbly itself – Dom Perignon and all that – click HERE.
On the way back from Rheims, we stopped at the site outside the city of Compiègne where the Armistice was signed on November 11, 1918. The tone was religious; the simple exhibits somehow captured the depth of the tragedy of that war and the visitors shared it.
Back in Paris, your intrepid voyager was able to fill a gap in his travel portfolio – a visit to the area named for St Dennis, or more Gallically, St Denis – an inner suburb of Paris, a metro stop even. Right there on the center square is the magnificent Basilica of St Dennis and a Benedictine abbey. In the 7th Century, the good King Dagobert founded the abbey; the basilica itself, an early Gothic marvel, goes back to the 12th Century. For a delightful children’s song about Le Bon Roi Dagobert featuring the wonderful French comic actor Bourvil, click HERE.
Today the basilica is best known for its crypts, where nearly all the French kings and queens were buried from the 10th Century through to the 19th – previously St Germain des Prés had been the royal necropolis. So here were the crypts of legendary rulers such as Charles the Hammer, Pepin the Short, Louis the Fat, Robert the Pious, John the Good, Charles the Handsome, Philippe the Handsome, Louis the Quarrelsome and legendary queens such as Anne of Brittany, Blanche of France, Clementia of Hungary, Isabella of Aragon, Constance of Castile, Blanche of Navarre, …
Your intrepid voyager also had a secret agenda as the unappointed ambassador of the Cape Cod town of Dennis MA. Should the municipality ever seek to have a “sister city” in France, there is a perfect candidate, St Denis naturally. Now, based on the hagiographic literature, St Dennis himself was posted by Pope Fabian to the northern outpost of Lutetia in Trans-Alpine Gaul (the present day Paris) in the 3rd Century to serve as the city’s first bishop. Unfortunately, this was a time of persecutions under the emperors Decius and Valerian; St Dennis’ proselytizing angered the Roman authorities there and they marched him up the Street of the Martyrs (Rue des Martyrs) to the Mountain of the Martyrs (Montmartre) where he was unceremoniously beheaded. But then St Dennis joined the ranks of the cephalophoric saints and martyrs (who in all number fifty) as he picked up his head and carried it some three miles to the site of his eponymous basilica where he indicated he was to be buried. This established the location as a holy place and burial ground, soon popular with pilgrims. The abbey and the basilica followed in due course.
For a statue of St Dennis in his classic pose, click HERE.
For a more elaborate statue – from the Cathedral of Notre Dame in Paris, no less – click HERE.
The plot thickens. In Latin, the good saint’s name is Sanctus Dionysius and people from the town of St Denis are known as “Dionysiens” in French or “Dionysians” in English. Now serious towns on Cape Cod have euphonious names for their citizens (“Fleetians” in the case of Wellfleet, for example). So logically the town of Dennis should follow the good example of the French and its citizens should be known to one and all as “Dionysians.” Moreover, being associated with Bacchic revelry would surely raise the town’s profile and likely increase tourism – a consummation devoutly to be wished for beach towns. On the other hand, it might not be well advised to change the names of churches and businesses to chimeras like the Dionysian Union Church or the Dionysian Public Market.
To bolster the chances of this campaign to rechristen Dennis’ citizens, this investigator searched high and low for a small statue of the cephalophoric saint, starting with the Basilica itself and then prowling the religious-articles stores in the Saint Sulpice area – all without success. The villain in the piece seems to be the hierarchy of the Catholic Church, which spent much of the last century downplaying folk favorites like St Christopher and St Dennis as too “pagan” – Vatican II and all that. St Christopher has apparently clawed his way back into favor, so there is hope for a restoration of homage to other such beloved saints whose legends have inspired awe in the faithful over the centuries. As for St Dennis, a severed head might not be as attractive a symbol as a silver medal, but after all he has been a patron saint of France far longer than Joan of Arc and certainly deserves our respect. Kickstarter anyone?

Christian Anti-Semitism III

Fast forward to 19th Century Europe, where pseudo-scientific theories of race emerged. The term “Aryan” comes from a Sanskrit word for “noble” and it was originally used to denote the early speakers of Indo-European languages. The word was co-opted most notoriously by a French aristocrat, diplomat and writer: in his Essay on the Inequality of the Human Races (1855), Arthur de Gobineau employed the names of Noah’s sons (Japheth, Shem, Ham) to divide the “Caucasians” into three groups: the Japhetites (aka Aryans) were the master race, destined to rule, and, in Gobineau’s scheme, they were centered in Germany; the Jews were the Semites, busily contributing to the decay of Aryan Europe; the Moors of North Africa comprised the Hamites. Gobineau self-servingly declared himself to be Aryan, claiming that as a French nobleman his roots went back to the Hamite-bashing Germanic Franks: “732 and All That.” BTW, a more honest borrowing of “Aryan” is the modern name of Persia, namely Iran.

Gobineau thus added racism to xenophobic and religious anti-Judaism; the term anti-Semitism itself was only popularized around 1879 by the German race-agitator Wilhelm Marr (1819-1904), notoriously in his Zwanglose Antisemitische Hefte (Informal Anti-Semitic Notes). Marr himself started out as a left-wing activist, but the failure of the revolutions of 1848 in Europe pushed him opportunistically to the right. As the Encyclopedia Britannica and others rightly point out, the term “anti-Semitism” is a misnomer since Arabs and other non-Jewish populations are ethnically Semites. (In fact, Arabs and Jews share tribal religious practices such as dietary laws and male circumcision, rituals which likely predate Judaism itself.) But the term anti-Semitism stuck and added a murderous racial animus to anti-Judaism as anti-Semitism became a dangerous political force in Europe.

Anti-Semitism also spawned a new industry: forged documents supporting the worst conspiracy theories directed against Jews. One notorious example is The Protocols of the Elders of Zion (1903); though long known to be fraudulent, it was assigned reading for many German school children after the Nazis came to power; the forgery is still circulating widely today and has found a new life on the internet.

In the annals of suicide on a continental scale, nothing compares with what happened to Europe with the two world wars. The Holocaust, an incommensurable part of Europe’s self-destruction, led to the death of millions of its Jews and turned countless others into refugees – all the more insane in that the Jews were an integral part of the creation of modern Europe and its continuing dynamism – scientific, cultural and economic. In the après-guerre of the 1940s, the once haughty nations of Europe became client states of the US or the USSR and their once great colonial empires were in the process of dismantlement; leadership in technology had passed to new centers; German was no longer the language of Science.

Often a society will contain an energetic minority like the Jews in Europe, a minority that can play a significant role in the dynamics of the group as a whole – the Coptic Christians in Egypt, the Chaldean Christians in Iraq, the Jains in India, … . Sociologists argue that the presence of such alternative groups is actually critical to the functioning of a complex society. On the other hand, xenophobic animosity toward these groups can linger quietly for centuries only to erupt suddenly – and now we are seeing anti-Christian violence in Egypt, while religious minorities are at the brink of extinction in war-torn Iraq and Syria. But still, even after the humiliating defeat of Germany and Austria in the nightmare of the First World War, how the power of anti-Semitism could be so readily exploited as to lead to the horrors of the Holocaust leaves one silent – nothing can be said that makes sense, really, and silence is at least a sign of respect for the victims.

Indeed the history of the animus against Jews in Christian Europe has a long arc, a complex and painful story. The magisterial 1959 novel Le Dernier des Justes (The Last of the Just) of André Schwarz-Bart actually takes on the story of hostility toward European Jewry from the massacres at the time of the Crusades through to the Holocaust. The book’s theme comes from the Talmudic legend of the Lamed-Vav, the thirty-six just men without whom the world cannot survive. It was one of the two books that most influenced the young Noam Chomsky as related in a recent NY Times interview: “Astonishing book, had a tremendous impact … that kind of book you read, and you walk around in a daze for a couple of days.” (The other book was All God’s Dangers, the life story of African-American sharecropper Ned Cobb, told under the pseudonym Nate Shaw.)

Today, while “anti-Semite” is still one of the most biting accusations one can think of, it frankly risks being diluted by politicians and lobbyists who declare any and all criticism of the policies of the state of Israel to be anti-Semitism – this kind of thing muddies already turbid waters and makes dialogue all the more difficult. On the other hand, it is true that criticism of Israeli policies is often motivated by naked anti-Semitism or anti-Semitism parading as something else, exacerbating things all the more.

But Christian-Jewish relations in Europe would not be the same after the Holocaust. Already during WW II, while hiding from the Gestapo in occupied France, history professor Jules Isaac managed to write a detailed guide to rethinking Christian teachings and Christian readings of scripture – published as Jésus et Israël in 1948; an English translation appeared in 1971. In 1947 the International Council of Christians and Jews (ICCJ) organized a meeting at Seelisberg in Switzerland to formulate a program for combating anti-Semitism. There the pre-publication manuscript of Jésus et Israël was circulated and it had a real influence on the “Ten Points of Seelisberg” published by the conference. For this text, click HERE . Isaac also went on to be one of the founders of l’Amitié judéo-chrétienne de France and continued to campaign for Christian-Jewish understanding.

But Isaac was not new to the fight against anti-Semitism and not new to working with Christians. As a young man, he befriended Charles Péguy, the Catholic writer and poet, and with Péguy, he joined the ranks of the dreyfusards, the supporters of the cause of French army captain Alfred Dreyfus, a victim of brutal anti-Semitism – the struggle to exonerate Dreyfus lasted from 1894 to 1906, a long campaign that included Emile Zola’s legendary open letter J’Accuse (1898). Isaac worked with Péguy to launch the periodical Cahiers de la Quinzaine (1900-1914) – a political (e.g. it was strongly dreyfusard) and literary revue (e.g. it launched the career of future nobelist Romain Rolland). During the Great War, Dreyfus served at Verdun and Chemin des Dames and was promoted to Lieutenant-Colonel at the end of the war; Péguy was killed at the front on the eve of the Battle of the Marne; Isaac was wounded at the Battle of Verdun. For a video interview with Isaac on the topic of their friendship and Péguy’s “passion pour la vérité,” click HERE .

After the conference at Seelisberg, Isaac worked assiduously with the Catholic Church on Christian-Jewish relations – even securing audiences with Pope Pius XII (1949) and with Pope John XXIII (1960). This latter meeting proved most fruitful and, spurred on in large part by Isaac’s tireless efforts, the Second Vatican Council issued the historic document Nostra Aetate (In Our Time, 1965), which recognized that all major faith traditions had validity and shared values – including Judaism and Islam.

But still, it was only very recently in 2011 that Pope Benedict XVI exonerated the Jews from the responsibility for Jesus’ death – in his book Jesus of Nazareth: Holy Week, Benedict argues that the real villains were the Temple authorities and not all Jews, finally saying what, frankly, is obvious from the point of view of Christian theology: Jesus’ death “does not cry out for vengeance and punishment, it brings reconciliation. It is not poured out against anyone, it is poured out for many, for all.”

Judaism, Christianity and Islam are known as the Abrahamic Religions as all trace their roots back to Abraham of Ur. In a kind of Oedipal/Freudian fit, the two “sons” of Judaism have turned on the “father.” Thus Islam too has baked anti-Judaism into its scripture with numerous Koranic verses such as “Allah has prepared for them [the Jews] a grievous scourge. Evil indeed is what they have done.” However, historically, the Islamic world was much more accepting of Jews than the Christian world; in the Middle Ages Judaism thrived in the Caliphate: thus the Rabbinic Tradition was continued with the Babylonian Talmud; the definitive Masoretic text of the Tanakh (the Hebrew Bible) was compiled; the philosopher Moses Maimonides wrote The Guide for the Perplexed, … . However, in the last century, driven largely by the Arab-Israeli conflict, once great centers of Jewish life like Teheran, Baghdad, Alexandria and Cairo lost their Jewish populations. In this century, Egyptian and Syrian television have put on shows based on The Protocols of the Elders of Zion. Now virulent Islamic anti-Semitism in the basest sense of the term is the order of the day and it spreads from the Middle East throughout the Muslim world, a tragic replay of the Christian Middle Ages.

In Western Europe, classical Christian anti-Semitism is, let us say, largely quiet – even the Passion Play people at Oberammergau have sanitized their centuries-old texts and now organize bridge-building group-travel to Israel. However, in France, Muslim anti-Semitism has taken on deadly proportions; again it is complicated – aggravated by the Palestinian situation, by inveterate Muslim anger that in colonial North Africa Jews were made French citizens but Muslims were not, etc. In Germany it is smoldering to the point that the Berlin government’s admission of one million refugees from the Middle East has people worried about the resulting increase in the Muslim population there. The US overall has been a good place for Jews – per Occam’s Razor, the absence of an established religion is the simplest explanation. That is not to say that anti-Semitism hasn’t played an ugly role in American life. Ominously, the present danger comes mostly from the white-supremacy movement where anti-Semitism has taken root – e.g. the neo-Nazi chant of “Jews will not replace us” at Charlottesville in 2017, the murderous attack on the synagogue in Pittsburgh in 2018. Daniel Goldhagen was indeed right to title his book on the current threat of global anti-Semitism The Devil That Never Dies.

Christian Anti-Semitism II

In the Roman Empire there were two strains of anti-Jewish writings, those of the Christians and those of the secular Greco-Roman intellectuals. If one considers that the Jews made up 5-10% of the population of the Empire, mostly concentrated in cities (Alexandria, Cyrene, Antioch, Rome, …), the secular literature is not directed against a tiny fringe group but rather against a visible subset of the Empire’s population. And numerous non-Christian Greco-Roman texts with anti-Jewish references have been preserved. To drop a name, one can start with Cicero who, in his legal defense Pro Flacco (59 BC), blames the Jews for resisting the Roman takeover:

“… that nation has shown by arms what were its feelings towards our supremacy. How dear it was to the immortal gods is proved by its having been defeated, by its revenues having been farmed out to our contractors, by its being reduced to a state of subjection.”

Cicero’s hostility is not based on race or religion but on politics – for him, the problem is that the Jews have not proved grateful for having been dragged into the Empire.

In the Augustan era, the poet Horace mentions Jews three times in poems which are called Sermones in Latin and Satires in English. These allusions are playful but mischievous, poking fun at male circumcision, for example. In one place, Horace implies that the Jews were ardent proselytizers [Satires, 1.4]:

When I have spare time I scribble.
That is one of those venial faults
Of mine; and if you refuse to indulge it
A great band of poets would come to my aid,
For we’re in the clear majority, and, like the Jews, we’ll force you to join our gang.

This can strike today’s reader as odd in that Judaism is not associated with missionary work in the modern world; however, there is debate among scholars as to what the situation was in the ancient world – in any case, it seems logical that strict and universalist monotheism would impel believers to try to convert others out of simple human solidarity.

Judaism, as a religion, had the respect of the Roman state which was very tolerant of the traditional religious systems of the subject peoples of the Empire. Indeed, under Julius Caesar, Judaism became an officially recognized religion and was protected as such. Jews thus were not required to participate in Roman religious ceremonies and their behavior, though thought strange, was officially accepted.

However, in the first century AD, the political situation in the Holy Land became explosive – to what extent the movements of John the Baptist and Jesus of Nazareth are implicated in these developments is a topic for historians. The First Jewish War (67-74) led to the destruction of the Temple in Jerusalem in 70 AD. Moreover, after that the Jews had to pay a tax, the “Fiscus Judaicus,” to practice their religion; however, this tax did not apply to Christians meaning that there was a perceived break between them and the Jews well before the end of the First Century.

However, once Christianity frankly broke with Judaism it was no longer part of a traditional religion but rather what the Romans called a “superstitio” and, to cite the Roman historian Tacitus, a catastrophic one: “exitiabilis superstitio”! (Some other religious movements were also categorized as superstitions by the Romans such as the Druidism of the Celts.) Ironically, this break with Judaism made Christianity eligible to be the target of persecutions – which lasted into the 4th Century; mostly these were local in nature, organized by regional governors but some were organized from the very top down under Roman emperors such as Domitian, Marcus Aurelius, Trajan, Diocletian, … .

After the First Jewish War, there was the Kitos War (115-117) which was launched by Jewish uprisings in Cyrenaica (present day Libya), Egypt and Cyprus; this was followed by the Second Jewish War (132-135, also known as the Bar Kokhba Revolt).

In 135 AD with the end of the Second Jewish War, the emperor Hadrian changed the name of Judea to Syria Palaestina and that of Jerusalem to Aelia Capitolina; in addition, Jews were henceforth barred from Aelia Capitolina except on Tisha B’Av, the saddest day in the Hebrew calendar, the day when disasters that have befallen the Jewish people are commemorated. Judaism carried on in the Diaspora; in Palestine outside of Jerusalem the Palestinian Targum emerged and the texts of the Mishnah and the Jerusalem Talmud were developed.

BTW Hadrian himself was not a warmonger; on the contrary, his reign was peaceful except for the Second Jewish War; Machiavelli, the Father of Political Science, lists him among the “Five Good Emperors” of Rome – a ruler very different from Tiberius and Caligula, earlier emperors who exhibited hostility toward the Jews.

The Jews’ falling out of favor politically was mirrored in Roman literature. In the satires of Juvenal (early 2nd Century AD), the Jewish way of life is ridiculed in rougher terms. To start, Juvenal finds it especially bizarre that the Jews’ deity is immaterial and is not represented by idols [Satire XIV]:

… [The Jews] worship nothing save clouds and the divinity of heaven

He then mocks Moses, makes fun of male circumcision and criticizes the Sabbath:

… [The Jews] treat every seventh day
As a day of idleness, separate from the rest of daily life.

In his Histories (written 100-110 AD), Tacitus devotes multiple chapters to the Jews, the longest such discussion of any Greek or Roman author. He traces their origins to Crete – a theory shared by other ancient writers and bolstered somewhat by Cherethites, the Hebrew name for Cretans in the Hebrew Bible; he even posits that the name Iudaeus [“I” not “J” since classical Latin did not have the letter “J”] is derived from Mt. Ida in Crete – namesake of the Mt. Ida near Troy where Greek mythology places The Judgment of Paris. For Botticelli’s surprisingly modest version of the event which launched the Trojan War, click HERE . In his treatment of the Roman destruction of the Temple in Jerusalem in 70 AD, Tacitus defends his fellow Romans by slandering the enemy, for example

“The Jews regard as profane all that we hold sacred; on the other hand, they permit all that we abhor.”

Cicero, Juvenal and Tacitus are nastier than Horace but their anti-Judaism is not embedded in their pagan religion. Moreover, though they are treated as a group apart, the Jews are not treated as a race apart. Instead, hostility toward Jews is fueled by military conflict with Rome.

The situation in the Christian writings is different. Already in the Gospels, the charge that “the Jews killed Jesus” is enshrined as doctrine. But it doesn’t stop with the accusation of deicide; in his 2013 book on global anti-Semitism The Devil That Never Dies, Daniel Goldhagen relates that in all there are 450 anti-Jewish verses in the New Testament. To cite the formulation used by French inter-faith activist Jules Isaac in his Has Anti-Semitism Roots in Christianity? (Genèse de l’Anti-Sémitisme, 1956): Christianity “added theology to historical xenophobia.”

In the period leading up to the time when Christianity became the official religion of the Empire towards the end of the 4th Century, there was another source of Christian-Jewish controversy. The Christians strove to calibrate the story of Christ with the prophecies of the coming Messiah in the Hebrew Bible. For that they took the position that Christian authority over these scriptures had superseded that of the Jews, a doctrine known as supersessionism. Even the learned Tertullian, “the Father of Western Theology,” got involved and wrote an influential tract Adversus Judaeos at the end of the second century; Tertullian argued that the Jews had rejected God’s grace and that now the Christians were the people of God; he analyzed the Jewish scriptures to align prophecies with the life of Christ and even purported to prove that Jesus was the Messiah from dates and predictions in the Book of Daniel. (Isaac Newton did something similar to prove that the Second Coming would be in 2060 AD! Get ready!)

But as Christianity became a legal religion and then the official religion in the Empire, the rhetoric escalated with such texts as St John Chrysostom’s sermons: in Greek Κατα Ἰουδαίων, again Adversus Judaeos in Latin (386-387 AD). St John was a formidable and influential force; the epithet “Chrysostomos” means “golden-mouthed” in Greek, and he was renowned as a speaker and writer; a Father of the Church, he rose to become Archbishop of Constantinople. His writings helped to make Christian anti-Judaism even more virulent, something especially dangerous given that Christianity had just become the official religion of the Empire (380 AD). In these texts, in addition to the usual accusations about the death of Jesus, there is a lot of inflammatory trash such as

“the synagogue is not only a brothel and a theater; it also is a den of robbers and a lodging for wild beasts”

Chrysostom’s legacy itself has proved most grim: his writings stoked anti-Judaism throughout the Middle Ages and they were invoked by the Nazi machine to justify the Holocaust to Austrian and German Christians.

St. Augustine was the last great Christian writer of the Roman Empire. He coined the term “original sin” (peccatum originale, in Latin) and codified Catholic guilt forever – in all fairness the concept traces back to St Paul and, certainly in the popular imagination, to the story of the Fall of Adam and Eve. Augustine’s influence endured through the Middle Ages and on to the Reformation when reformers like Martin Luther and John Calvin drew inspiration from him. Though he says harsh things about Jews in some of his writings, his view of the role of the Jews in history was original: the Jews must survive as witnesses to the fact that they were behind the Crucifixion of Jesus; he compared the Christians to Abel and the Jews to Cain, saying that the Jews bore “the mark of Cain.” Technically, “the mark of Cain” meant that the Jews were to be protected from violence (as it protected Cain as he journeyed East of Eden), but it was not always interpreted correctly, alas. Augustine predicted that Jews would be there for the End of Time when they would at last be converted – which would fulfill a prophecy of St. Paul (Romans 11:25-27).

With anti-Judaism thus packed into the theology and lore of Christianity, it would survive the fall of Rome and be exported throughout Europe with the spread of Christendom in the Middle Ages. “The Jews Killed Jesus” remained a mantra of Christianity and the calumny survived the Middle Ages and the Reformation; with the voyages of discovery, it spread to European colonies throughout the world.

Affaire à suivre. More to come.

Christian Anti-Semitism I

In his 2013 book on anti-Semitism, The Devil that Never Dies, Daniel Goldhagen starts the list of calumnies against Jews with: “Jews have killed God’s son. All Jews, and their descendants for all time, . . . are guilty.”
Indeed, this accusation has been a deadly trigger for anti-Semitism in the Western world for nearly 2000 years. In the Christian narrative, the crucifixion of Jesus is carried out by Roman soldiers who flog him, nail him to a cross under the mocking label I.N.R.I. and cast lots for his garments – all under the authority of a Roman governor in a Roman province. But it is the Jews and not the Romans who are held responsible.
Bethlehem and Jerusalem are in historical Judea. At the time of the birth of Jesus, Judea was part of the realm of King Herod the Great who had been appointed by the Roman Senate. After his death, his son Herod Antipas ruled Galilee and Perea with the title of tetrarch; another son, Archelaus, was named ethnarch of Judea, Samaria and Idumea. When faced with Archelaus’ misrule, the Romans took direct control of these areas and named a procurator (governor) for the new Roman Province of Judea. The fifth procurator was Pontius Pilate who took office around 26 AD, assigned to the position by the Emperor Tiberius’ right-hand-man Lucius Aelius Sejanus.
The Crucifixion is dated around 30-33 AD, after which the first Christians huddled in Jerusalem around St James (called the Brother of Jesus by Protestants and St James the Lesser by Catholics). At this point in time there was a large Jewish population in Palestine and a significant Jewish Diaspora; according to A Concise History of the Jewish People (2005) by Naomi E. Pasachoff and Robert J. Littman: “By the 1st century CE perhaps 10 percent of the Roman Empire, or about 7 million people, were Jews, with about 2.5 million in Palestine.”
At the time of Christ, the Jewish population of the Empire consisted of the Hellenized Greek-speaking Jews of the Diaspora, the Aramaic-speaking provincial Jews of Palestine and the Hebrew-speaking rabbis and priests in Jerusalem. These groups also had their own literatures. The Hellenized Jews had their own somewhat bowdlerized version (click HERE) of the Hebrew Bible in Greek, The Septuagint, to which they added new books such as The Wisdom of Solomon (non-canonical for Jews, apocryphal for Lutherans and Anglicans, deuterocanonical for Catholics). In the countryside of the Holy Land, the Targums (Aramaic commentaries on biblical passages, doubtless an important part of the education of John the Baptist and Jesus himself) were recited in the synagogues. And in Jerusalem, there was the learned oral and written rabbinical literature in Hebrew.
Starting with the synagogues of the Diaspora, St Paul and other missionaries spread the Good News throughout the Greco-Roman world; Christianity grew quickly. The Roman historian Tacitus is the first pagan writer to reference the Crucifixion; in his Annals (circa 116 AD) he recounts how Nero placed the blame on the Christians for the great fire that ravaged Rome in 64 AD; Tacitus approvingly assigns the responsibility for Jesus’ Crucifixion to the Roman administration in Judea: “Christus, from whom the name [Christian] had its origin, suffered the extreme penalty during the reign of Tiberius at the hands of one of our procurators, Pontius Pilatus.” Tacitus lauds Pilate further saying this execution checked “the evil” that is Christianity, even if only for a time.
But in Christian texts, responsibility was systematically shifted from the Romans to the Jews. Already in his first epistle, chronologically the first book of the New Testament, 1 Thessalonians 2:14-16 (circa 50 AD), St Paul attributes the death of Jesus to the Jews (NIV – New International Version):
    “For you, brothers and sisters, became imitators of God’s churches in Judea, which are in Christ Jesus: You suffered from your own people the same things those churches suffered from the Jews who killed the Lord Jesus and the prophets and also drove us out. They displease God and are hostile to everyone in their effort to keep us from speaking to the Gentiles so that they may be saved. In this way they always heap up their sins to the limit. The wrath of God has come upon them at last.”
Some critics argue that this passage was not written by St Paul himself but is a later interpolation: for one thing the passage is not consistent with 1 Corinthians 2:8 where he puts the blame on people of authority (NIV): “None of the rulers of this age understood it, for if they had, they would not have crucified the Lord of glory.”
It is also thought that the phrase “The wrath of God has come upon them at last” refers to the destruction of Jerusalem which took place in 70 AD. And why does Paul single out the Jews as persecutors of the Christians when the Romans would have been the ones persecuting Christians in Judea at that time – in fact that had been Paul’s own job according to the Acts of the Apostles! On the other hand, Acts 6 (written 50 years after the event) also recounts the story of the Deacon Stephen who was stoned to death by a Jewish mob in Jerusalem, making him the first Christian martyr, a horror witnessed by St Paul.
The historical Jesus was at the head of a movement that was begun by John the Baptist. The connection between Jesus and John is a key part of the Christian story. The Gospel of Luke takes care to say that they were even cousins – charmingly relating The Visitation, the trip of the pregnant Mary to visit her cousin Elizabeth who, though beyond child-bearing age, too was miraculously pregnant, herself with John. For a painting of Jesus and John together as infants by the Spanish baroque artist Bartolomé Esteban Murillo, click HERE .
Jesus is described as becoming part of John’s mission with his baptism in the River Jordan in Mark, Matthew and Luke. That John’s movement had a political dimension is attested to by his arrest and beheading at the hands of the Roman vassal Herod Antipas – a tale so rich in symbolism as to inspire paintings by artists from Titian to Redon, a play by Oscar Wilde and an opera by Richard Strauss. For Redon’s painting of the result of Salome’s Terpsichorean efforts, click  HERE .
Jesus too meets a political fate in that, in the Roman world, crucifixion was reserved for enemies of the state or perpetrators of heinous crimes. Certainly, driving the money-changers from the Temple in Jerusalem had political implications. Some argue that Jesus was an outright political activist – as in the PBS documentary The Last Days of Jesus and the recent book Zealot by Reza Aslan. But this theme is not new: by way of example, in Senior Year Theology at Fordham University in 1962, this writer presented a most modest analysis of this kind in a term paper which received the grade of B with the comment that applying “class warfare” to Jesus’ mission was both nothing new and nothing of interest.
The Gospels present the Passion as a one-week set piece constructed to put the blame for Christ’s death on Jews. Interestingly, in that PBS documentary, the strong suggestion is made that the Gospels are indeed telescoping a longer period and myriad political intrigues into one hyper-charged week, intrigues involving powerful figures such as Sejanus and Herod Antipas – Sejanus’ involvement in Jesus’ story has also been the stuff of novels since the 19th century.
In the first century, relations were not always good between Jews and Gentiles. In fact, Tiberius himself banished Jews from Rome in 19 AD; the Jewish philosopher Philo of Alexandria reports on Jewish-Gentile violence there in 40 AD, an event followed closely by the emperor Caligula’s failed blasphemous attempt to have a statue of himself erected in the Temple in Jerusalem; soon after, in his account of a legation to the emperor (Legatio ad Gaium, XVI.115), Philo wrote that Caligula “regarded the Jews with most especial suspicion.” It is important to note that these conflicts are not rooted in Roman religion. In fact, the Romans were very tolerant of the traditional religious systems of the subject peoples of the Empire (better for keeping the “pax” in the Pax Romana).
Things political and military came to a head in the Holy Land with The First Jewish War (67-74). The destruction of the Temple in Jerusalem by Titus took place in 70 AD and the conflict itself continued on until the Masada campaign of 73/74 AD. Certainly this was a point in time when the Gentile Christians and the Hellenized Jewish Christians of the Diaspora would have wanted to distance themselves from the Jews and ingratiate themselves with the Romans and Greeks.
The first Gospel written is that of St Mark, himself a Jew, and it is dated in the decade following the destruction of the temple – an event to which it refers with Jesus’ ominous prediction that the Temple would be left with “not a stone upon a stone” (Mark 13:2). Mark’s version of the week of the Passion contains all the basic elements of the story laying the blame for the Crucifixion not on the Romans but on the Jews – more precisely, on the chief priests and the mob in Jerusalem.
Mark artfully places responsibility not on the Roman procurator Pilate but on the Jews, one dramatic incident being the story of Barabbas. It just happened that the Friday of the Crucifixion was the one day of the year when the Roman governor would offer the multitude in Jerusalem the privilege of pardoning a prisoner who was about to be executed. (No historical evidence outside the Gospels for any practice of this kind has been found.) The prisoner Barabbas, who was also awaiting execution, was guilty of the worst sorts of crimes, but when Pilate proposed to the crowd that he pardon Jesus and not Barabbas, the animosity of the Jews toward Jesus showed itself (Mark 15:11-14, NIV):
    But the chief priests stirred up the crowd to have Pilate release Barabbas instead. “What shall I do, then, with the one you call the king of the Jews?” Pilate asked them. “Crucify him!” they shouted. “Why? What crime has he committed?” asked Pilate. But they shouted all the louder, “Crucify him!”
John 18:40 makes it even more dramatic, having the Jews cry “Give us Barabbas” – haunting and chilling, making for a powerful moment in Passion Plays.
Matthew 27:25 adds the most damning cry of all
    “His blood be on us and on our children!”
Matthew also inserts the “thirty pieces of silver” into the story of Judas’ betrayal of Jesus, further damning the Jews.
John 19:5 adds that dramatic sadistic moment where Pilate says “Ecce Homo” (“Behold the Man”) as he displays the humiliated Jesus, badly scarred and beaten, wearing the crown of thorns – a riveting scene portrayed on canvas by Bosch, Titian and many others. Pilate says “I find no basis for a charge against him” but the Jews cry out “Crucify him, crucify him.” John also has Jesus and Pilate engage in a philosophical exchange about the nature of “truth,” further distancing Pilate from responsibility for the Crucifixion.
Luke adds yet another sidebar to the effect that from the outset Pilate didn’t want to condemn Jesus: Pilate tried to hand the prisoner Jesus over to Herod Antipas since it was in Galilee and not Judea that Jesus had been most active; but Herod Antipas already had the blood of John the Baptist on his hands and so decided to pass.
All of these maneuvers to portray Pilate as reluctant to execute Jesus were effective to the point that, in the Ethiopian Church, the teaching is that Pilate later became a Christian, a martyr and a saint!
In 1 Corinthians, Paul lays blame for the Crucifixion on “the rulers of this age.” Perhaps Mark thought that his text would be understood too as laying blame for Jesus’ death on the Temple leadership in Jerusalem and not on the rest of his fellow Jews. However, his account of a “mob” of Jews in Jerusalem spurred on by the “chief priests” crying out for blood, in the end, implicated all the Jews of the Roman Empire. No distinction was made by the Christians between that mob and the Jews of Galilee who rallied to Jesus’ mission, the Jews of Jerusalem who hailed Jesus as he entered the city on Palm Sunday or the Jews of the Diaspora, many of whom were among the first to embrace Paul’s teachings.
It is unlikely that Mark authored his version of the Passion which focuses on Jewish guilt all by himself – it must already have been circulating in the growing Christian world. The accusation that “the Jews killed Jesus” must have taken shape in that forty-year period between the Crucifixion and the destruction of the Temple by Titus; given that passage above from 1 Thessalonians, the accusation would even appear in the very earliest extant Christian text. One can speculate that early on this calumny might have even been a recruitment tool that played off existing anti-Jewish feelings in the Empire. Anti-Jewish feelings, yes – but not part of the Roman religious system. In fact, Judaism, as a religion, had the respect of the Roman state having been declared an official religion already under Julius Caesar, shortly after the Roman takeover of Palestine (circa 45 BC). But the doctrine “The Jews killed Jesus” embeds hostility toward Jewish people as a race into Christian theology itself; it is this basis in religion that has made anti-Semitism so exportable – and it would soon be exported to lands in Northern and Central Europe that were never even part of the Roman Empire and then later across the world.
More to come. Affaire à suivre.

Capitalism to the Rescue

The New York Times podcaster Ezra Klein recently interviewed Noam Chomsky on a range of topics including the climate crisis. Capitalism, with its industrial revolutions and lust for economic growth, is at the root of the problem, and Chomsky addressed the question of whether the climate crisis could be handled successfully if the capitalist system stayed in place. He insisted that there isn’t time to make fundamental political and social changes in the US or elsewhere; by default, then, capitalism will have to be part of the solution, mustering the power and resources needed in the time available. For the full text, click  HERE .
This wasn’t the first time that someone on the Left turned to capitalism to move society in the right direction. Marx himself wrote that capitalism and its power to innovate would still be necessary for a period of time to push technological, industrial and organizational progress and provide the tools necessary for the transition to communism and the new centralized economy. Even Lenin employed this logic as the basis of his New Economic Policy (NEP); indeed, in his 1918 text Left-Wing Childishness, he declared “Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.” Lenin wrote this after returning to Russia from his exile in Switzerland: after the fine cigars from Davidoff’s and the other perquisites that he enjoyed in bourgeois Zurich, Russia was just proving too primitive for him, it would seem.
A more recent example of this phenomenon of people on the left calling upon capitalism to work on the world’s problems is Accelerationism, a movement that emerged from the work of disillusioned Marxist-oriented French philosophers of the late 20th century – disillusioned by the realization (post May ’68) that capitalism could neither be controlled by existing political institutions nor supplanted by the long awaited revolution. The paradoxical response in the 1970s by Deleuze, Guattari, Lyotard and others, then, was to call for the development of technologies and other forces of capitalist progress to bring society as rapidly as possible to a new place – they quote Nietzsche: “accelerate the process.”
Caveat Lector: The term “accelerationist” was co-opted recently by some white supremacist groups in the US in an attempt to give an intellectual veneer to their plot to hasten the collapse of society.
The bad news is that it is exactly capitalist growth that has brought us to the current climate crisis. But then there is that old proverb “It takes a thief to catch a thief”: the good news is that capitalism historically has excelled at using technology to change the world, whatever the environmental or human cost.
In fact, linking science and engineering to industry and the economy has been the driving force of capitalism for 300 years. The Industrial Revolution began in England in the early 18th century, launched by iron, coal and steam power. Historians of capitalism like Fernand Braudel point out that steam power and other sophisticated technologies were already in place earlier in history, in ancient Alexandria for example, but that the link was never made between these technologies and commerce and industry. In fact, up until the Industrial Revolution, technologies were typically applied to things military: even Archimedes and Leonardo worked on weapons systems, and the term “engineering” meant “military engineering” until the 19th century, when “civil engineering” was introduced. It was only in the second half of the 19th century, in Bismarck’s Germany, that research science itself became a driver of capitalist progress; in particular, the transfer of knowledge from universities to commerce began there with the chemical industry, and other fields followed. To boot, companies actually began hiring scientists minted by the modernized German university system and began establishing their own labs. The result was that by 1914 Germany had become both the world’s leading industrial country and its leading scientific country – the internal combustion engine and the diesel engine, quantum mechanics and relativity, etc. (For scholarship, see H. Braverman: Labor and Monopoly Capital.)
Aside: Interestingly, in the interview, Chomsky remarks that the power of this model of transfer of open university research to industry is threatened by recent changes. Traditionally, to cite Chomsky, “[in university labs] people are working 80 hours a week. But it’s not to make money. They can make a lot more money elsewhere. It’s because of the excitement of the work. The challenge of solving problems. That’s what drives people. … [But] in the early ’80s, government laws were changed so that universities could get patents and researchers could get patents on the work that they were doing. OK. That had a cheapening effect. It meant that you really were imposing a structure in which people were working in order to make money, not to solve problems. And I think I don’t know how to measure exactly, but my impression is it had a cheapening effect on the nature of the university system.” This rings true, especially given that the hallmark of the Scientific Revolution of the 17th century was the open publication and distribution of newfound knowledge – in contrast to the esoteric, guarded knowledge of the alchemists.
So, given the special power of capitalism to drive progress – something Marx, Lenin and the Accelerationists all had to concede – it is natural for governments to turn to capitalism itself to save us from planetary catastrophe. After all, we are in a period where science, technology and capitalism are in almost perfect sync.
However, we are also in a period in the US where the federal government is weak and ineffective. This poses a real dilemma, since government leadership will almost certainly be required going forward – and we are so far from the times of FDR, alas.
The military aside, since the Reagan presidency “starve the beast” has been the conservative battle cry to shrink government. In the process, the government has become a wimp and has sold the American people short. Deregulation led to the Savings and Loan crisis of the 1980s and the Wall Street crash of 2008 – along with government bailouts, of course. Lax, even non-existent, anti-trust enforcement has led to concentration in industry; reduced taxation has led to infrastructure collapse, to the point that the American Society of Civil Engineers gave the country’s infrastructure a grade of C minus in 2021 – admittedly up from a D, the rating four years before. The biased Supreme Court is an “accessory to the crime,” constantly sapping the government’s ability to run the country: in the last 15 years alone, it has stymied attempts at gun control with its lethal decision on “gun rights” (District of Columbia v. Heller, 2008); it has made big money the only player in town when it comes to political funding (Citizens United); it has ended the federal oversight of state election procedures required by the Voting Rights Act of 1965 (Shelby County v. Holder, 2013); and the list goes on.
The one sector of government favored by conservatives is the military; it is also the only area of government with much credibility left, which means that more and more problems get turned into military problems as the military budget itself grows. Rather than serving the nation’s interests first, in foreign policy the military has continued to be used in the service of Big Industry – the invasion of Iraq under George W. Bush and Dick Cheney comes to mind, and how the war was going to pay for itself with Iraqi oil. Industry also exercises undue influence through the Military Industrial Complex, which has become so powerful that members of Congress won’t criticize the military budget because their states are home to defense contractors. But to their credit, commanders are aware of climate change and are taking steps to deal with its potential impact on military operations; this could prove important.
If G.W. Bush’s presidency was tragedy, the Trump presidency was farce. Starting with massive tax breaks for corporations and the wealthy, this regime did everything it could to weaken the government further, one of the more bizarre and calamitous moves being the cancellation in 2018 of the National Security Council Directorate that was charged with preparing for the next pandemic.
All of this represents a tremendous shift of power and authority from the government to the private sector and the military. Today the Supreme Court is the only branch of government that can wield its power effectively; Congress can do nothing except threaten not to vote for the budget; the last two presidents have been reduced to endless executive orders to get anything done.
It is possible, though, that Big Capital and Big Industry will come to see that in the long run they can make even more money and have even more control if they help solve the climate crisis. But given Big Energy’s long campaign of climate change denial and the example of the tobacco companies denying the risks of smoking, it is hard to be optimistic. On the other hand, there is the success of Tesla, which is now worth more than the next six car companies combined, and the commitment of automobile companies to go electric in the coming decade. Then too there has been the surprising growth of solar and wind power in the last couple of years. Still the question is whether it will be possible for the federal government, in its current weakened state, ever to launch an effective “Green New Deal” with programs, incentives and legislation. That being said, adding some less scholastic justices to the Supreme Court and ending the filibuster would be steps in the right direction.
In China, the government is firmly in charge and could redirect its capitalist economy should it choose to; a program to become the world’s largest producer of electric vehicles is already in place – but at the same time, China is on a “coal spree” according to the Yale School of the Environment. So the Chinese are not likely to set a good example in the short run: it looks like the plan is to run environmentally friendly vehicles on electricity from coal-fired plants!
Another source of pessimism is Paddy Power, the legendary Irish betting parlor where they will make book on almost anything, including climate change. Simply put, you have to give heavy odds to bet that things will continue to worsen, and any bet on improvement is a real long shot.
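For readers unused to bookmakers’ notation, the pessimism in a betting line can be made precise with a little arithmetic. Here is a minimal sketch – the odds below are hypothetical illustrations, not actual Paddy Power quotes, and `implied_probability` is a name introduced here: fractional odds of a-to-b mean a bet of b wins a, so the implied probability of the outcome is b/(a+b).

```python
from fractions import Fraction

def implied_probability(win: int, stake: int) -> Fraction:
    """Implied probability of fractional odds win/stake
    (a bet of `stake` returns `win` in profit if it hits)."""
    return Fraction(stake, win + stake)

# "Heavy odds" that things worsen, e.g. 1/10 (bet 10 to win 1):
print(float(implied_probability(1, 10)))   # ≈ 0.909 — near certainty

# A "real long shot" on improvement, e.g. 20/1 (bet 1 to win 20):
print(float(implied_probability(20, 1)))   # ≈ 0.048 — barely a chance
```

The bookmaker’s margin aside, the shorter the odds you must give, the closer to certain the market believes the outcome to be.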
But the US government has surprised before and capitalists themselves might just wake up and realize that the only way to save capitalism is first to save the planet. Chomsky himself holds out hope: “We know how to do it. The methods are there. They’re feasible.”

Biden and the Court

On April 9th, 2021, President Joseph Biden ordered that a commission be established to study the need for structural reform of the United States Supreme Court – in particular, with respect to the actual number of justices on the Court, to term limits and to the Court’s unchecked ability to declare acts of Congress unconstitutional.
There have long been reasons to be unhappy with the Supreme Court. It has consistently ruled in favor of the powerful, with its history of decisions designed to trample on the rights of minority populations and working people: all very consistent with the view of the sophist Thrasymachus in Plato’s Republic: “everywhere there is one principle of justice, that is the interest of the stronger.”
By way of overview, there is John Marshall’s decision in Johnson v. McIntosh (1823), where he declared that Native Americans held no true title to their lands, merely a right of occupancy. For this, the Marshall Court resorted to a papal bull (yes, a pronouncement by the Pope in Rome) to justify its decision: in Romanus Pontifex (1455), Pope Nicholas V ordered the Portuguese King Afonso V to “capture, vanquish, and subdue the Saracens, pagans, and other enemies of Christ,” to “put them into perpetual slavery,” and “to take all their possessions and property.” This is the basis of the Discovery Doctrine, which the US inherited from England and which the Marshall Court made part of American law; it was even invoked recently in an opinion, written by RBG no less, in a 2005 ruling against the Iroquois, City of Sherrill v. Oneida Indian Nation of New York.
A most catastrophic Supreme Court ruling was the Dred Scott Decision of 1857. This decision dictated that African Americans could not be citizens and that slavery could not be banned in the territories (the right to private property and all that). Impressed with the Court’s authority, James Buchanan (considered the worst president ever until recently) naively thought that this decision settled the burning issue of slavery once and for all – instead it led quickly to the Civil War.
Reconstruction and the 14th Amendment were quickly gutted by post-bellum decisions in the Slaughter House Cases (1873) and the Civil Rights Cases (1883), which, among other things, weakened the power of the Equal Protection Clause to protect the civil rights of African Americans. Workers’ rights were gravely impaired by In Re Debs (1895), which authorized using federal troops to break strikes. And then in 1896, Plessy v. Ferguson upheld segregation in railroad cars and, by extension, other public facilities with the tag “separate but equal.” In 1905, the Court ruled 5-4 against the state of New York in one of its more controversial decisions, Lochner v. New York; appealing to laissez-faire economics this time, the majority ruled that the state did not have the authority to limit bakery workers’ hours to 10 hours a day, 60 hours a week, even if the goal was to protect the workers’ health and that of the public.
The story of the case Federal Baseball Club v. National League (1922) would be ludicrous were it not so serious. In the decision, written by celebrity jurist Oliver Wendell Holmes Jr., it was declared, against all logic, that Major League Baseball was not interstate commerce and so not subject to anti-trust laws – thus validating the Reserve Clause which made professional ballplayers serfs and the Lords of Baseball feudal barons. The situation continued until the courageous St. Louis centerfielder Curt Flood’s case reached the Supreme Court in 1972; naturally the Court, true to its code, ruled against Flood and for Major League Baseball (again serving the interests of the powerful as per Thrasymachus), but the case did force open the door for player-owner negotiations and the end of the Reserve Clause. BTW, it was Justice Holmes who famously said that freedom of speech did not give one the right to falsely cry “fire” in a crowded theater.
The 6-3 decision in Korematsu v. United States (1944) upheld the government’s internment of American citizens of Japanese descent during WW II. From the beginning it was roundly denounced as racist – nothing of the sort was directed against white Americans who supported the vocal pro-Nazi German American Bund. Indeed Columbia University Law Professor Jamal Greene ranked Korematsu among the four “worst-case” Supreme Court rulings in a 2011 Harvard Law Review article (along with Dred Scott, Plessy and the Lochner labor-law case).
Only the Warren Court of the 1950s and 1960s had a consistent, constructive philosophy on issues like civil rights; this Court both made landmark decisions like the unanimous Brown v. Board of Education (1954) and made follow-up decisions upholding legislation such as the Civil Rights Act of 1964 and the Voting Rights Act of 1965. On the negative side, this Court’s activism set the stage for overt judicial activism on the part of a conservative Court majority, and a spate of dangerous 5-4 decisions followed as the Court shifted right.
Fast forwarding to the 21st century and the Roberts Court: in 2008 there was the disastrous 5-4 District of Columbia v. Heller decision, which has fueled mass shootings and galvanized the paranoid “militia movement.” Here Justice Scalia resorted to Jesuitical trickery in overturning the long-standing interpretation of the 2nd Amendment. In the name of “originalism,” claiming to know what James Madison really meant, Scalia rewrote the centuries-old reading of the Amendment and stipulated that the phrase “A well regulated Militia, being necessary to the security of a free State” was meaningless – thus paradoxically making “originalism” a cover for a radical interpretation of the text and creating new “2nd Amendment rights” out of thin air. The decision has made the US and its gun violence a disgrace among nations.
The Roberts Court has continued this tradition of decisions that short-circuit American life and assault the citizenry. For example, the Court has dramatically increased the role and power of Big Money in American elections while limiting voting rights of citizens. The decision in the Citizens United case (2010) abrogated the McCain-Feingold election reform legislation of 2002 and made Big Money supreme in matters electoral – in the process it overturned two rulings of the preceding Rehnquist Court. (More recently, the Court again overturned its own precedent in yet another 5-4 decision and dealt labor unions a severe financial blow in Janus v. AFSCME (2018).) In Shelby County v. Holder (2013) the Roberts Court, showing no respect for Congress, vitiated the Voting Rights Act of 1965, thus unleashing a wave of measures in state legislatures to restrict voting, a process which has taken on even greater momentum in the wake of 2020 Democratic victories in Georgia and other Red states. The net effect of these decisions has been a disaster for American Democracy. To make matters worse, from the oral arguments, it looks as though this Court will maintain its indefensible record on voting rights in the case currently before it concerning ballot collection in Arizona.
Indeed, today the Court presents a structural problem for American democracy, as nine unelected political, ideological officials with life-time appointments reign over the legislative and executive branches of government – not only are they not elected to the Court, none of today’s nine sitting justices has ever even held elective public office! The power that has devolved on the Supreme Court was not anticipated by the framers of the Constitution. For example, in his essay Federalist 78, Alexander Hamilton dismissed worry that the Court could ever become even nearly as powerful as the two other branches of government: as this master of metonymy put it, the Court would have “no influence over either the sword or the purse.” In fact, this position is much that of Montesquieu, the Enlightenment philosopher who formulated the theory of the separation of powers among three branches of government.
The blatantly partisan stunt of then Senate Majority Leader Mitch McConnell, which denied Barack Obama a Supreme Court nomination, made the Court more political than ever. All was made worse with the theatrical, rushed nomination and confirmation of Amy Coney Barrett just days before the 2020 election. It is this overtly political tampering with the make-up of the Court that has made the composition of the Court a legitimate issue for Biden.
Ironically, the person most discomfited by the new 6-3 conservative majority is Chief Justice Roberts, who can no longer serve as a swing vote to mitigate the conservative majority’s overreaching – Exhibit A: Roberts voted with the minority in the recent (Apr. 9, 2021) 5-4 opinion which overruled the Ninth Circuit Court of Appeals to allow religious gatherings in California to violate state pandemic social distancing restrictions. This is not the first time this Court, on an ever more slippery slope, has shown especial consideration for religious institutions – witness the Burwell v. Hobby Lobby decision (2014). The Court has indeed changed with time; justices once proclaimed that freedom of speech did not give one the right to shout “fire” in a crowded theater lest one endanger one’s fellow citizens; now, however, the right to endanger your fellow citizens with potential super-spreader events is guaranteed by freedom of religion!
Mathematically speaking, the fact that so many of these controversial Court decisions have for years now been 5-4, with the same justices in the majority, implies that it is ideology and not jurisprudence that is the driving force behind them: if the justices were truly looking at the letter and spirit of existing law and precedent, the majorities would vary far more from case to case and many decisions would be unanimous.
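The arithmetic behind this claim can be sketched with a toy model – a back-of-the-envelope illustration under an obviously false assumption (each justice voting independently and at random, like a coin flip), not a serious statistical analysis. Under that assumption, some 5-4 split is actually common; what is astronomically unlikely is the same five-justice bloc forming the majority case after case. It is the recurrence of the identical bloc that betrays ideology.

```python
from math import comb

JUSTICES = 9
PATTERNS = 2 ** JUSTICES  # 512 equally likely vote patterns in the toy model

# Chance that SOME 5-4 split occurs (either five-justice side in the majority):
p_any_5_4 = 2 * comb(JUSTICES, 5) / PATTERNS   # 252/512 ≈ 0.49 — quite common

# Chance that one FIXED bloc of five justices forms the majority in a case:
p_fixed_bloc = 1 / PATTERNS                    # 1/512 ≈ 0.002

# Chance the SAME fixed bloc is the majority in n consecutive cases:
def p_same_bloc(n: int) -> float:
    return p_fixed_bloc ** n

print(p_same_bloc(5))   # ≈ 2.8e-14 — effectively impossible by chance
```

Even five consecutive identical 5-4 line-ups are, on this model, a one-in-tens-of-trillions event; the actual run of such decisions is far longer.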
Getting back to Biden’s commission, it does have some history on its side and there is the fact that Thrasymachus’ simplistic view of justice was rebutted by Socrates himself.
The number of justices is not given in the Constitution; rather, it is a matter for Congress. Congress has adjusted the number of justices several times to keep an alignment with the number of circuit courts – setting it at 6 at the outset in 1789, making it 7 in 1807, 9 in 1837 and 10 in 1863; it was reduced to 7 in 1866 in order to prevent Andrew Johnson from making appointments to the Court, but put back to 9 in 1869 when U.S. Grant became President. Today there are 13 circuits, which makes for a nice target number for the commission. But this would be “packing the court” and will probably prove a non-starter, alas.
Membership on the Supreme Court has always been a life-time appointment, which is not inconsistent with Article III of the Constitution, which proclaims that federal judges shall “hold their office during good behavior.” Term limits of some kind (18 years per the newly proposed Supreme Court Term Limits Act) would introduce regular turnover that could rejuvenate the Court. They would also counter the Republican strategy of appointing unusually young right-wing jurists to the Court. “Term limits” has a nice ring to it and could conceivably attract support; the Supreme Court Term Limits bill tries to finesse Article III by having justices whose Supreme Court tenure expires be reassigned to a federal circuit court; but any change of this kind could well require an amendment to the Constitution itself, no easy task.
Today the Congress can pass no law without starting a lengthy process of judicial review. This is not the case in the UK, where a law means what Parliament says it means, nor in France, where the Conseil Constitutionnel is kept weak lest the country become like the US and fall hopelessly into a dreaded “gouvernement des juges.” Judicial review of acts of Congress is not spelled out in the Constitution, but it is mentioned in Hamilton’s Federalist 78 as something needed for a balance of power in government. In any case, the Court ascribed to itself the right to declare an act of Congress unconstitutional in the landmark decision Marbury v. Madison (1803). In logic worthy of the sophists of Plato’s time, John Marshall affirmed that the plaintiff Marbury was right but ruled against him by declaring Section 13 of the Judiciary Act of 1789 unconstitutional because it would (according to Marshall) enlarge the authority of the Court beyond that permitted by the Constitution. But the Constitution does not authorize the Court to declare an act of Congress unconstitutional either!
Limiting the Court’s authority to declare legislation unconstitutional would serve at least one important purpose – revivifying the Legislative branch of the government. Today with the Imperial Presidency and activist Courts, Congress has been relegated to an inferior status. Formulating such a restriction would require clever legal argument worthy of Marshall and Scalia; getting it through Congress would need all the skill of Henry Clay and LBJ – and it might even require an amendment to the Constitution which would need someone as persuasive as James Madison, the magician who managed to get ten amendments ratified.

Power in the US VII: The End of History

Early on in the George H.W. Bush presidency came the collapse of the Soviet Union and the end of the Cold War; triumphalism was in the air: liberal democracy and capitalism had won. Writing in 1989 in the journal The National Interest, political scientist Francis Fukuyama published an article, “The End of History?,” in which he famously wrote
“What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.”
The last opposition to laissez-faire capitalism had fallen. This was a triumph hundreds of years and innumerable conflicts and conquests in the making, tracing back to the birth of capitalism itself – to the Medici bankers of Renaissance Florence and to the London Royal Exchange of the mid-16th century; the latecomer Amsterdam Stock Exchange only dates from 1602.
But rather than sharing the fruits of victory with the people that had made this triumph possible – from the soldiers who fought the wars to the little people who had prayed the Rosary night after night to combat godless communism – Big Capital dropped them like an old shoe. There would be no “peace dividend” for the masses.
But underlying all this was the fact that the nature of Capitalism in America was itself changing, shifting from Industrial Capitalism to Financial Capitalism. Industrial Capitalism is classical capitalism as characterized by the private ownership of the means of production – thus factories, shipyards, transportation, stores, etc. Here is a definition of Financial Capitalism from Wikipedia: “Profit becomes more derived from ownership of an asset, credit, rents and earning interest, rather than productive processes.”
All this would be part of a dramatic shift in American life – the bulk of the population would no longer be important to the wealth of the nation. JFK famously exhorted Americans to “ask what you can do for your country.” Well, right now the answer has to be “not very much,” and that’s a problem. Indeed, with the end of the Cold War, Big Industry quickly moved production to lower wage areas – areas with sharply lower environmental and worker protection standards – leaving labor unions without members. The family farm was a thing of the past and agribusiness was firmly in the hands of giant companies like Archer-Daniels-Midland, Tyson, Cargill, Monsanto, et al; rather than the yeoman farmer, agribusiness would largely employ immigrants (documented and undocumented) for field work and also to staff the packing plants, slaughterhouses and other downstream processing points.
The gig economy was launched. Employment became “precarious.” Corporations and the wealthy paid less and less in taxes, off-shore tax havens multiplied, profits were housed overseas to avoid American taxes (Apple is particularly adept at this). Companies were more multi-national, less and less tied to a given nation.
Indeed, the abandonment of the masses quickly turned sadistic: climate change denial, air pollution, water pollution, mountain top removal, reduced sperm counts, damaged ova, environmental destruction, absence of recourse when wronged by a corporation, rape of federal lands, crumbling infrastructure, diminishing access to proper health care, the increasing expense of higher education and ruinous student loans, failure to deal with the opioid crisis, growing homelessness, increased incarceration to the point that the US has risen to first in the world in the rate of imprisonment of its citizens, surpassing even that of Apartheid South Africa – and on and on. In particular, the white working class, both urban and rural, was more and more marginalized, to the point where in recent years many saw a savior in the coming of Donald Trump. And this overt aggression against the people has not abated: the death-dealing water crisis in Flint, Michigan; the tragic breakdown of the power grid in Texas; the deadly collapse of the 8-lane I-35W bridge in Minneapolis, a bridge no less over the nation’s pride, the Mississippi River, The Father of Waters. And then there have been the failure to address mass shootings and the response to the Covid-19 crisis – half a million deaths, and counting.
One thing that made this break with the people all the easier for Financial Capitalism was that masses of young men were no longer needed to fight the nation’s wars. Indeed, resistance to the draft and to the Vietnam War itself showed that the conscript citizen army was no longer a go-to solution. Instead, a professional standing army was developed, attracting young people with promises of life-long health care, job training and life skills – a creditable alternative to unemployment and dead-end jobs for many. For immigrants, the ploy was even bolder: the path to citizenship. This new high-tech force required more sophisticated operatives and the number of women recruited grew accordingly – already they were 6.8% of US forces in the First Gulf War; today the numbers are much more impressive: according to the Defense Department, women now make up 20 percent of the Air Force, 19 percent of the Navy, 15 percent of the Army and almost 9 percent of the Marine Corps. This new force has since been complemented by unmanned drones, by mercenaries (e.g. Blackwater) and by houseboys from private firms employed to spare the troops tasks like kitchen duty, the KP of old. The Commander-in-Chief thus has his own private army. Add to that the fact that the Constitution assigns Congress the responsibility for declaring the nation’s wars and raising its armies, and this whole thing looks unconstitutional (Article 1, Section 8).
And the professional military has become an incubator of domestic terrorism. Randy Weaver (Ruby Ridge) and Timothy McVeigh (Oklahoma City) are early examples of military veterans turned terrorist, and the pattern has continued – the US press is startled by how many members of domestic terrorist pro-Trump organizations like the Oath Keepers, and others of the mob that attacked the Capitol on Jan. 6, are ex-military.
The example of a petro-state where a country’s resources are exploited to benefit a small elite can be instructive. Consider how a petro-state operates: the ruling elite enrich themselves by making deals with international companies (who even bring in their own labor force); the military are a necessary expense but, to maintain their lifestyle, the elite really do not need to invest in the nation’s infrastructure, education, training, industrial plant, … .
Something analogous is happening here in the US. The gap between the top economic 1% and the general population has become a chasm. The elite do not send their children to public schools; they do not endure the humiliations of the TSA when travelling by air – they use private jets that fly from small airports where TSA does not operate; they have Cadillac medical plans and access to top specialists in private hospitals; and on and on. [Hmm, an elite walled off as the hoi polloi watch the world fall apart – reminiscent of Jared Diamond’s study Collapse.] Indeed, investment in infrastructure, the environment, education, worker training – all continue to stagnate; the health care situation in the US is an absolute disgrace (e.g. the highest rate of infant mortality in the industrialized world) – and ignominiously for a capitalist society, it gives the lowest return per dollar spent on health care. And then there is student debt, the result of a flagrant refusal to invest in the nation’s young people. (This wasn’t always the case; schools in the state university systems used to be tuition free and the idea was in the air to help all students graduating high school: the 1956 National High School debate topic was “Resolved: That governmental subsidies should be granted according to need to high school graduates who qualify for additional training.”)
The failure to invest in the nation and its citizenry makes little sense – a well-run company will pay dividends but also use its profits for new plant, for R&D, for worker training; nothing like that is true of the way the US treats its citizenry today. All justified by decrying “socialism” and appealing to “American Exceptionalism” – really a sucker punch. It doesn’t have to be like this in a capitalist society: in contrast, look at France, where universal health care kicks in at pregnancy, where education from pre-school to university is free and where worker continuing education is the law (“la loi du 16 juillet 1971”). And the French live better and longer than Americans. In fact, Canada and all the countries of Western Europe have better outcomes than the US, which ranks 46th worldwide in life expectancy, just after Cuba and just before Panama. For details, click HERE.
The Austrian-American economist Joseph Schumpeter’s concept of “creative destruction” has become a catch-phrase of the TV pundits – capitalism will automatically replace old industries and practices with newer, more efficient and more profitable ones; thus Sears by Amazon. But, as Schumpeter himself pointed out, such destruction can also apply to the very institutions that enable capitalism to function so well in the first place. And that is happening to American democracy. The respected, pro-capitalist British weekly The Economist has a research group which analyzes the state of democracy in countries around the world. Ominously, the US no longer rates as a full democracy according to their Democracy Index; rather, it is classified as a “flawed democracy” and is ranked only 25th among democratic nations. (In contrast, Canada, Ireland, New Zealand and the countries of Scandinavia score well.) For details, click HERE.
But the process of denying the bulk of the American population the benefits of Capitalism has been made even simpler. Financial Capitalism is by definition a greedy business and in the last decade of the American Century it strengthened its hand with the emergence of all sorts of clever financial shenanigans and the deregulation of the banking industry. All this further marginalized the bulk of the American population; all this also led to the Casino Capitalism we have today, the Casino Capitalism that led straight to the crash of 2008 where the big banks and corporations were bailed out with taxpayer money – after all, they were “too big to fail.”
Only in a nation divided against itself could the people fail to make common cause in such a dire situation. But that is the beauty of it: by the end of the Reagan presidency, the country was indeed so splintered along lines of class, race, ethnicity, religion, region, politics and ideology that a collective effort was no longer possible. It has only gotten worse since. This is not the “End of History” as Fukuyama saw it, but rather the “End of Democracy in America.”

Power in the US VI: Multiplying the Divisions

In carrying out its reconstruction of the American Way of Life, the Reagan administration was remarkably thorough. Thus the administration’s zeal for deregulation led in the early 1980s to the dismantling of the Federal Home Loan Bank Act of 1932 and the unleashing of Savings and Loan Associations to play with deposits that were federally insured through the Federal Savings and Loan Insurance Corporation (FSLIC). To fan the flames, deposit insurance coverage was increased from $40,000 to $100,000 per account, something greeted by Reagan with “All in all, I think we’ve hit the jackpot” as he signed a catastrophic bill on S&Ls into law (the Garn-St Germain Depository Institutions Act). Not surprisingly, one soon had “zombie thrifts” investing in “junk bonds” and outright fraud. To cite Investopedia: “The S&L crisis is arguably the most catastrophic collapse of the banking industry since the Great Depression.” Indeed in ten years (1986-1995) some 1,043 of the 3,234 S&Ls in the country failed. Big Government was resurrected for a time to pick up the bill, corporate socialism at work.
Reagan’s zeal for deregulation and his antagonism to Science itself fueled his opposition to environmentalists and their pesky issues like acid rain – foreshadowing Donald Trump, he belittled scientific evidence as it suited him. The AIDS crisis was battering the country in the early 1980s but, according to Dr. Anthony Fauci, “he [Reagan] wanted nothing to do with it.”

[Full disclosure: both that Anthony and Publius himself are alumni of Regis High School, class of 1958.]

But, Reagan did attract the support of fundamentalist movements and pastors, currying their favor with opposition to the Theory of Evolution and promotion of teaching Creationism in schools. Such flirtation with fundamentalist religion would never have been countenanced under the WASP Ascendancy – the First Amendment and all that. However, though white and protestant, Reagan was not part of the old WASP world – his mother wasn’t “the right kind of protestant” and his father was an Irish Catholic; for his part, Reagan self-identified as a born-again Christian. Moreover, Reagan began the practice of nominating right-wing Catholics to the Supreme Court which has led to today’s situation where there are no justices raised protestant on the Court at all (Gorsuch is a member of an Episcopal Church now but his background is Catholic including graduation from Georgetown Prep).
Reagan was also instrumental in galvanizing the career of hyper-partisan Georgia congressman Newt Gingrich; Reagan even took Gingrich’s positions as the basic pitch for his re-election campaign – positions developed by Gingrich’s congressional group, the Conservative Opportunity Society. Gingrich is notorious for bringing to Congress the paranoid partisanship that characterizes today’s Republican party – one more source of divisiveness we can trace back to Reagan. Moreover, the broadcast media can now be dominated by merchants of division like Fox News. It wasn’t always that way. There once was the Fairness Doctrine which required TV and radio stations to provide non-partisan coverage of topics of public interest, based on the idea that a broadcast license was a matter of public trust. In 1987, the four-member Federal Communications Commission (FCC) – three Reagan appointees and one Nixon appointee – simply abolished the Fairness Doctrine. Congress attempted to override the FCC decision but the legislation was vetoed by Reagan. The emergence of the late Rush Limbaugh as a purveyor of untruth and conspiracy theories soon followed; Fox News and its ilk came a bit later. As much as anything else, this nullification of the Fairness Doctrine from the top down has created a centrifugal force that has broken the US up into a nation of silos – no wonder “E Pluribus Unum” is no longer the national motto.
Nancy Reagan played a significant role in the Reagan presidency: smart and stylish, a Smith alumna and a Hollywood actress. Ever regal, her first priority was to make the White House a much more glamorous place to live; her spare-no-expense approach did engender criticism but tony publications like House Beautiful and Architectural Digest praised her good taste. In addition to beautifying the White House, Nancy Reagan served as an unofficial advisor to her husband. Her own politics were solidly reactionary – her stepfather Loyal Davis was both a well respected neurosurgeon at Northwestern Medical School in Chicago and someone known for expressing staunch conservative views – socialized medicine being an especial bugaboo. In addition to Nancy, Davis had an influence on son-in-law Ronald Reagan: after their marriage in 1952, Reagan moved steadily to the right and, when the ultra-conservative Senator Barry Goldwater announced his 1964 presidential bid, Ronald Reagan became one of his most fervent campaigners.
Nancy Reagan is also remembered for her Just Say No campaign to reduce drug abuse among the young. She used her position as First Lady to great advantage, garnering media exposure (e.g. starring in a two-hour PBS documentary) and political visibility (e.g. her address to the United Nations). Though well intentioned and well publicized, it is not clear that the campaign had much of a positive effect in the end.
Her decision making was often abetted by astrological consultation. Her interest in astrology went back many years but was reinforced by the attempted assassination of Reagan in March 1981, which prompted her to engage the well-known psychic Joan Quigley. (Keeping it all in the show-business family, Joan and Nancy first met when they both appeared on the Merv Griffin Show back in 1973!) It is all best summed up by a quote from Quigley’s book What Does Joan Say? – My Seven Years as White House Astrologer to Nancy and Ronald Reagan: “Not since the days of the Roman Emperors … has an astrologer played such a significant role in the nation’s affairs of State.” When you add in people’s concern for Ronald Reagan’s mental health in this period, the mind boggles.
As this two-term presidency ended, the new US power structure built around Big Capital and Big Military was firmly in place; Big Industry was relocating manufacturing abroad; Big Government was in retreat; Big Labor was history and the country was transported back to 1929 as the Wealth Gap took off – today, in the US with ever more tax breaks and tax dodges for corporations and the rich, the situation is eerily reminiscent of pre-revolutionary France where the aristocracy refused to pay anything like their fair share of taxes! And the Zeitgeist kept pace with the reallocation of riches. The Venture Capitalist and the Hedge Fund Manager became culture heroes. Michael Douglas earned an Oscar for his performance as “corporate raider” Gordon Gekko in Oliver Stone’s Wall Street, the 1987 film that tried to add a new principle to the Protestant Ethic: “Greed is good.”
The depth and scope of the restructuring of America that the “Reagan Revolution” wrought simply cannot be overstated. It multiplied and deepened the divisions one sees in the country today, what with tax breaks and stark income inequality, deregulation and casino capitalism, indifference to the environment and to public health, outsourcing jobs and union bashing, the end of the Fairness Doctrine and partisanship in the media, demagogues and Congressional partisan warfare, new rigid sentencing laws and soaring incarceration, etc. – all of which got kick-started in this period.
Spoiler Alert: Things only get worse in the three presidential terms that close out the century.
More to come. Story to follow.

Power in the US V: Reaganomics and Class Division

Ronald Wilson Reagan was inaugurated as the 40th President on January 20, 1981, launching the “Reagan Revolution” under the banner of “conservatism.” But Reagan’s conservatism was not the community-oriented traditional philosophy of that name, but rather the new libertarian brand promulgated by Barry Goldwater and William Buckley. Edmund Burke was out and Ayn Rand was in.
This was indeed a period during which America was splintered anew but not by the Newtonian Politics of the previous years. This time the fissures were engineered from the top down. This “revolution” would usher in a transformation in the American Power structure as the US would move from Industrial Capitalism to Financial Capitalism: Big Capital and Big Money would take center stage; much of Big Industry would be deconstructed and production outsourced overseas; Big Labor would be marginalized; Big Government would be ridiculed and “starved” but Big Military and the military-industrial complex would continue to grow; the War on Drugs would become a war on Black America; growing incarceration would make the private prison industry an investment opportunity; the Wealth Gap would become staggering and the billionaire would replace the millionaire as the symbol of success.
This transformation was guided by pro-business, right-wing think tanks funded by plutocrat éminences grises such as the Koch brothers, Joseph Coors, Richard Mellon Scaife, … . Foremost among these extra-academic thought influencers was the Heritage Foundation which also advocated for the Christian right – it had Moral Majority activist Paul Weyrich among its founders in 1973.
The Reagan administration took office with a policy guidebook in hand, the Heritage Foundation’s study Mandate for Leadership, 1981 edition. The study called for an increase in military spending but for cuts in taxes, cuts in social spending and cuts in government regulations: Reagan gave a copy to each of his cabinet members and the Reagan administration delivered – 60% of the some 2000 proposals in the document were acted on in the first year alone.
Indeed, early in his presidency, Reagan oversaw a drastic tax-cut which ultimately created the billionaire class and which turned much of the storied American working class into a resentful underclass, easy prey for demagogues as we have seen with the Trump presidency. With Reagan’s 1981 tax bill, the top income tax rate was slashed from 70% down to 50% and the capital gains tax was cut to 20%, the lowest it had been since Herbert Hoover’s time.
To make up for some of the resulting revenue shortfall, the income tax was applied for the first time to Social Security benefits in 1983 legislation – reverse Robin Hood in action; moreover, it was the celebrated Democrat, Speaker Tip O’Neill, who saw this legislation through the House of Representatives, a betrayal of the New Deal legacy and a significant step in the alienation of the working class from the Democratic Party. Adding insult to injury, O’Neill helped Reagan overcome resistance in Congress to push through the 1986 tax bill which cut that top income tax rate further to 28%.
These various tax machinations were opening salvos in a new round of old-fashioned “class warfare.” Contrast all this self-interested reallocation of the nation’s riches to favor wealthy individuals with the way Elvis Presley’s manager Colonel Parker used to brag that it was his “patriotic duty” to keep his client in the top 90% income tax bracket.
The War on Big Government was launched. Reagan “famously” framed it this way: “The nine most terrifying words in the English language are: I’m from the Government and I’m here to help.” Cutting taxes also served another conservative goal: reduce the effectiveness of government by “starving the beast.” Conservative lawmakers have stayed true to this code; for example, they continue to underfund the IRS to prevent it from examining tax returns properly even though such enforcement would pay for itself many times over – but then maybe their motives are more than purely ideological.
As president, Reagan was quick to attack the source of worker strength – the labor unions. In 1981, he fired 11,359 striking air traffic controllers and dangerously had non-union replacements hired (scabs, in labor parlance). This impacted air travel for months but Reagan won the fight. Looking back, organized labor never recovered; union busting became a sport. Savvy Japanese and European companies would locate their new plants in areas of the country known to be hostile to unions – basically in the old Confederacy where unions were perceived to be too egalitarian and too fair to women and minorities. (Boeing has tried the same thing more recently by moving production to South Carolina to thwart unions, but to their great chagrin they forgot that much of the expertise in aircraft construction is housed in the work force.) The fact that much of the new economic growth was in industries like high tech didn’t help – unions never had a chance to take root there. Moreover, the shift from Industrial Capitalism to Financial Capitalism made it logical to shift production abroad to low wage countries, further weakening the great industrial unions. Big Labor was soon history: the hope of the labor movement became the teachers’ unions (UFT, AFT) and the service employees’ unions (SEIU), humble guilds compared to the once mighty giants but ones still standing simply because teaching and service jobs cannot be outsourced to foreign countries – at least not yet.
The administration’s economic policies, playfully known as “Reaganomics,” were a version of “trickle-down economics”: the theory that making the wealthy and the corporations richer would make money flow down and automatically raise wages – “a rising tide lifts all boats” was the catch phrase. Nothing trickled down though. Workers earned less in 1989 than in 1979. A nefarious pattern was begun whereby increases in worker productivity no longer translated into increases in workers’ salaries. With the Reagan presidency and its trickle-down economics, the Wealth Gap began its dizzying rise. For a recent update on all this, click HERE.
Homelessness accelerated in this period; school lunch programs were cut; food stamps were cut, … . And the prison population exploded: in 1980, the total prison population was 329,000; eight years later, it had virtually doubled, to 627,000. Contributing to all this was the Sentencing Reform Act of 1984, which had the effect of lengthening federal prison sentences. And things have only gotten worse, far worse, since. Indeed, the US has risen to first in the world in the rate of incarceration of its citizens; and for its African-American population, the US rate even surpasses that of Apartheid South Africa. For a recent NY Times piece, click HERE.
All this led to dramatic growth in the for-profit incarceration business, a malevolent industry which goes back to the antebellum South. This increase in incarceration created a new economic opportunity; by way of example, there is the very profitable corporation CoreCivic – founded in 1983 and traded on the NYSE as CXW. To the naïve observer, private prisons would seem unconstitutional. The Israeli Supreme Court so ruled on human rights grounds when the legitimacy of these prisons was challenged in that country; however, the US Supreme Court has not had to rule on the matter. (Some argue, however, that private prisons actually improve conditions for state prisoners!)
On the other hand, following the Mandate for Leadership, this presidency saw the largest peacetime defense buildup in history – high-tech weapons systems, military infrastructure, military pay increases, the “600-ship navy” (which only fell short by 6 in the end), etc. Ever ready to spend on things military, Reagan was an enthusiastic backer of a project straight out of Science Fiction, the Strategic Defense Initiative, SDI or “Star Wars.” (Apparently his attachment to this idea originated with a 1940 spy movie he co-starred in, Murder in the Air, where death-ray “inertia projectors” were used as weapons against airplanes. Reportedly the George Lucas movies also played a role, as did Heritage Foundation proposals.) For its part, SDI itself soaked up lots of dollars but never delivered on its promises. Moreover, under Reagan the Department of Defense was enlisted as a covert actor in foreign policy. On his watch, there were illegal arms sales to Iran and illegal movements of money to the right-wing Contra rebels in Central America – rebels against duly elected governments, no less. This led to several members of the administration being indicted – have no fear, Secretary of Defense Caspar Weinberger and the others were pardoned by George H.W. Bush.
So in addition to the Wealth Gap, two hallmarks of these years were the increase in incarceration plus the growth in military expenditures, all trends which continued into the 21st Century. In his scholarly opus The Collapse of Complex Societies (1988), Joseph Tainter gives a perceptive analysis of the situations that can accompany a civilization’s collapse; in particular, he singles out increased coercive control of the population and increased military spending. Just sayin’.
But it only gets worse. More to come.

Power in the US IV: The Re-Org

From the Civil War through World War II, power in America lay with the WASP Ascendancy, an Eastern leadership class with colonial roots and Ivy League diplomas. After the War, a triumvirate took power: Big Industry, Big Military and Big Government. On the one hand, the triumvirs had an easy time of it as the economies of the US and Western Europe enjoyed the “30 glorious years” of post-war economic prosperity, 30 years which were especially good for organized labor and the emerging “middle class.” But at the same time, the country was constantly rocked by the Newtonian politics of action and reaction to political movements – the civil rights movement, the environmental movement, the anti-War movement, the women’s movement, the Gay Rights movement, … .

After the national calamity of Watergate and the clumsy interregnum of Gerald Ford, outsider Jimmy Carter became president in 1977. For his part, Carter was a proponent of energy conservation and of alternative energy and the Superfund Act was passed during his presidency, but his most remembered effort in this area was the installation of solar-powered water heaters on the White House roof. It was a period where optimism was hard to come by. The social and political divides created by the 50s and 60s persisted; the Cold War and nuclear insecurity continued on; “stagflation” characterized the economy; industry was being humiliated by competition from Europe and Asia; mortgage rates hit 14%; the 1979 Oil Crisis doubled the price of a gallon of gas; personnel at the US Embassy in Tehran were seized as hostages by the Ayatollah Khomeini’s forces. Things were not good; the Triumvirate was floundering.

But in the last 20 years of the 20th century, the power structure of America would go through a “re-org.” Big Military would get bigger yet; Big Industry would stay powerful but would be hollowed out as much of production would be outsourced abroad; Big Government (and its regulations) would be shrunk; Big Capital would emerge as even more important than Big Industry itself; Big Labor would be marginalized. This is when the programs of the Heritage Foundation and other well-financed right-wing think-tanks gained traction, when Ayn Rand became required reading, when the ideological propaganda of William Buckley went mainstream: libertarian class-warfare under the reassuring label of “conservatism.” This was the shift from the Age of Industrial Capitalism to the Age of Financial Capitalism when the deal became more important than the product; this was the passage from the America of the New Deal and the Great Society to the dystopian world of libertarian ideology and government by think-tank; this was the transformation that left Congress dysfunctional and concentrated the power of government in the president and nine unelected justices.

This shift fissured the American populace in a new way. Rather than through a process of action/reaction, division was rammed through from the top down by means of a powerful mechanism – income inequality and its Wealth Gap. By 1980 the long game of right-wing plutocrats Joseph Coors, Richard Mellon Scaife, the Koch brothers et al. would show results, big time: with the Triumvirate flailing, the opportunity for a quiet sort of coup d’état would not be missed. In 1980, economic inequality in the United States was about average for advanced industrialized nations – with France and Italy actually being among the worst. Things have since been reversed and the US now exhibits the greatest gap (according to the World Bank’s GINI index). Moreover, income inequality interacted with other forces of division splitting the country ever more insidiously; thus the Wealth Gap added to the racial divide and led to the situation where today black family wealth has fallen infinitely far behind white family wealth; it exacerbated the urban/rural divide alienating the heartland townsfolk from the Chardonnay-sipping coastal elites, who ideologically could be their allies; it stoked the resentment of downwardly mobile white Americans driving them to ugly identity politics; it created a monstrous health-care gap saddling the US with the highest rate of death in childbirth and the worst medical outcome per dollar spent in the industrialized world. Etc. Etc.

In the early 1970s, despite his defeat at the hands of Lyndon Johnson in 1964, Barry Goldwater was still the right wing’s man. But Goldwater was too honest in his views and too consistent in his libertarian thinking to succeed as the right-wing presidential standard bearer – he supported abortion rights, civil rights and “the right of our people to live in a clean and pollution-free environment.” And then too he opposed allying the Republican Party with religious groups, an alliance fostered specifically by Paul Weyrich, one of the founders of the Heritage Foundation and cofounder with Jerry Falwell of the Moral Majority movement.

But for the 1980 presidential election, the right wing had their new standard bearer ready: Ronald Wilson Reagan, the perfect front man – promising people a Shining City on a Hill, setting the country on the road to dystopia instead. Indeed, the Reagan presidency marks a dramatic turning point in American history. Here was someone from a modest background whose presidency led to a new class hierarchy, a development that has hurtled the nation back into a farcical rerun of the Gilded Age.

Reagan began his career as a movie actor in the late 1930s, interrupted by WW II service at the Army’s First Motion Picture Unit in Culver City CA. In the post-War period, he became an anti-communist informant for the FBI and a star witness before the House Un-American Activities Committee (HUAC). With a push from his powerful agent Lew Wasserman and a nomination by progressive Gene Kelly, he became president of the Screen Actors Guild (SAG) in 1947 and would be elected 6 times in all. In 1952, during his fifth term, Reagan arranged a “blanket waiver” which exempted Wasserman’s company MCA from the SAG rules that prohibited a talent agency from also undertaking film or TV production. Investigations did follow: Reagan’s tax returns were subpoenaed for evidence of kickbacks and payoffs, and MCA was indicted for federal anti-trust violations; in the end, despite clear evidence that Reagan had benefited financially after his concessions to MCA, the investigation was inconclusive. In any case, by the end of the decade, MCA was not only the biggest seller of talent in Hollywood, it was the biggest buyer as well. For more, click HERE.

Indeed, from there Reagan became the producer of the MCA show General Electric Theater (1953-62), also serving as its on-screen host for 9 years of its impressive 10-year run; that stint plus two more years as host of the TV show Death Valley Days (1964-1966) made him a nationally known television personality – a harbinger of Donald Trump. Add in his work as General Electric’s corporate spokesperson and all this made him a wealthy man. BTW, Reagan too had a precursor who used television as a launching pad to the presidency: in 1949, Dwight D. Eisenhower narrated a very successful documentary TV series based on his memoirs, Crusade in Europe (which also made him a wealthy man thanks to a strange U.S. Treasury ruling that his earnings did not count as income but rather as capital gains, a windfall of $400,000 in 1949 dollars for Ike – the top income tax rate then was 91% while the capital gains tax was a flat 25%).

A New Deal Democrat in his younger days, Reagan later veered to the right. He supported Truman in 1948, Eisenhower in 1952 and Goldwater in 1964. He ran a tactically brilliant campaign for Governor of California in 1966 to handily defeat the incumbent Pat Brown – the popular, forward-looking New Deal Democrat who had beaten Richard Nixon in 1962. Reagan ran on a simple platform: “to send the welfare bums back to work,” a racially charged dog-whistle, and “to clean up the mess at Berkeley,” a politically charged dog-whistle as UC Berkeley was a center of the counter-culture and anti-war agitation – [Full disclosure: Publius himself was among the agitators at the time.]

Once elected, he continued his war on the University of California and on education at all levels. In 1966, California had one of the top education systems in the nation; it suffered badly under Reagan – something it still hasn’t recovered from. For a detailed account, click HERE.

Reagan sought the Republican nomination for the presidency in 1976 but was outplayed by the unelected incumbent Gerald Ford. At which point, he retired to his Santa Barbara ranch to pose for photo-ops in cowboy gear and to wait for his moment, a moment that would come four years later. He dominated the Republican primaries in 1980 and sailed through to the nomination. Though well behind in the polls at the outset, his impressive performance in the one debate with the incumbent Jimmy Carter turned the polls around and, in the end, he cruised to victory.

Reagan’s administration took office with a policy guidebook at hand, the Heritage Foundation’s study Mandate for Leadership, 1981 edition. The study called for cuts in taxes, cuts in social spending and cuts in government regulations but for an increase in military spending – all since cornerstones of the ideology of the Republican party.

More to come. Story to follow.

Power in the US III: Newtonian Politics

The WASP Ascendancy was born of the Civil War and lasted until WW II. As the country soldiered on and emerged victorious on VE Day and VJ Day, it was more or less worthy of the name United States. But the new US Power Elite – the troika of Big Government, Big Industry and Big Military – soon ran into trouble keeping the country together as patterns of division emerged, patterns that still haunt the nation. How did we get here? A mystery.
Since WWII, the US has seen political backlash follow hard on any attempt at dealing with change – a Newtonian kind of Political Physics: Newton’s Third Law, “for every action in nature there is an equal and opposite reaction,” becomes “for every movement, a counter movement.” This kind of dialectical process erupted repeatedly in the period from the 1950s through the 1970s, each time further dividing the country.
For example, the success of the Labor Movement and New Deal programs like Social Security generated a carefully orchestrated reaction financed by rich backers from Big Industry. Among these were the Koch brothers, Charles and David, who started the Fred and Mary Koch Foundation in 1953 as a funding source for right-wing causes – a family tradition as their father Fred Koch was a founder of the John Birch Society. This became part of the larger post-War movement known as “conservatism.” But this was linguistic sleight of hand; the brand of conservatism promulgated by pundits like William Buckley during this period was a far cry from that of Edmund Burke or that of Senator Robert Taft Sr., the leading conservative of the post-War years. The more theatrical wing of this movement is known as libertarianism – staffed by debaters so artful that they actually manage to make the accusation of “Class Warfare” a weapon against their opponents! Divisiveness was their hallmark; add right-wing talk radio and today’s Fox News and you have a powerful and divisive propaganda machine still at work. Pathetically, as part of his effort to sell his new book, Charles Koch is now expressing regret for the divisions in America caused by his decades-long “partisanship” (WSJ Nov 13, 2020).
The landmark Supreme Court decision Brown v. Board of Education (1954) was a signal victory for the Civil Rights Movement; the reaction was fierce, epitomized by the national disgrace of howling mobs trying to prevent children from going to integrated schools, most infamously in Little Rock, Arkansas. For a reminder, click HERE. The Southern Christian Leadership Conference (SCLC) led by the Rev. Martin Luther King Jr. was a non-violent movement for Civil Rights that generated violent reaction from the outset, as did the activism of organizations like SNCC and CORE. King’s 1964 Nobel Peace Prize “outraged” groups like the Ku Klux Klan and the White Citizens’ Council. And then the Black Panthers unsettled many people to the point that there was sudden support for gun control. Overt haters like George Wallace made racism part of national political campaigns, crying “segregation now, segregation tomorrow, segregation forever.” But the Civil Rights Movement led to the Civil Rights Act (1964) and the Voting Rights Act (1965); tragically here the backlash was stark and violent, with the assassination of King following quickly. Politically, the reaction was especially blatant in the states of the former Confederacy, states that had voted solidly Democratic since Reconstruction: they all flipped and have been reliably “red” states ever since.
An irony here is that it was the Civil Rights movement and the end to segregation in the South that created the prosperous Sunbelt and made world cities of Atlanta, Dallas and Houston: before then corporations like IBM shied away from the segregated South because these companies themselves had an integrated workforce; and the area was not attractive to foreign firms and investors for similar reasons. Atlanta in particular owes a debt to its native son MLK. The baseball team from 1902 to 1965 was unapologetically named the Atlanta Crackers, a minor league franchise; it was only in 1966 that the major league Atlanta Braves moved in. Indeed, the Atlanta subway system MARTA would not have been feasible if Black passengers had to use separate entrances and turnstiles and go to the last car or whatever – especially when over half the population is African-American. Add the honor of hosting the Olympics in 1996 plus the prestige of having the busiest passenger airport in the world since 1998.
There was a period of movement toward national unity during the Kennedy presidency as the nation rallied behind JFK for Cold War dramas like the Cuban Missile Crisis and as people were energized by his calls for American greatness such as “achieving the goal, before this decade is out, of landing a man on the Moon.” New York even accepted the news that California was now the most populous state in the union graciously. But the escalation of the Vietnam War under Lyndon Johnson dramatically split the country, launched as it was by the charade known as the Gulf of Tonkin Resolution (which set the bar low for manufactured pretexts for going to war). Vietnam intensified the generation gap as draft calls soared and body-bags returned from Asia. The Peace Movement and resistance to the war engaged national figures like Senators Robert Kennedy and Eugene McCarthy and led to the national disgrace of the 1968 Democratic Convention in Chicago where machine-mayor Richard Daley unleashed the police and National Guard on anti-war protestors. Protestors were met with yet more violence at Kent State University on May 4, 1970 when four students were shot dead by National Guard troops – something absolutely incomprehensible. For that iconic tragic photo, click HERE .
The reaction against the War in Vietnam also led to the end of the conscript army and the creation of a professional army (which looks unconstitutional to this observer as Article I, Section 8, Clause 12 stipulates that “The Congress shall have Power … To raise and support Armies, but no Appropriation of Money to that Use shall be for a longer Term than two Years;”). In many ways soldiers today are like paid mercenaries: service in the military is most often an economic choice made more attractive by the perennial GI Bill, VA medical care and other post-service perquisites. Military service also offers a path to citizenship for immigrants which is a significant pay-off, indeed. If nothing else, this move to a professional military has deprived the US of one of a country’s most important tools for blending its citizens and building national identity, shared national service. Now, the Pentagon and its commander-in-chief, like warlords, have an army of their own, a population apart from the rest of us – another tear in the nation’s fabric. A sociological side-effect of the move to a hi-tech professional army (and one with more and more women at that) has been to undermine one of the rocks on which male supremacy is built: the need for masses of young men to fight the nation’s inevitable wars (a point made by French feminist writer Virginie Despentes).
The Women’s Movement sparked a powerful reaction from the very start. While participants in this movement had to endure the barbs of self-congratulatory “male chauvinist pigs” like Norman Mailer and macho construction workers, there was also strong reaction on the distaff side – for example, the movement headed by formidable writer and activist Phyllis Schlafly; her Eagle Forum pressure group was even able to defeat the Equal Rights Amendment just as it seemed on the way to ratification. Reaction to the Women’s Movement was also intensified by fundamental social changes that were taking place such as women’s access to higher education, the availability of inexpensive and effective birth control, smaller family size, the higher divorce rate and the massive entry of women into the workforce. But rather than helping deal with these changes with national child care and pre-K with nutritious hot lunches (as in France with its renowned École Maternelle system), Big Government in the US has left people to fend for themselves.
The environmental movement in the US was launched with Rachel Carson’s alarming book Silent Spring (1962); this was also the time when climate scientists began warning about CO2 emissions and global warming. The response by pesticide companies and the fossil fuel industry was deafening; however, some legislation was passed and the Environmental Protection Agency (EPA) was set up, under Richard Nixon no less. But the reaction to environmentalism gathered force leading to the creation of a new lobby, “climate change deniers.” Indeed climate change denial has become a core position of the Republican Party making them a true danger not just to the US but to the planet itself. Venality triumphant – to quote Paul Krugman’s column in the NY Times (Dec 12, 2019): “But why have Republicans become the party of climate doom? Money is an important part of the answer: In the current cycle Republicans have received 97 percent of political contributions from the coal industry, 88 percent from oil and gas. And this doesn’t even count the wing nut welfare offered by institutions supported by the Koch brothers and other fossil-fuel moguls.”
The Gay Rights Movement broke out into the open with the Stonewall Inn uprising against the police and the mob (1969). Newtonian politics produced a homophobic reaction immediately; opposition to the movement unified the religious and non-religious right and spawned a crusade: there were campaigns such as Save Our Children, a media savvy organization led by former Miss Oklahoma, Anita Bryant; not to be outdone, Pastor Jerry Falwell’s movement the Moral Majority made homophobia central to its platform from the outset.
In the summer of 1969, Woodstock electrified half the population and horrified the other half just as the “Swinging 60s” was giving way to the “Me-Generation” of the 1970s. From divided, the population became atomized. The War in Indo-China continued, entire countries were bombed back to the stone-age. Recreational drug use was rising and Harvard research scientist turned LSD guru, Timothy Leary was exhorting Americans to “Turn On, Tune In, Drop Out.” In reaction, under President Richard Nixon, the Controlled Substances Act of 1970 intensified the “War on Drugs” and turned a good swath of the population into serious lawbreakers. A milestone in political stupidity was reached when Nixon’s re-election campaign orchestrated the Watergate break-in as he was coasting to a huge victory in the 1972 presidential election over anti-war candidate George McGovern. The scandal further polarized the country and the decade wore on miserably: disco music and leisure wear, oil embargos and endless lines at gas stations, 14% mortgage rates, the fall of the Shah and hostage taking in Iran. The post-WASP leadership was clearly failing; to quote Lincoln paraphrasing St. Mark, “a house divided against itself cannot stand.”
Moreover, during this time, Capitalism in America was itself in a period of transition. The divided country was struggling economically. By the mid 1970s powerful companies from Japan, Germany et al. were competing successfully in critical sectors – automobiles, aviation, ship-building, appliances, cameras, machine-tools, electronics, … . But American capitalism had resources and was itself ready to react. To start there was the Almighty Dollar which made the banks and financial markets of New York so dominant. And finance was becoming more mathematical and more bold, armed with options and other financial derivatives such as futures, forwards, swaps, mortgage-backed securities and other trendy securitizations. The deal was becoming more important than the factory or the product.
The Age of Industrial Capitalism was giving way to the Age of Financial Capitalism. Big Industry was yielding to Big Capital. And the 1980s and 1990s would see the country fracture along new fault lines. To the Newtonian pattern of action/reaction would be added a new process of division from the top down. To Gallicize a Bette Davis line: Fasten your seatbelts; it’s going to be a bumpy fin de siècle.

Power in the US II: The “Après-Guerre”

The US emerged from WWII more powerful and more united than ever. The baby-boom was about to begin and a new middle class would be built by the combination of strong labor unions, the GI Bill, access to higher education and 30 years of economic prosperity all to the tune of a top income tax rate of 90%. It was a period when the state invested in young people – e.g. undergraduate tuition was free at public institutions like the elite engineering schools at UC-Berkeley and City College of New York.

From the Civil War through WW II, the leadership of the country was in the hands of the fabled WASP establishment, what with their colonial era roots and Ivy League educations. After WW II, WASPs were still in positions of leadership; for example, Henry Cabot Lodge Jr. (wow – both a Cabot and a Lodge) and Robert A. Taft (wow – son of President Taft) were powerful senators. But the WASP Ascendancy was over. In his work The Power Elite (1956), C. Wright Mills singles out, in so many words, Big Industry, Big Government and Big Military as the new US power structure. Big Labor too had some influence in this period. And political and economic leadership also had to be shared with emerging powerful Western states like California and Texas and with new groups such as Catholics and Jews.

With the Bretton-Woods monetary accords (1944) and the founding of the United Nations (1945), the US was assuming a new role in the world. The Almighty Dollar had become the world’s reserve currency and internationalism had replaced the non-interventionism of the WASP leadership (e.g. it was Senator Henry Cabot Lodge Sr. who scuttled plans for the US to join the League of Nations). Spurred by the Cold War, Big Government was made bigger with the creation of the CIA out of the wartime OSS (Office of Strategic Services) (1947) and by the creation of the NSA out of the wartime SIS (Signal Intelligence Service) (1952). The Marshall plan and NATO brought Western Europe under the American defense umbrella; Germany was re-armed and added to NATO, shocking many and especially the Soviets. The “containment doctrine” girded Eurasia with new military blocs (CENTO [Middle East], SEATO [Southeast Asia], ANZUS [the Antipodes]) and mutual defense treaties (Taiwan, South Korea, Japan). Intervention abroad was not new to the US foreign policy playbook. But now with the Cold War, this tactic was expanded way beyond the mare nostrum that was the Caribbean – there soon came dramatic regime change in Iran, in the Congo and in Indonesia, replacing left leaning leaders with amenable dictators (Pahlavi, Mobutu, Suharto).

Big Industry, Big Government and Big Military together created a new force in the country, the Military Industrial Complex – the one Eisenhower warned about in his farewell address. This new nexus of power and influence came complete with its revolving doors involving congressmen, generals, lobbyists and all kinds of government officials; add to that cost overruns and massive subsidies of underperforming contractors. Worse the benefits of capitalist competition were further diluted as the defense industry was subject to more and more mergers, making it a most non-competitive oligopoly – and it continues, the most recent merger being defense giants Raytheon and United Technologies (2020); somehow anti-trust does not apply to this world. The military budget has grown incessantly, in the process starving other departments (most dangerously the State department). This has led to the military’s being assigned all sorts of projects it is unequipped to carry out – “mission creep” and all that. Paradoxically, despite its dismal performance in endless wars, the military enjoys a position of prestige in the country and is even forced to take money for projects it itself does not want! For a sobering update on all this, there is Georgetown Professor Rosa Brooks’ work How Everything Became War and the Military Became Everything: Tales from the Pentagon (2016).

Another development that followed in the wake of the dazzling success of the wartime Manhattan Project was an extraordinary bonding of science and the Pentagon. Big Military became the major funder of university science research some of which has spilled over into the civilian realm such as the Internet, GPS and Artificial Intelligence. Although these achievements are impressive, they have come at the cost of redirecting research dollars and personnel to military ends as opposed to projects that would have addressed civilian needs much more responsively (such as the environment). In any case, this military domination of research has led us to today’s Surveillance State and created the new economic paradigm Surveillance Capitalism.

On the home front, the FBI was unleashed to combat anything and anybody whom the untouchable J. Edgar Hoover deemed “un-American” – people like Martin Luther King Jr. but not Mafia dons. Blacklisting became a spectator sport bringing dangerous grandstanders like Joseph McCarthy and Roy Cohn onto the national stage. Big Government was threatening civil liberties as never before. This was accompanied by the growing influence of the religious right as the US positioned itself as the bulwark against atheistic communism. Big Religion was represented faithfully by evangelists like Jerry Falwell on the fundamentalist side and by eminences like Francis Cardinal Spellman of New York on the high church side. At the behest of Big Religion, the phrase “under God” was inserted into the Pledge of Allegiance by an act of Congress (1954) and the nation’s longtime motto “E Pluribus Unum” was quietly replaced with “In God We Trust,” again by an act of Congress (1956).

BTW : Witch hunting was not new to Cardinal Spellman: he was behind the campaign to prevent the City College of New York from hiring Bertrand Russell as a professor in 1940 on the grounds that Russell was “morally unfit” – even though Russell had already accepted the offer of the position. Two of the world’s greatest Logicians, Emil Post and Alfred Tarski, were in the Math Department at CCNY in 1940. Just think, had Russell come there, New York could have become the Mecca of Mathematical Logic in the US; instead Russell went back to Cambridge, Tarski went to Berkeley and the rest is history.

The political cause of Big Industry was taken up by men born wealthy like the Koch brothers and Joseph Coors. The achievements of Roosevelt’s New Deal and later Johnson’s Great Society were especially anathema to them and incompatible with their dystopian vision of a society based on cutthroat and unfair competition, simply put libertarianism. Challenges by environmentalists to Big Energy’s ability to plunder the land (and leave the cleanup bill to the rest of us) had to be squashed. University professors being too liberal and too intellectually honest, these wealthy backers of Barry Goldwater (and later Ronald Reagan) created right-wing think tanks (the Charles Koch Institute, the Heritage Foundation, the Cato Institute, etc.). These shops serve as intellectual propaganda mills; among their achievements is the authorship of more than 90% of the papers skeptical on climate change. There was even a Nobel Prize in Economics for libertarian economist and propagandist James Buchanan (1986). This mix of wealth, Christianity, libertarianism and White superiority was loudly trumpeted by pundits like William Buckley with magazines, books and TV shows, all cleverly packaged as “conservatism.” Importantly, they provided unstinting support for right-wing candidates at all levels of government and for all sorts of court appointments. The strategy was brilliantly executed; for the play-by-play, see Duke Professor Nancy MacLean’s Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America (2017).

Division was not unknown in America during the WASP ascendancy: capital vs. labor, nativist vs. immigrant, Black vs. White, rural vs. urban, men vs. women. But in the après-guerre, there was peace of a kind. The unions were strong and industry was churning, the sons and grandsons of immigrants had fought valiantly in two world wars, the military and professional baseball had been integrated, price supports and electrification buoyed the countryside, the 19th Amendment was the law of the land and Rosie the Riveter a national heroine.

But today (2020), 75 years after the war that left the country in a relatively unified state, the US is a nation dramatically divided by race, religion, wealth, political ideology; a nation at loggerheads over civil rights, minority rights, women’s rights, LGBTQ rights, immigrants’ rights; a nation incapable of coherent action on the environment, on health care; a nation immobilized by the COVID-19 pandemic; a hi-tech nation with scientific philistines in positions of power and with anti-vaxxers in a position to prevent vaccines from having their effect throughout the population. The new Power Elite failed egregiously there where the WASPs had managed to muddle through. How did this happen? More to come. Affaire à suivre.

Power in the US I: The WASP Ascendancy

My late friend Alex was a Holocaust survivor who landed in New York as a refugee after World War II, a young man who then went on to create a new life for himself here. A teenager when the War broke out, Alex shared the story of how he was shipped across Europe from camp to camp with Joanna Wiszniewicz of the Polish-Jewish Institute in Warsaw. This story has been published in book form as And Yet I Still Have Dreams: A Story of a Certain Loneliness. Not an easy read, but one that reveals the best in people along with the worst.
Always grateful to have become an American, Alex was a keen student of US social structure and politics; and with Old World wisdom, he would say “If you have to have a ruling class, the WASPs are the best – they are honest and they are not intellectual.”
The term WASP is an acronym for White Anglo-Saxon Protestant and designates an upper-class, white American Protestant usually of English descent – especially those with roots in colonial New England and, better yet, those who went to an Ivy League school. The group also includes the members of prominent New York families whose roots go back to the Dutch period and who blended into the WASP social world, such as the Roosevelts and van Cortlandts; these are often referred to as “patroons,” the Dutch term for settlers who were granted 16 miles of land along the Hudson River as part of their civilizing mission.
The most storied representatives of the WASP world were the Boston Brahmin families: the Lowells, the Cabots, the Lodges, et al. So clubby were they that they inspired this doggerel:
Here’s to dear old Boston, Home of the Bean and the Cod.
Where the Lowells speak only to Cabots and the Cabots speak only to God
From the very beginnings of the new nation, armed with the Protestant Ethic and those Ivy League diplomas, WASP scions have occupied the presidency (starting with John Adams, Harvard 1755) and held seats on the Supreme Court (starting with first Chief Justice John Jay, Columbia 1764) – Adams’ family arrived in Massachusetts around 1638 while Jay’s mother’s family were the van Cortlandts and his paternal grandfather was a French Huguenot who came to NY in the late 1600s.
The US always being exceptional, it can be argued that the Marxist sounding term “ruling class” doesn’t quite fit the American experience. Wikipedia has it thus: “The ruling class is the social class of a given society that decides upon and sets that society’s political agenda.” So, for the sake of discussion and simplicity, let us stay with it. There is a point to having a ruling class. The ship of state is steady and transitions of power peaceful (something that is of concern lately in the US). Family ties, concern for future generations and responsibility for the nation in their charge make a ruling class a dynasty which directs the country with a long-term view as opposed to the short-term logic of the stock market and with a nationalistic view as opposed to that of striving politicians who are beholden to self-interested donors and lobbyists.
Until not very long ago, the WASPs were visible everywhere – in business, government and academe; their leadership role in American life was universally acknowledged. Today they have almost disappeared from view; indeed, for some years now, not a single person even raised Protestant has been on the Supreme Court, a court where the WASPs were accustomed to being the majority. How did we get here? Mystère.
Indeed, the WASPs were the great stewards of American capitalism from the Industrial Revolution on. At the outset, they had to share power with the rich, well-educated, planter class of the South. With the Civil War, however, the WASPs became the uncontested ruling class of the United States. The nation’s history was rewritten and Thanksgiving became the holiday to commemorate the arrival of Europeans on these shores as Plymouth replaced Jamestown in the official story.
The WASP leadership would see the country through Reconstruction, the Gilded Age, the Spanish-American War, the Progressive Era, WW I, the Roaring Twenties, the Great Depression and WW II.
After the Civil War the Republicans in Congress moved quickly to pass the 13th Amendment (1865) which ended slavery as such in the US and then went on to pass the 14th (citizenship, due process and equal protection under the law, 1868) and the 15th (the right to vote, 1870).
Like the course of true love, that of the WASP Ascendancy was not always smooth. Except for those three amendments, their post-Civil War record on race relations proved disastrous. To start, the scope of the 14th amendment was rolled back soon after ratification by the Supreme Court in the Slaughter-House Cases of 1873, making the amendment’s passage a “vain and idle enactment” in the words of dissenting justice Stephen J. Field.
Despite the WASP reputation for honesty and despite the fact that the first Grants arrived in Massachusetts in 1630, the two terms of West Point graduate U.S. Grant (1869-1877) were plagued by corruption. Still, towards the very end of Grant’s tenure, Congress did pass the Civil Rights Act of 1875 which outlawed racial segregation in public places, thus staying true to the agenda of the abolitionist movement.
Then came 1876, a year Gore Vidal described as “probably the low point in our republic’s history.” The Republican candidate for president was Rutherford B. Hayes (Harvard Law) whose forebears missed the Mayflower and didn’t reach Connecticut until 1625. The Republicans engineered a deal with the Southern states to secure the presidency although Hayes had lost the popular vote to Democratic candidate Samuel Tilden. The deal gave Hayes a majority in the Electoral College but ended Reconstruction – a true Faustian Bargain. That plus the Supreme Court’s overturning that same Civil Rights Act in a basket of cases called the Civil Rights Cases (1883) followed by Plessy v. Ferguson (1896) served to keep the former slave states safe for white supremacy.
A break in the Republican suite of presidents occurred with the election of Grover Cleveland in 1884. Though a Democrat, his WASP credentials were solid – the Cleveland family went back to 1635 in Massachusetts. Cleveland’s two terms were, in turn, split by the election of Ohioan Benjamin Harrison whose ancestors included a US president (grandfather) and a signer of the Declaration of Independence (great-grandfather).
Continental imperialism continued with the American Indian Wars (aka American Frontier Wars) which ground on even after the closing of the Frontier (1890 according to the Census Bureau and Frederick Jackson Turner). Overseas imperialism began under President William McKinley with the annexation of Hawaii and then the Spanish-American War which led to the annexation of the Philippine Islands and Puerto Rico. As for family tree, on both sides his forebears (Scots-Irish and English) went back to the early 1700s in Pennsylvania. The McKinley name has since lost some of its luster – until recently the tallest mountain in North America was named Mt McKinley; now it is officially known by its Native American name Denali, no “Mt” required.
The excesses of the Gilded Age and the robber barons were reined in some by the Progressive Era policies enacted during the presidency of Teddy Roosevelt, a Harvard man (1880) with impeccable patroon credentials. Roosevelt was followed by William H. Taft (Yale 1878) whose family tree went back to the 1660s in Massachusetts.
Following Woodrow Wilson (a Democrat of vintage Scots-Irish heritage, Princeton 1879), Warren Harding a Republican from Ohio became the next president – no surprise there: in the sequence from Grant to Harding, 7 of the 10 elected presidents were from the great state of Ohio! The Harding administration was another low point, however, for the WASP Ascendancy and their reputation for honest living. Despite his distinguished forebears (on his mother’s side his family tree went back to the eminent patroon Van Kirk family), Harding’s administration begat the Teapot Dome Scandal (oil leases on Federal lands in California and Wyoming); and there was scandal in his personal life: he had an illegitimate daughter with Nan Britton, his long time paramour. To make matters worse, Britton went on to write The President’s Daughter, a pioneering “Kiss and Tell” publication and one juicy even by today’s standards. There’s more: Harding died in most mysterious circumstances in a San Francisco hotel room – arguably done in by his long-suffering wife, the stuff of a resilient conspiracy theory.
Moral order was restored with the accession to the presidency of Vice-President Calvin Coolidge in 1923, an Amherst College graduate whose Puritan ancestry traced back to 1630 in Massachusetts. To boot, Coolidge was an active member of the Congregational Church while mayor of Northampton, Massachusetts – the very church of the fiery preacher Jonathan Edwards whose sermon “Sinners in the Hands of an Angry God” electrified the New England of the mid 1700s.
There was another brief break with the Ivy League tradition with the election of Herbert Hoover (Stanford 1895). Hoover was from Iowa originally, making him the only president till then from west of the Mississippi. The Protestant dimension still loomed large, however, as Hoover pummeled his Irish-Catholic opponent, NY governor Al Smith. Hoover even carried 5 states from the Solid South, a bloc which had voted Democratic unerringly since the end of Reconstruction.
With the Great Depression, the public turned once again to the WASP ruling class and elected Franklin D. Roosevelt (Harvard 1903) to the office of President. So central still to the nation’s psyche was the WASP leadership that George Santayana’s novel The Last Puritan was a bestseller in 1936, second in sales only to Gone With The Wind. FDR’s job was to make America safe for capitalism. With the New Deal and pro-labor policies, his administration forestalled threats from the far Left and, on the other hand, subsidized industry with huge public works projects – many of them iconic such as the Golden Gate Bridge, the Hoover Dam and the Tennessee Valley Authority.
In the post-war world, the WASP political class still stood tall. The Cabots, the Lodges, the Harrimans, the Rockefellers, the Tafts were still politically active and visible. But the dynasty was coming to an end. Indeed, with the Depression and WW II, power had to be shared with others. The New Deal empowered labor unions – a development long resisted by the establishment. By now, the colorful robber barons were long gone and faceless “organization men” were at the helm of the huge war-fed corporations that had made the US “the arsenal of democracy.” The War and the new Cold War created the Military-Industrial Complex, a fearsome new kind of alliance of capitalism and the military.
At this point in time, the legacy of the WASP ascendancy wasn’t looking so bad – a most glaring exception being the situation of Black America, a population callously abandoned by the leadership since the end of Reconstruction. And this villainy continued after World War II as the Great Migration of 5 million black Americans to the Northern cities gained momentum, spurred in part by the invention of the cotton harvesting machine in 1944 – Nicholas Lemann, The Promised Land (1991). The response was high-rise ghettos as federal funds were directed to ensure the Levittowns of the post-War suburbs would be segregated. The integration of the military and of major league baseball during the Truman administration were the only bright spots.
A sure sign that the times were a-changin’ came in 1952 when the young Irish-Catholic JFK defeated the Brahmin stalwart Henry Cabot Lodge Jr. in a race for a Senate seat in Massachusetts: Lodge was the sitting senator and this was a stunning defeat for the WASP establishment.
A dynastic ruling class provides for continuity and national unity; it is patriotic in the true sense of the term and puts the nation’s interests first because it identifies the nation’s interests with its own. With the WASP ascendancy ending in America, what would follow? Where would the new leadership of the new colossus take us? Today, we can be nostalgic about the WASPs’ role as a ruling class as the country is in the process of tearing itself apart. More to come.

AI VIII: Software, Hardware, Data and China

Work on Artificial Intelligence systems continues apace. But one must ask whether this juggernaut could run into obstacles that would seriously slow it down.
Jaron Lanier, a Virtual Reality pioneer and now a researcher at Microsoft, points out that all this acceleration in AI so far has been based on hardware (solid state physics/engineering) and algorithms (mathematics/engineering), progress made possible largely by the exponential growth in computer power that the world has known since the 1970s, epitomized by Moore’s Law: the computing power of a chip doubles every 18 months. What has not “accelerated” is the art and science of software development itself.
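As a back-of-the-envelope sketch (assuming the 18-month doubling period cited above, and nothing more), the compounding behind that exponential growth is easy to work out:

```python
# Back-of-the-envelope Moore's Law arithmetic: if chip performance
# doubles every 18 months, the growth factor over a span of years
# is 2 raised to (months elapsed / 18).

def moores_law_factor(years, doubling_months=18):
    """Growth factor after `years`, with one doubling per `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# Fifty years of doubling every 18 months is 2^(600/18), i.e. 2^33.3 --
# roughly a ten-billion-fold increase in computing power.
print(f"{moores_law_factor(50):.2e}")
```

Hence the dizzying distance between the machines of the early 1970s and those of today.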
There are integrated development environments (IDEs) for a programmer working in the workhorse computer languages C++ and Java, complete with source code editors, debuggers, profilers, compilers, etc. But even now, debugging is in no way automated as there are no algorithms as such for debugging – this goes back to the issue of Combinatorial Explosion and those Gödel-like limits on what can be done; for example, the only way for an algorithm to reliably predict how long a program will run is to run the program, alas.
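A concrete (if informal) illustration of that “just run it” limitation, using the famous Collatz iteration purely as an example: despite decades of effort, no one knows how to predict how many steps a given input will take short of actually executing the loop.

```python
# The Collatz iteration: a three-line loop whose running time
# no known analysis can predict without simply executing it.

def collatz_steps(n):
    """Count iterations until n reaches 1 under the Collatz map."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # a tiny input, yet 111 steps
```

If even a toy loop like this defeats static analysis, it is no surprise that debugging real programs remains an art.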
But maybe down the road the AI Bots or the enhanced humans will be able to create the software science of the future that will rationalize the development of the phenomenal new software projects that will be undertaken. But for now, programming is more art than science: in fact, neuroscience research reveals that it is the language area of the brain that is activated while programming and not the mathematical, logical part. For one article, click HERE .
BTW, development at legendary Silicon Valley research center Xerox PARC pioneered features like WYSIWYG (“what you see is what you get”) editors, the GUI (graphical user interface) with windows and icons, the mouse, even the Ethernet (the backbone of computer networks). Commercially, Xerox fielded the Alto and then the Dandelion, a machine designed for AI work in the programming language LISP which provided a dazzling IDE. In the meantime there were the savvy Steve Jobs and his Apple engineers who visited PARC in the 1970s in conjunction with Xerox’ involvement in Apple’s IPO. The Apple people then applied what they learned to build the Lisa and the Macintosh – thinking mass market rather than an expensive, specialized AI workstation aimed at Xerox’ corporate clients.
But, though Moore’s Law is slowing down now if not already over, AI researchers are a resourceful and motivated group: instead of using general purpose processors from companies like Intel, they have simply turned to more specialized hardware.
As a case in point, Google has developed its own Tensor Processing Unit (TPU) as an application-specific chip for neural network machine learning (2015). In fact, much of the leap forward in deep learning these last couple of years was made possible by employing Nvidia’s V100 GPU (graphics processing unit), which was originally designed for video games. (Looking ahead, Nvidia CEO Jen-Hsun Huang has said that the company, like Google, will next be working on a version specially tailored for neural nets.)
As a new example of a deep learning system trained on Nvidia processors, we have the third generation Generative Pretrained Transformer (GPT-3), a natural language processing (NLP) system from San Francisco-based OpenAI. It performs most impressively on NLP tasks such as translation and answering questions; it even writes articles and blog posts that pass for human-made (nothing is sacred anymore, alas). With graphics processors, OpenAI trained the underlying neural network on an enormous data set and built a network so vast that it dwarfed even Microsoft’s latest powerful NLP system, Turing-NLG.
BTW, OpenAI is a company cofounded by Elon Musk, who has added to his investment in AI by starting Neuralink, a company whose goal is embedding chips in the human brain.
Another force driving progress in AI continues to gain momentum: private sector R&D. In the years following WWII, AI research was for the most part done in universities, largely sponsored by the military. Business interest grew with the development of Expert Systems (aka Rule-Based Systems) and then with the progress made with Connectionist models based on neural nets. In this century, the role of industry and business in the development of AI has become paramount. Not only is there money to be made but, thanks in large part to the Internet, business now has access to extraordinarily rich data sets with abundant information on millions and millions of people. This is exemplified by Chinese companies like Alibaba, ByteDance and Tencent which provide vertically integrated apps to their “customers.” So where Facebook and Google interact with users in delimited application programs (e.g. even Messenger is a different app from Facebook itself), in one product the Chinese companies offer everything from internet search to social media to messaging to on-line shopping to in-store payment and much more, all on your phone. (Interestingly, China has gone from a cash economy to a mobile device payment economy without passing through the credit card stage, a development accelerated by these AI technologies.) With that, these companies are integrating the online and offline shopping experience by making the product/consumer relationship symmetric (the product can now seek out the consumer as well), calling it OMO for “Online Merging with Offline.” For a cloying Alibaba video, click HERE.
A key to the success of these Chinese companies is their access to limitless consumer data – their customer base is huge to begin with and the vertically integrated apps log just about everything those customers do. Kai-Fu Lee, who headed up Google’s foray into China and who is now an AI venture capitalist in China, lays all this out in his book AI Superpowers: China, Silicon Valley and the New World Order (2018). He sums things up with a dash of humor by saying that data is the oil of the AI era and that China is the Saudi Arabia of data.
Indeed, the subject of data is much in the papers these days as we see the Trump administration acting to ban the Chinese apps TikTok and WeChat in the US, charging that the mass of data these apps collect could be used nefariously by the Chinese government or other actors. The Chinese are crying “foul” but then data is the new black gold. Apropos, because of the role of personal data in all this, the EU is worried about privacy and ethics and plans to regulate AI (NY Times, Feb 17, Business Section, Tech Titans). Such government interference could slow things down considerably – perhaps for the good – but this is not likely in the US and unthinkable in China.
As per a recent NY Times article, tension with China worries companies and university research labs in the US because, as the authors (ineptly) put it, “much of the groundbreaking working coming out the United States has been powered by Chinese brains” – a point that is bolstered by statistics on publications and talks at prestigious conferences etc.
As another example of this superpower AI relationship, we have TuSimple, a leader in driverless trucks: it is headquartered in Tucson AZ but its research lab is in Beijing.
Indeed, in the world of AI, the US-China relationship has become symbiotic. The thrust of Lee’s book is that China and the US are the two superpowers of AI and stand as frenemies together – leaving Japan, Europe and the rest in the dust. Looked at as a race between the two, Lee gives China an edge because of access to data, its ferociously competitive business culture and supportive government policy. But as is happening now, this duel will give these two superpowers a significant advantage over all the others. The side-effects will be widespread: e.g., through AI automation, much manufacturing will return to the US and the loss of manufacturing jobs will leave low wage countries like Vietnam in the lurch – capital always prefers machinery it can own to the worker it must pay (as per Karl Marx).
So the AI juggernaut looks poised for yet greater things despite the roadblocks created by software engineering issues and the plateauing of Moore’s Law. The road to the Singularity where machine intelligence catches up to human intelligence is still open. The next decade will see the 3rd Wave of AI and the futurologists are bullish. More to come.

Fascism and Trumpism

George Santayana famously wrote “Those who cannot remember the past are condemned to repeat it.” Perhaps a lesson from the 1930s can be applied to help understand the phenomenon of the Donald Trump presidency and its relation to Fascism.
Georges Bataille was a prolific French writer, philosopher and activist from the 1920s through the 1950s. He was associated with multiple movements, magazines, secret societies; he was colleague and/or friend to people with the most droppable names (Breton, Sartre, Camus, Orwell, … ). He influenced later philosophers such as (postmodernist) Michel Foucault, (deconstructionist) Jacques Derrida, (accelerationist) Nick Land, … . His elegant erotic fiction was described by Susan Sontag as “the most original and powerful intellectually” of its genre.
In his The Psychological Structure of Fascism (1933), Bataille analyzes the phenomenon of Fascism in Italy and Germany in the 1920s and 30s. Interestingly, this was published the same year as Wilhelm Reich’s classic The Mass Psychology of Fascism; put simplistically, Reich’s starting point is Freudian psychology, Bataille’s is the sociology of Durkheim and Mauss. But how does Bataille’s analysis match up with the phenomenon of the Trump presidency in the US?
To start, Bataille emphasizes the role of a base formed from a segment of society that was once part of the mainstream population but which has lost its status and has become part of a discontented, violence-prone, alienated class.
In the case of Trump, the base is the white working class, a once-proud group that has been dramatically marginalized in the US since Ronald Reagan’s presidency. Not long ago, there was the New Deal that empowered industrial unions; there was the Second World War where this class provided the bulk of the GIs; post WWII, people worked for powerful, successful, competitive, giant companies. These were the “30 glorious years” of post-War economic growth where union leaders like John L. Lewis (coal miners) and Walter Reuther (automobile workers) made regular appearances on TV. The white working class were fully part of mainstream America. But then by the late 1970s things began to change as US industry became hollowed out when confronted with keen competition from Europe and Asia. American capitalism switched from Industrial Capitalism to Financial Capitalism – more money was to be made by financing the deal than by actually making anything! The unions were crushed, the factories were closed. Except for those who managed to move into the middle class (mostly through a college education), the white working class has since been turned into an angry proletariat: economic insecurity, food insecurity, the opioid crisis, the breakdown of family structure, college unaffordable – all without the safety net provided to their own citizens by European countries. Characteristically, fascists promise a break with the present by means of reviving a mythologized past to restore the base to its position of prestige; thus Make America Great Again is the motto of the Trump ascendancy.
Bataille emphasizes the role of organized violence in the rise of fascism. In Europe, the fascists had para-military gangs from the outset – the blackshirts in Italy, the brownshirts in Germany. Trump is cultivating para-military support by bringing gun-wielding haters to his convention, by shout-outs to white supremacists, to QAnon and other conspiracy theory groups. He has deployed federal troops and Homeland Security agents to police the citizenry in violation of law and protocol.
Bataille discusses the all-important role of the leader (for which he employs the absolutely quaint French term meneur : a word one would use for, say, a goatherd). The Republican party is now totally Trump’s party. The Republicans have nominated him without a party platform – loyalty to the leader is the only thing required. This concentration of power gives the movement transcendence as Bataille puts it; Bataille notes further that the leader and his people must repeatedly break the law to firm up the belief that they are above the law, something Trump and his minions excel at – the Emoluments Clause, the Hatch Act, etc. In fact, this is a reason that people take seriously the threat that Trump will not honor the results of the upcoming election if the returns do not suit him.
Bataille argues that for fascism to triumph, the regime to overturn must be a weakened, liberal, democratic one: this certainly applies to the present day US where we have watched the legislature have its power eroded constantly in favor of the executive – reminiscent of the Senate of the Roman Republic.
Bataille emphasizes the role of esthetics – the uniforms, the rallies, the symbols, the cult rituals, the myths, the music and marches, the heroic architecture, the films. Somehow, this misogynist adulterer has co-opted the votes and symbols of Evangelical Christians and uses a Bible as a stage prop at every opportunity. The QAnon conspiracy theory even makes a messiah of him and he has grown into the role: “And we are actually, we’re saving the world.” For Trump, his “wall” is to be his architectural triumph, 40 feet high and extending 2,000 miles – and work has begun in places. He turns every appearance into a ritualistic political rally, but with the exception of some of Melania’s prison guard outfits, the MAGA hats and other swag fall way short of the esthetics of classical fascism.
Bataille does not bring up the role the “big lie” can play in fascist control of the population – one might say, though, that it falls under the category of breaking the laws of society. Certainly, the Nazis pushed this tactic to its limit with Goebbels saying “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.” Trump used this gimmick in his anti-Obama “birtherism” campaign which served to give him standing among racist segments of the population. Now Trump and his staff lie all the time. (Since his loss to Biden, this has kicked into overdrive with “the Big Steal,” Jan. 6 etc.)
Bataille addresses racism as a way to rile up the base and to define a scapegoat. Trump outrageously uses immigrants as a target: putting babies in cages, separating children from parents – sadism as theater. More and more he is appealing to out-and-out anti-Black racism, posing as the defender of Law and Order. His call to “save the suburbs” for white womanhood is the latest dog-whistle. For Bataille, racism is a tool for changing the nature of the host state. Certainly, the sense one has is that undoing Obama’s legacy with such relish is racist in origin – and “birtherism” is now being used against Kamala Harris.
The moral of the story: if Bataille is your guide, Trump is a fascist.

AI VII: Humankind and Machinekind

On the road to the Technological Singularity where machine intelligence catches up to human intelligence, we are seeing a symbiosis of mankind and machinekind taking place what with nanobots, brain implants, genetic engineering, etc.
The essence of the humankind/machinekind co-evolution was captured by Marshall McLuhan. In Understanding Media : The Extensions of Man, he wrote
     “Physiologically, man in the normal use of technology (or his variously extended body) is perpetually modified by it and in turn finds ever new ways of modifying his technology. Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms. The machine world reciprocates man’s love by expediting his wishes and desires, namely, in providing him with wealth.”
Thus, from cave painting and art to writing and manuscripts to printed books to the telegraph to the telephone to cinema and on to radio, television and the computer, these media have provided humankind with tools to extend the reach and depth of intelligent activity. For example: except for those working on numerical algorithms like Linear Programming, research mathematicians were indifferent to the development of computing power – until they learned that computers could draw! Then with visualization yielding insight, challenging long-open problems were solved – notably ones dealing with fractals and chaos!
McLuhan also foresaw the globalized world of today which he called “the global village.” The phenomenon of people all over the world staring into their smartphones and communicating over the World Wide Web would probably delight him.
But all this co-evolution is just the latest chapter in a long saga that goes back millions of years. For example, mastery of fire was a technological breakthrough that transformed humans from folivores into omnivores and from fearful prey into fearsome predators. Professor Richard Wrangham of Harvard’s Department of Human Evolutionary Biology is the author of Catching Fire: How Cooking Made Us Human. There he argues that mastery of fire was responsible for the extraordinary development of the human brain: apes spend their whole day eating raw food which requires enormous caloric expenditure to digest; cooked food is quickly eaten, easily digested and cooked meat especially is a marvelous source of protein; as a result, energy was liberated from the task of digestion and reallocated to power a larger brain (which itself requires enormous energy); the resulting free time was reallocated to a wider range of activities such as tool making which in turn drove technology further along.
The significance of human control of fire was not lost on the ancient Greeks: this empowerment of humankind by the titan Prometheus so shocked the Olympian gods that Zeus had him chained to a rock where vultures would pick at his liver for all eternity! (Luckily for Prometheus, he was eventually set free by Hercules.)
More recently (so to speak), the development of agriculture began a mere 12,000 years ago with the Neolithic Revolution and this powerful technology has directly impacted human evolution in multiple ways. Indeed, when hunter-gatherers metamorphosed into farmers and began civilization as we know it, they traded a life-style in which they were taller, healthier and longer-lived for one in which they were shorter, less healthy, had a shorter life span and (to make it worse) had to endure the economic and social hierarchy of a structure built on inequality, private property, slavery and a class system dominated by priests, nobles, chiefs and kings. To this list of negative developments, add warfare on an increasingly deadly scale and poxes from domestic animals.
It is hard to divine what true advantages agriculture and a sedentary life-style might have held for humans. Could population growth be an end in itself from an evolutionary biology point of view – genes selfishly seeking reproduction? Thinking cosmically, agriculture and sedentarism tapped into the extraordinary power of the sun to create surplus production. For the French philosophical writer Georges Bataille, this created a driving force for the human species: wasteful, excessive expense: “la dépense improductive.”
BTW, Bataille is also known for his elegant erotic fiction, described by Susan Sontag as “the most original and powerful intellectually” of the genre.
An insight of Bataille’s is that institutions that can exploit the intimidating power of wasteful expense are the ones that emerge and endure. Agriculture and wasteful expense indeed did lead to more complex societies and to constantly improving technologies. Bataille’s work (La Notion de dépense, La Part maudite) is not in the tradition of Adam Smith or Karl Marx or John Maynard Keynes but rather in that of Nietzsche and Freud – using the tools and logic of myth: most appropriate, as this is an area for which we do not have detailed data.
As examples of such wasteful expense, nobles and kings flaunt their wealth most spectacularly; warfare provides a most dramatic example. Moreover, archaeological evidence points to the fact that one of the first things these new farmers did was to make beer by fermenting grains – eventually tied to religious Bacchanalian rites. Indeed, religion offers a rich set of examples of wasteful, excessive expense from priestly raiment to magnificent temples to Homeric hecatombs to human sacrifice. Organized religion has the power to mobilize large numbers of people in nations and empires – even beyond national or imperial boundaries. Warfare and organized religion together are a most deadly force for waste and destruction: it was Pope Urban II who launched the Crusades with the cry “Deus Vult” (God wills it). As another example of religion’s ability to motivate people beyond tribe and nation, there is today’s Islamic Terrorism.
Technological acceleration and evolutionary acceleration are intertwined processes. Even after branching off from apes, human evolution proceeded very slowly, sped up in the last 2.5 million years, then again in the last million years and then even more quickly in the last 500,000 years. What feeds the steps in both technology and biological evolution is complexity – once the system reaches a certain complexity with high level controls, the step to the next level comes all the more quickly and leads to yet more complexity, sometimes to the point that technological progress can be exponential: there is the example of Moore’s Law that the power of computer chips doubles every 18 months; this has held since the mid-1970s.
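The arithmetic of that doubling is worth spelling out (a hedged back-of-the-envelope sketch; the round-number start and end years are assumptions, not precise dates):

```python
# Back-of-the-envelope Moore's Law arithmetic (round-number assumptions).
years = 2020 - 1975                 # roughly the span since the mid-1970s
doublings = years * 12 / 18         # one doubling every 18 months
growth = 2 ** doublings             # cumulative increase in chip power

print(doublings)                    # 30.0 doublings
print(f"{growth:.2e}")              # 1.07e+09 -- about a billion-fold
```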
All this means that technology is interacting with the ongoing process of human biological evolution, a process that itself is very much alive. For example, the mutation in Northern Europe for lactose tolerance in adults only goes back 4300 years. A similar mutation for lactose tolerance in East Africa only goes back 3000 years! The technology associated with these recent evolutionary events is cattle husbandry.
Interestingly, along with all this technological acceleration, the size of the human brain has been shrinking for thousands of years now and technology is needed to bolster it. It turns out that progress and modern life are contributing to this. Indeed, civilization is a form of domestication which dulls intelligence. Just as dogs are less intelligent than wolves and have smaller brains than wolves, we have measurably smaller brains than our cousins the Neanderthals did.
Environmental and cultural factors also influence natural selection and stone-age people encounter more challenges than the couch potatoes of the civilized world. To quote Jared Diamond, author of the influential high level history of civilization Guns, Germs and Steel :
    “modern ‘Stone Age’ peoples are on the average probably more intelligent, not less intelligent, than industrialized peoples”
Diamond adds that Stone Age populations are typically multilingual which also makes them more intelligent and, what is more, this makes them less susceptible to Alzheimer’s Disease!
The evolutionary explanation for this decrease in human brain power is simple – large brains are great consumers of energy and, as life gets easier and as technology assists us in our intellectual endeavors, brains can lessen in size and power as the requirements for survival of the species change. The difficulty of human birth also has something to do with it. Large head size means fewer surviving children and mothers, so this is resisted by evolution. Ultimately, intelligence is selected for only if it enhances survivability: evolution does not select for intelligence as an end in itself and civilization’s sociology and technology now make it possible to survive with less mental effort.
In the end, this is a tricky topic to discuss. Even the most rational scientists probably believe in their hearts that there has been some Design and Purpose in the evolution of human intelligence – especially their own. The nihilism of modernity serves up a gruel too thin. Futurist doctrines which hold that humanity is serving as a pass-through to introduce intelligence to the cosmos provide little comfort. Things have moved too far. Per Yeats’ The Second Coming : “The ceremony of innocence is drowned.”
Human-machine co-evolution has moved into a new phase today as man’s body and mind are connected to external devices and devices are implanted even into the human brain. Going forward, technology can serve to make up for the dulling of the human intellect. Human intellection will likely become a hybrid of biological and non-biological intelligence that risks becoming increasingly dominated by its non-biological component. Even short of reaching the Singularity, the social implications of all this are exhilarating for a futurist and frightening for a humanist. More to come. Affaire à suivre.

AI VI: Towards the Singularity

The next decade (2020-2030) will see the 3rd Wave of Artificial Intelligence (AI). Given the dazzling progress during the 2nd Wave, expectations are high. Another reason for this optimism is that technology feeds on itself and continually accelerates – for example, Moore’s Law: the power of computer chips doubles every 18 months. For the coming 3rd Wave, the futurologists predict that new systems will be able to learn, to reason, to converse in natural language and to generalize! A tall order, indeed.
One problem for this kind of progress is that current deep learning systems are muscle-bound and highly specialized for a specific domain. They depend on huge training sets and are supervised to master specific tasks. One side effect of all this is that, despite the large training sets, the systems are not good at truly exceptional situations that can pop up in the real world (“black swans” to economists, “long-tail distributions” to mathematicians). This is an especial problem for self-driving vehicles: human drivers with their ability to deal with the unexpected can handle an unusual situation while self-driving systems still cannot. Tesla’s self-driving vehicles have been involved in fatal accidents because of their inability to react to an unusual situation. Another recent example (2016) is that, when lines of salt were laid down on a highway in anticipation of a snow storm, Tesla’s self-driving vehicles confused the salt lines with the lane boundary lines.
In fact, critics have long argued that AI systems cannot “understand” what it is that they are doing. A classic attack was mounted by Berkeley Philosophy professor John Searle already in 1980 with his example of the “Chinese Room.” Roughly put, let us suppose there is an AI system that can read and respond in written Chinese; suppose a person in a room can take something written in Chinese characters and then (somehow) perform exactly the same mechanical steps on the data that the computer would in order to produce an appropriate response in Chinese characters. Although quite impressive, this performance by the person still would not mean that he or she understood the Chinese language in any real way – the corollary being that the AI system cannot be said to understand it either. Note that Searle’s argument implies that, for him at least, the Turing Test will not be a sufficient standard to establish that a machine can actually think!
For Searle’s talk on all this at Google’s Silicon Valley AI center, including an exchange with futurologist Ray Kurzweil, click HERE. For the record, Google also has AI centers in London, Zurich, Accra, New York, Paris and Beijing – not a shabby address among them.
Another serious issue is that biases can be embedded into a system too easily. Well known examples are face-recognition systems which perform badly on images of people of color. Google itself faced a public relations nightmare in 2015 when its photo-tagger cruelly mislabeled images of African-Americans.
In her thoughtful and well-written book Artificial Intelligence: A Guide for Thinking Humans (2019), AI researcher Melanie Mitchell describes how machine learning systems can fall victim to attacks. In fact, a whole field known as adversarial learning has developed where researchers seek out weaknesses in deep learning systems. And they are good at it! Indeed, it is often the case that changes imperceptible to humans in a photo, for example, can force a well-trained system to reverse direction and completely misclassify an image. Adversarial learning exploits the fact that we do not know how these AI systems are reaching the conclusions they do: a tweak to the data that would not affect our judgment can confuse an AI system which is reacting to clues in the data that are completely different from the ones humans react to.
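The core trick behind many such attacks can be sketched in a few lines (a minimal, hypothetical illustration on a toy linear classifier – not one of Mitchell’s examples or any real vision system): nudging every input coordinate a tiny amount in the worst-case direction adds up to a flipped decision, even though no single coordinate changes perceptibly.

```python
import numpy as np

# Toy linear "classifier": the sign of w . x decides the class.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)        # fixed, known weights (a white-box attack)
x = 0.1 * np.sign(w)             # a hypothetical input, classified as +1

eps = 0.2                        # tiny per-coordinate perturbation budget
x_adv = x - eps * np.sign(w)     # step each coordinate against the weights

print(int(np.sign(w @ x)))       # 1: the original classification
print(int(np.sign(w @ x_adv)))   # -1: flipped, though no coordinate moved much
```

The many tiny per-coordinate nudges accumulate in the dot product – which is one standard explanation of why high-dimensional systems are so brittle.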
Indeed, we do not really understand how an AI system “reasons.” That is, we do not know “why” the system does what it does; its set of “reasons” for a conclusion might not at all be anything like what ours would be. A related issue is that the systems cannot explain why and how they arrive at their results. This is an issue for mathematical algorithms as well – there it is a deep problem because extracting an explanation can bring us right back to the dreaded specter of Combinatorial Explosion!
Mitchell also remarks on how IBM’s Watson, though renowned for its Jeopardy victory, has not proved all that successful when ported to domains like medicine and finance. Its special skill at responding to single-answer queries does not carry over well to other areas. Also, for those who watched the shows, the human players outperformed Watson on the more difficult “answers” while Watson’s strength was fielding the easier ones almost instantaneously.
However, futurologists hold that the 3rd Wave of AI will break through these barriers, developing systems that will be proficient at perceiving, learning, reasoning and generalization; they will not require such huge training sets or such extensive human supervision.
In any case, where AI will next have its most dramatic impact going forward is the area of Bionics, which will bring us into the “transhuman era”: when, using science and technology, the human race evolves beyond its current physical and mental limitations.
To start, life expectancy will increase. Futurologists “joke” that as we age, accelerated medicine will be able to buy us 10 more years every 10 years, thus making us virtually immortal.
In fact, we are already putting computers—neural implants—directly into people’s brains to counteract Parkinson’s disease and tremors from multiple sclerosis. We have cochlear implants that restore hearing. A retinal implant has been developed that provides some visual perception for some blind individuals, basically by replacing certain visual-processing circuits of the brain.
Recently Apple and Stanford Medical announced an app where an Apple Watch checks constantly for cardiac issues and, if something is detected, it prompts a call to the wearer’s iPhone from a telehealth doctor. Indeed, in the future we will be permanently connected to the internet for monitoring and for cognitive enhancement, the surveillance state on steroids.
We have already reached the point where there are AI based prostheses such as artificial hands which communicate with receptors implanted in the brain. For example, the BrainGate company’s technology uses dime sized computer chips that connect the mind to computers and the internet: the chip is implanted into the brain and attached to connectors outside the skull which are hooked up to external computers; for one application, the computers can be linked to a robotic arm that a handicapped patient can control with his or her thoughts.
N.B. BrainGate is headed up by entrepreneur Jeff Stibel, cofounder with the late Kobe Bryant of the venture capital firm Bryant-Stibel. Indeed, Kobe Bryant was a man of parts.
Nanotechnology is the manipulation of matter at the atomic and molecular scale, down to the level of one billionth of a meter: to track the pioneering basic research in this field, round up some of the usual suspects – Cal Tech, Bell Labs, IBM Research, MIT. This technology promises a suite of miracles far into the future. By way of example, a nanobot will sail around the insides of the body searching out and destroying cancer cells. It is expected that nanobots embedded in the brain will be capable of reprogramming neural connections to enhance human intellectual power.
Indeed, we are at the dawn of a new era, where biology, mathematics, physics, AI and computer science more generally all converge and combine – a development heralded in Prof. Susan Hockfield’s new book The Age of Living Machines (2020). Reading Hockfield’s book is like reading Paul de Kruif’s Microbe Hunters – the excitement of the creation of a science. Thus, viruses are being employed to build lithium-ion batteries using nanomaterials – to boot, these new batteries will be environmentally safe!
So the momentum is there. A confluence of sciences is thrusting humanity forward to a Brave New World, to the Technological Singularity where machine intelligence catches up to human intelligence. The run up to the Singularity already will have a profound impact on human life and human social structure. Historically, humanity and its technology have co-evolved, as seen in the record of human biological evolution and its accompanying ever accelerating technological progress. But this time there is reason to fear that dystopia awaits us and not a better world. The surveillance state is here to stay. The new developments in Bionics are bringing us to the point where the specter of a caste of Nietzschean supermen looms large – there is no reason to suppose that Bionics will be uniformly available to the human population as a whole; worse, many think that race as well as class will play a role in future developments. The list goes on. More to come. Affaire à suivre.

AI V: The Random and the Quantum

As children, we first encounter randomness in flipping a coin to see who goes first in a game or in shuffling cards for Solitaire – nothing terribly dramatic. Biological evolution, on the other hand, uses randomness in some of the most important processes of life such as the selection of genes for transfer from parent to child. Physical processes also exhibit randomness such as collisions of gas molecules or the “noise” in communication signals. And then randomness is a key concept in Quantum Mechanics, the physics of the subatomic realm: the deterministic laws of Newton are replaced by assertions about likely outcomes formulated using mathematical statistics and probability theory – worse, one of its fundamental principles is called the Heisenberg Uncertainty Principle.
But randomness has given Artificial Intelligence (AI) researchers and others a rich set of new algorithmic tools and some real help in dealing with the issues raised by exponential growth and combinatorial explosion.
Indeed, with the advent of modern computing machinery, mathematicians and programmers quickly introduced randomization into algorithms; the practice is coeval with the Computer Age – mathematicians working laboriously by hand are simply not physically able to weave randomness into an algorithmic process. Randomized algorithms require long sequences of random numbers (bits, actually). But while nature manages this brilliantly, it is a challenge for programmers: computing pioneer John von Neumann pontifically said “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.” But sin they did, and techniques using random numbers felicitously named “Monte-Carlo algorithms” were developed at Los Alamos in the immediate aftermath of the Manhattan Project and quickly proved critical in nuclear weapons work, notably the design of the Hydrogen Bomb. (“Las Vegas algorithms” came in later!)
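The flavor of a Monte-Carlo algorithm can be captured in a few lines of Python – this classic sketch estimates π by throwing random points at a square and counting how many land inside the quarter circle (the sample count and seed are arbitrary choices for illustration):

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

estimate = monte_carlo_pi(100_000)
print(estimate)  # close to 3.14159, and improving as n_samples grows
```

No single run is exact – the whole point is that the statistical average converges on the right answer, which is why long, high-quality streams of random bits matter.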
Monte-Carlo methods are now in widespread use in fields such as physics, engineering, climate science and (wouldn’t you know it!) finance. In the Artificial Intelligence world, Monte Carlo methods are used for systems that play Go, Tantrix, Battleship and other games. Randomized algorithms are employed in machine learning, robotics and Bayesian networks. AI researchers reverse-engineered some of Nature’s methods and applied them to problems that are subject to combinatorial explosion. For example, the process of evolution itself uses randomness for mutations and gene crossover to drive natural selection; genetic algorithms, introduced in the 1960s by John Holland, use these stratagems to develop codes for all sorts of practical challenges – e.g. scheduling, routing, machine learning, etc.
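To give a feel for these stratagems, here is a toy genetic algorithm in Python. The “OneMax” objective (evolve a bit string toward all ones), the population size and the mutation rate are illustrative choices, not Holland’s original formulation – but selection, crossover and mutation are all there:

```python
import random

def genetic_onemax(length=20, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm: evolve bit strings toward all ones ("OneMax")
    using tournament selection, single-point crossover and random mutation."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit string = its number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():
        # tournament selection: keep the fitter of two random individuals
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)      # single-point gene crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):             # occasional random mutation
                if rng.random() < 0.01:
                    child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop
    return max(fitness(ind) for ind in pop)

best = genetic_onemax()
print(best)  # the fittest individual is at or near the maximum of 20
```

The same loop, with a different fitness function, drives genetic algorithms for scheduling, routing and the rest.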
The phenomenon of exponential growth continues to impede progress in AI, in Operations Research and other fields. But hope springs eternal and perhaps the physical world can provide a way around these problems. And, in fact, quantum computing is a technology that has the potential to overcome the combinatorial explosion that limits the practical range of mathematical algorithms. The role of randomness in Quantum Mechanics is the key to the way Quantum Mechanics itself can deal with the problem of an exponential number of possibilities.
Just as the Theory of Relativity is modern physics at a cosmic scale, Quantum Mechanics is modern physics at the atomic and sub-atomic scale. A rich, complex, mind-bending theory, it started with work of Max Planck in 1900 to account for the mysterious phenomenon of “black-body radiation” – mysterious in that the laws of physics could not account for it properly. Planck solved this complicated problem by postulating that energy levels could only increase in whole number multiples of a minimal increment, now called a quantum. Albert Einstein was awarded his Nobel prize for work in 1905 analyzing an equally mysterious phenomenon, the “photo-electric effect,” in terms of light quanta (aka photons). Einstein showed that light itself comes in these discrete packets; the famous picture of electrons making discrete “quantum leaps” from one energy level to another came soon after, with Niels Bohr’s model of the atom (1913).
The first pervasive practical application of quantum mechanics was the television set – in itself a confounding tale of corporate game-playing and human tragedy; for the blog post on this click HERE . Then there were the laser, the transistor, the computer chip, etc.
Digital computers are not able to simulate quantum systems of any complexity since they are confronted with an exponential number of possibilities to consider, another example of combinatorial explosion. In a paper published in 1982, Simulating Physics with Computers, Nobel laureate Richard Feynman looked at this differently; since simulating a quantum experiment is a computational task too difficult for a digital computer, a quantum experiment itself must be actually performing a prodigious act of computation; he concluded that to simulate quantum systems you should try to build a new kind of computational engine, one based on Quantum Mechanics – quantum computers ! (Moreover, for the experts, the existing field of quantum interferometry with its multiparticle interference experiments would provide a starting point! In other words, they knew where to begin.)
In Quantum Mechanics, there is no “half a quantum” and energy jumps from 1 quantum to 2 quanta without passing through “1.5 quanta.” But there are more strange phenomena to deal with such as entanglement and superposition.
The phenomenon of entanglement refers to the fact that two particles can share a single quantum state: even after they are separated in space, a measurement of one instantly determines the corresponding outcome for the other; so although there is no physical link between them, the particles still influence each other. Here we have a return to physics of the magical “action at a distance,” a concept that goes back to the Middle Ages and William of Ockham (of “razor” renown); but “action at a distance” was something thought banished from serious physics by Maxwell’s Equations in the 19th century. However, entanglement has been verified experimentally with photons, neutrinos, and other sub-atomic particles.
The phenomenon of superposition also applies to sub-atomic particles. But it is traditionally illustrated by the fable of Schrödinger’s Cat. There is a small amount of radioactive material which has a half-life of one hour, meaning that the material will decay within one hour with 50% probability – the decay is a random subatomic event that can happen at any time within the hour. A cat is in a box and an apparatus is set up so that the cat will be poisoned if and only if the radioactive material actually decays; an outside observer will not know if any of this has happened. At the end of the hour, the apparatus that would poison the cat is turned off. Naively, when the hour is up, one would think that the cat in the box is either alive or dead. However, if the cat is thought of as a quantum system like an electron or photon, while waiting in the box the cat is in two superposed states simultaneously because of the 50% probability that it is alive and the 50% probability that it is dead; only when the diligent outside observer opens the box and the cat is seen does the superposition collapse into one state (alive) or the other (dead).
Ironically, this fable was intended to poke fun at the interpretation of Quantum Mechanics put forth by the Copenhagen School led by Niels Bohr but it has become the classic story for illustrating superposition.
Confusing? Well, Niels Bohr famously said “And anyone who thinks they can talk about quantum theory without feeling dizzy hasn’t yet understood the first word about it.” But physicists are proud of the “quantum weirdness” of entanglement and superposition and even have Bell’s Theorem, a fundamental result that proves that quantum weirdness unavoidably comes with the territory.
All this made Albert Einstein himself uncomfortable with Quantum Mechanics; he described entanglement as “spooky action at a distance.” He made light of the field itself for its reliance on probability and randomness, saying “God does not play dice with the universe,” to which Bohr responded “Einstein, don’t tell God what to do.”
Quantum computing is based on the qubit, a quantum system that emulates the {0,1} of the traditional binary bit of digital computers by means of two distinguishable quantum states. But, because qubits behave quantumly, one can capitalize on “quantum weirdness.”
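A minimal sketch of what “measuring a qubit” means: a state alpha|0> + beta|1> yields outcome 0 with probability |alpha|^2 and outcome 1 with probability |beta|^2. The little Python simulation below is classical, of course – it only mimics the measurement statistics – and the choice of an equal superposition echoes Schrödinger’s 50/50 cat:

```python
import math
import random

def measure_qubit(alpha, beta, shots, seed=0):
    """Simulate repeatedly measuring a qubit in state alpha|0> + beta|1>:
    outcome 0 occurs with probability |alpha|^2, outcome 1 with |beta|^2."""
    rng = random.Random(seed)
    p0 = abs(alpha) ** 2
    counts = {0: 0, 1: 0}
    for _ in range(shots):
        counts[0 if rng.random() < p0 else 1] += 1
    return counts

# An equal superposition -- like the cat, it is 50/50 only upon measurement
amp = 1 / math.sqrt(2)
counts = measure_qubit(amp, amp, 10_000)
print(counts)  # roughly 5000 of each outcome, never anything "in between"
```

What a real quantum computer adds, and what no classical simulation captures cheaply, is interference between the amplitudes of many qubits at once.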
Loosely speaking, a quantum computer solves a problem by exploring all the possibilities in the search space simultaneously, without committing to any one of them; only at the end, when its state is measured (or observed), does it – as with Schrödinger’s Cat – collapse, and a well-designed quantum algorithm arranges things so that it settles, with high probability, into the desired values for the qubits.
In late 2019, Google announced a milestone in quantum computing: in 200 seconds their quantum system completed a task that, they estimated, would take a traditional digital supercomputer about 10,000 years! (The task was to sample the output of a pseudo-random quantum circuit.) And they claimed Quantum Supremacy in the race with IBM. The latter contested this claim, of course, saying (among other things) that their digital computers could solve that problem in only 2.5 days! So the jury is still out on all this – but something is happening and both companies are investing most seriously in the field.
With quantum computing, Mathematical Logic might still play an important role in reaching the Technological Singularity (the point where machine intelligence will surpass human intelligence) if the impasse presented by combinatorial explosion can indeed be broken through. One thing is for sure: quantum computing would signal the end of the encryption technique which is the basis of the s (for secure) in “https://” ; this technique (known by its authors’ initials as RSA) relies on the practical impossibility of factoring very large numbers with today’s algorithms – and Shor’s quantum algorithm would do exactly that efficiently. But have no fear, Microsoft and others are already working on “post-quantum” encryption designed to be immune to quantum computing.
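To see what is at stake, here is textbook RSA with toy primes in Python – utterly insecure at this size, but it shows how the private key falls straight out of the factors of n, which is exactly what a quantum computer running Shor’s algorithm could recover for large n:

```python
def toy_rsa():
    """Textbook RSA with tiny primes -- insecure at this size, but it shows
    why security rests on factoring: knowing n = p*q's factors reveals d."""
    p, q = 61, 53                  # the secret primes
    n = p * q                      # public modulus: 3233
    phi = (p - 1) * (q - 1)        # 3120, computable only from p and q
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent: modular inverse of e
    message = 65
    cipher = pow(message, e, n)    # encrypt with the public key (e, n)
    plain = pow(cipher, d, n)      # decrypt with the private key (d, n)
    return cipher, plain

cipher, plain = toy_rsa()
print(cipher, plain)  # 2790 65 -- decryption recovers the message
```

Real RSA uses primes hundreds of digits long, putting the factoring step out of reach of classical algorithms – for now.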
Scientists have long marveled at the extraordinary fit between mathematics and physics, from the differential equation to the geometry of space-time – to the extent that today theoretical physicists start by looking at the mathematics to point them in new directions. Quantum computing would be an elegant “thank you” on the part of physics; it would be a revolution in our understanding of algorithms and a challenge to the extended Church-Turing Thesis, which holds that digital computers can efficiently simulate any reasonable model of computation. (The original thesis – that the Turing Machine model represents the theoretical limit of what can ever be computed – would stand: quantum computers promise speed, not new computable functions.)
So after a bumpy start in the 1950s and ‘60s and a history of over-promising, AI appears to be well poised for the race to the Technological Singularity: the field has had several decades of accelerating progress; its effects can be seen everywhere, often anthropomorphized as with Siri and Alexa. So what can be expected in the next decade, in the 3rd Wave of AI? More to come. Affaire à suivre.

AI IV: Exponential Growth

The phenomenon of exponential growth is having an impact on the way Artificial Intelligence (AI) is bringing us to the Technological Singularity, the point at which machine intelligence will catch up to human intelligence – and surpass it quickly thereafter.
The phenomenon of exponential growth is also much with us today because of the coronavirus, whose spread gives a simple example: 1 person infects 2, 2 infect 4, 4 infect 8, and so on; after 21 rounds of this doubling, over 4 million people have been infected in all. Each step doubles the number of new patients, and by then the numbers are huge – and all that from one initial case.
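The arithmetic is easy to check in a few lines of Python:

```python
def doubling_spread(rounds):
    """Each round of new infections doubles the previous round: 1, 2, 4, 8, ...
    Returns (new cases in the last round, cumulative total with patient zero)."""
    new, total = 1, 1
    for _ in range(rounds):
        new *= 2
        total += new
    return new, total

new, total = doubling_spread(21)
print(new, total)  # 2097152 new cases in round 21; 4194303 infected in all
```

After r rounds the cumulative total is 2^(r+1) - 1, which is why the numbers explode so quickly from a single case.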
A recent New York Times article (April 25) uses the example of a pond being overrun by lily pads to illustrate the exponential spread of the virus. At first there is only one lily pad but once the pond is half-covered with lily pads, the next day the entire pond is covered.
For another example, in a classic tale from medieval India, the King wants to reward the man who has just invented Chess; the man requests 1 grain of wheat on a corner square of the chess board and then twice the previous amount on each successive square until all squares are accounted for; naively, the King agrees thinking he is getting a bargain of sorts; but after a few squares, they realize that soon there would be no wheat left in the kingdom! According to some accounts, the inventor does not survive the King’s wrath; according to others he becomes a high ranking minister. BTW, in the end the total number of grains of wheat on the chessboard would come to 18,446,744,073,709,551,615 – over 18 quintillion, way more than current annual world wheat production.
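The chessboard total is a one-line check in Python – a geometric series whose sum has the closed form 2^64 - 1:

```python
# Grains on the chessboard: 1 + 2 + 4 + ... + 2**63 = 2**64 - 1
total = sum(2 ** square for square in range(64))
print(total)                 # 18446744073709551615
print(total == 2 ** 64 - 1)  # True: the geometric series in closed form
```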
Such huge numbers were not new to the Hindu sages of the Middle Ages; in fact it was they who invented the ten digits {0,1,2,3,4,5,6,7,8,9} that are universally used today. Their motivation came from Hindu Vedic cosmology where very, very large numbers are required; for example, a day for Brahma, the creator, endures for about 4,320,000,000 solar years – in Roman numerals that would take over 4 million M’s. The trick in the base 10 Hindu-Arabic numerals is that the numbers represented can grow exponentially in size while the lengths of their written representations grow one digit at a time; thus, 10 to the power n is written down using n+1 digits: 100 is 10 squared, 1000 is 10 cubed, etc. The same kind of thing applies to the numbers in base 2 used in digital computers: 2 to the power n, 2^n, is written down using n+1 bits: 100 is 4, 1000 is 8, 10000 is 16, etc. Roman numerals (and the equivalent systems that were used throughout the medieval world) are mathematically in base 1, so there is no real compression – even with the abbreviation M for 1000, the written form is still one-thousandth the size of the number itself, and so 1 quintillion still needs 1 quadrillion M’s. For yet another historical note, the word algorithm is derived from the name of the medieval Persian mathematician Al-Khwarizmi, who wrote a treatise on the Hindu-Arabic numerals in the early 9th Century.
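To make the compression concrete, here is a small Python comparison of positional base-10 notation with a base-1 tally that uses M for 1000:

```python
def decimal_digits(n):
    """Positional base-10 notation: length grows with the logarithm of n."""
    return len(str(n))

def roman_style_ms(n):
    """A base-1 system with the shorthand M = 1000: writing n still takes
    about n / 1000 symbols -- no real compression."""
    return n // 1000

quintillion = 10 ** 18
print(decimal_digits(quintillion))  # 19 digits
print(roman_style_ms(quintillion))  # 1000000000000000 M's -- a quadrillion
```

Nineteen digits versus a quadrillion M’s: that gap is the whole revolution of positional notation.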
The introduction of the Hindu-Arabic numerals is arguably the most important event in the history of computation – critical to AI where the fundamental idea is that reasoning is a form of computation, a point made already in 1655 by dystopian British philosopher Thomas Hobbes who wrote “By reasoning, I understand computation.”
Compound interest, too, is an example of exponential growth. The principal on a 30-year balloon mortgage for $100,000 at 7.25 percent interest compounded annually would double in 10 years, double again in 20 years and come to over $800,000 at maturity! The interest feeds on itself. Interestingly, when Fibonacci popularized the magical Hindu-Arabic numerals in Europe with his Book of Calculation (Liber Abaci, 1202), he included the example of compound interest (nigh impossible using Roman numerals – in any case, charging compound interest was outlawed in the Roman Empire). Eventually, the Florentine bankers further up the River Arno from Fibonacci’s home town of Pisa took notice and the Renaissance was born – the rest is history. BTW, the storied sequence of Fibonacci numbers also grows exponentially; for more on this part of the story, click HERE . For the whole story, consult the catchily titled Finding Fibonacci: The Quest to Rediscover the Forgotten Mathematical Genius Who Changed the World by Keith Devlin.
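A quick Python check of the mortgage example (assuming, as in the text, annual compounding at 7.25 percent):

```python
def balance(principal, rate, years):
    """Compound interest: the balance grows exponentially in the years."""
    return principal * (1 + rate) ** years

b10 = balance(100_000, 0.0725, 10)
b30 = balance(100_000, 0.0725, 30)
print(round(b10))  # about 201,000 -- the principal has roughly doubled
print(round(b30))  # about 816,000 -- over $800,000 at maturity
```

The rule of 72 gives the same answer by hand: 72 / 7.25 is just under 10, the doubling time in years.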
Yet another example is due to the English country parson Thomas Malthus. In An Essay on the Principle of Population (1798), writing in opposition to the feel-good optimism of the Enlightenment, he argued that the food supply will only grow at a slow pace but that the population will increase exponentially leading to an eventual catastrophe. (Malthus employed the terminology of infinite series, describing the growth of the food supply as arithmetic and that of the population as geometric.) However, even as the earth’s population has increased apace since WW II, the food supply has kept up – thanks on the one hand to things like the Green Revolution and on the other hand to things like animal cruelty, antibiotic resistant bacteria, methane pollution, overzealous insecticides, the crash of the bee population, overfishing, GMO frankenfoods, the food divide, deforestation and widespread spoliation of the environment. Clearly, we have to do a better job; however, despite the fact that all this craziness is well known, protest has been ineffective; “hats off” though to the B-film industry, which spoke truth to power with the surprise hit horror movie Frankenfish (2004).
Especially appalling from an historical viewpoint are the packed feed lots where beef cattle are fed corn (which makes them sick) to fatten them for market. The glory of the Indo-European tribes was their cattle and their husbandry – a family’s worth was the size of their herd! From the Caspian Steppe, some went East to India where today the cow is iconic; others went to Europe and then as far as the North American West to find grazing land where the cattle could eat grass and ruminate. The root of the Latin word for money pecunia is the word for cow: pecus; the connection persists to this day in English words like pecuniary and impecunious. So deep is the connection that feeding corn to beef cattle would surely bring tears to the pagan gods of Mt. Olympus and Valhalla.
A most famous example of exponential growth in technology is Moore’s Law: in 1975, Gordon E. Moore, a founder of Intel, noted that the number of transistors on a microchip was doubling every two years even as the cost was being halved – adding that this was likely to continue. Amazingly this prediction has held true into the 21st Century and the number of transistors on an integrated circuit has gone from thousands to billions. The phenomenon of Moore’s Law appears to be leveling off now but it might well take off again with new insights. Indeed, exponential bursts are part of the evolutionary process that is technology. Like compound interest, progress feeds on itself. The future is getting closer all the time – advances that once would have been the stuff of science fiction can now be expected in a decade or two. This is an important element in the march toward the Technological Singularity.
However, exponential growth of a different kind can be a stumbling block for AI and other areas of Computer Science because it leads to Combinatorial Explosion: situations where the time for an algorithm to return a solution would surpass the life-expectancy of our solar system if not of the galaxy. This can happen if the size of the “search space” grows exponentially: there will simply be too many combinations that the algorithm will have to account for. For example, consider the archetypical Traveling Salesman Problem (TSP): given a list of cities, the distances between them and the city to start from, find the shortest route that visits all the cities and returns to the start city. For n cities, the number of possible routes that any exhaustive algorithm has to reckon with in order to return the optimal solution is on the order of n factorial – the product of all the numbers from 1 through n – a quantity that grows much faster than the 2^n of the examples above.
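A brute-force TSP solver makes the explosion palpable; the 4-city distance matrix below is made up for illustration, and the factorial at the end shows how hopeless exhaustive search becomes:

```python
from itertools import permutations
from math import factorial

def tsp_brute_force(dist):
    """Try every route that starts and ends at city 0: (n-1)! permutations."""
    n = len(dist)
    best_len, best_route = float("inf"), None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        length = sum(dist[route[i]][route[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_route = length, route
    return best_len, best_route

# A tiny symmetric distance matrix for 4 cities (hypothetical distances)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best_len, best_route = tsp_brute_force(dist)
print(best_len, best_route)  # 18 (0, 1, 3, 2, 0)
print(factorial(20))  # 2432902008176640000 routes for a mere 21 cities
```

Four cities means 6 routes; 21 cities already means over 2 quintillion – no hardware improvement outruns a factorial.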
One reason that we resort often to the TSP as an example is that it is representative of a large class of important combinatorial problems – a good algorithm for one can translate into a good algorithm for all of them. (These challenging problems are known in the trade as NP-Complete.) Current analysis, based on the universal role of problems like the TSP, makes a strong case that fundamentally better algorithms cannot be found for any of them – this is not unrelated to the limits on provability and computability uncovered by Gödel and Turing. Another representative of this class is the problem of satisfiability in propositional logic: p → q, truth-tables and all that. As a result, Mathematical Logic as such does not play the role in AI that was once predicted for it; the stunning progress in machine learning and other areas has relied on Connectionism, Bayesian Networks, emulating biological and physical processes etc. rather than on Logic itself.
One side effect of this humbling of Logic is that we are beginning to look with greater respect at models of intelligence different from our own (conscious) way of thinking. Up till now, our intelligence has been the gold standard – for philosophers the definition itself of intelligence, for theologians even the model for the mind of God.
While a direct attack on the phenomenon of Combinatorial Explosion seems unlikely to yield results, researchers and developers have turned to techniques that use randomness, Statistics and Probability to help in decision making in applications. Introducing uncertainty into algorithms might well make Al-Khwarizmi turn in his grave, but it has worked for Quantum Mechanics – the physics that brought us television and the transistor. And it has also worked in AI systems for decision making that deal with uncertainty, notably with Bayesian Networks. So perhaps randomized algorithms and even Quantum Mechanics can open the way to some further progress in AI on this front. Then too the creativity of researchers in exploiting the genius of nature knows no limits and can boggle the imagination: on the horizon are virus-built lithium batteries and amoeba-inspired algorithms. More to come. Affaire à suivre.

AI III: Connectionism

DARPA stands for Defense Advanced Research Projects Agency, a part of the US Department of Defense that has played a critical role in funding scientific projects since its creation in 1958, among them the ARPANET which has morphed into the Internet and the World Wide Web. DARPA has also been an important source of funding for research into Artificial Intelligence (AI). Following a scheme put forth by John Launchbury, then director of the DARPA I2O (Information Innovation Office), the timeline of AI can be divided into three parts like Caesar’s Gaul. The 1st Wave of AI went from 1950 to the turn of the millennium. The 2nd Wave of AI went from 2000 to the present. During this period advances continued in fields like expert systems and Bayesian networks; search based software for games like chess also advanced considerably. However, it is in this period that Connectionism – imitating the way neurons are connected in the brain – came into its own.
The human brain is one of nature’s grandest achievements – a massively parallel, multi-tasking computing device (albeit rather slow by electronic standards) that is the command and control center of the body. Some stats:
It contains about 100 billion nerve cells (neurons) — the “gray matter.”
It contains about 700 billion nerve fibers (1 axon per neuron and 5-7 dendrites per neuron) — the “white matter.”
The neurons are linked by 100 trillion connections (synapses) — structures that permit a neuron to pass an electrical or chemical signal to another neuron or to a target cell.
The Connectionist approach to AI employs networks implemented in software known as “neural networks” or “neural nets” to mimic the way the neurons in the brain function. Connectionism proper begins in 1943 with a paper by Warren McCulloch and Walter Pitts which provided the first mathematical model of an artificial neuron. This inspired the single-layer perceptron network of Frank Rosenblatt whose Perceptron Learning Theorem (1958) showed that machines could learn! However, this cognitive model was soon shown to be very limited in what it could do, which did dull the enthusiasm of AI funding sources – but the idea of machine learning by means of neuron-like networks was established and research went on.
So, already by the 1980s, the connectionist model was expanded to include more complex neural networks, composed of large numbers of units together with weights that measure the strength of the connections between the units – in the brain, if enough input accumulates at a neuron, it then sends a signal along the synapses extending from it. These weights model the effects of the synapses that link one neuron to another. Neural nets learn by adjusting the weights according to a feedback method which reacts to the network’s performance on test data, the more data the better – mathematically speaking this is a kind of non-linear optimization driven by gradient calculations, statistics and more.
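The weight-adjustment idea can be seen in miniature with a single artificial neuron trained by Rosenblatt’s perceptron rule; the AND function, the learning rate and the epoch count here are illustrative choices:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's rule: nudge the weights whenever the prediction is wrong,
    in proportion to the error and the input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred        # feedback from performance on the data
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function -- linearly separable, hence learnable
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Modern deep networks replace this single unit with millions of units in many layers and the simple nudge with gradient-based optimization, but the learn-from-error loop is the same.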
These net architectures have multiplied and there are now not only classical neural nets but also convolutional neural nets, recurrent neural nets, neural Turing Machines, etc. Along with that, there are multiple new machine learning methods such as deep learning, reinforcement learning, competitive learning, etc. These methods are constantly improving and constitute true engineering achievements. Accordingly, there has been progress in the handling of core applications like text comprehension and translation, vision, sensor technology, voice recognition, face recognition, etc.
Popular apps such as eHarmony, Tinder, ancestry.com and 23AndMe all use AI and machine learning in their mix of algorithms. These algorithms are purported to have learned what makes for a happy marriage and how Italian you really can claim to be.
IBM’s Watson proved it had machine-learned just about everything with its victory on Jeopardy; its engine is now being deployed in areas such as cancer detection, finance and eCommerce.
In 2014, Google purchased DeepMind, a British AI company founded in 2010, and soon basked in the success of DeepMind’s Go playing software. First there was AlphaGo which stunned the world by beating champion player Lee Se-dol in a five game match in March, 2016 – something that was thought to be yet years away as the number of possible positions in Go dwarfs that of Chess. But things didn’t stop there: AlphaGo was followed by AlphaGo Master, which defeated world champion Ke Jie in 2017, and then by AlphaZero. In fact, AlphaZero can learn how to play multiple games such as Chess and Shogi (Japanese Chess) as well as Go; what is more, AlphaZero does not learn by playing against human beings or other systems: it learns by playing against itself – playing against humans would just be a waste of precious time!
Applying machine learning to create a computer that can win at Go is a milestone. But applying machine learning so that a robot can enjoy “on the job training” is having more of an impact on the world of work. For example, per a recent NY Times article, an AI trained robot has been deployed in Europe to sort articles for packing and shipping for eCommerce. The robot is trained using reinforcement learning, an engineering extension of the mathematical optimization technique of dynamic programming (the one used by GPS systems to find the best route). This is another example where the system learns pretty much on its own; it is also an example of serious job-killing technology – one of the unsettling things about AI’s potential to force changes in society even beyond the typical “creative destruction” of free-market capitalism.
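The dynamic-programming idea behind such route finding – the best cost from a node is the cheapest edge plus the best cost from where that edge leads – can be sketched in a few lines of Python; the road map and travel times below are hypothetical:

```python
def shortest_paths(graph, target):
    """Dynamic programming (Bellman's principle of optimality): the best cost
    from a node is the cheapest edge plus the best cost from its neighbor."""
    cost = {node: float("inf") for node in graph}
    cost[target] = 0.0
    for _ in range(len(graph) - 1):   # relax every edge until costs stabilize
        for node, edges in graph.items():
            for neighbor, weight in edges.items():
                cost[node] = min(cost[node], weight + cost[neighbor])
    return cost

# A toy one-way road map: edge weights are travel times (made-up numbers)
roads = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},
}
costs = shortest_paths(roads, "D")
print(costs)  # {'A': 6.0, 'B': 5.0, 'C': 3.0, 'D': 0.0}
```

Reinforcement learning generalizes exactly this recursion to settings where the “edge weights” must be discovered by trial and error rather than read off a map.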
Another way AI is having an impact on society is through surveillance technology: from NSA eavesdropping to hovering surveillance drones to citywide face recognition cameras. London, once the capital of civil liberties and individual freedom, has become the surveillance capital of the world – but (breaking news) Shanghai has already overtaken London in this dystopian competition. What is more, we are now subjecting ourselves to constant monitoring: our movements traced by our cellphones, our keystrokes logged by social media.
In the process, the surveillance state has created its own surveillance capitalism: our personal behavioral data are amassed by AI enhanced software – Fitbit, Alexa, Siri, Google, FaceBook, … ; the data are analyzed and sold for targeted advertising and other feeds to guide us in our lives; an example: as one googles work on machine intelligence, Amazon drops ads for books on the topic (e.g. The Sentient Machine) onto one’s FaceBook page. This is only going to get worse as the internet of things puts sensors and listening devices throughout the home and machines start to shepherd us through our day – a GPS for everything, adieu free will! For the in-depth story of this latest chapter in the history of capitalism, consult Shoshana Zuboff’s The Age of Surveillance Capitalism (2019).
Word to the wise: machine intelligence is one thing but do avoid googling eHarmony or Tinder – the surveillance capitalists do not know that’s part of your innocent research endeavors.
Moreover, there is the emerging field of telehealth: the provision of healthcare remotely by means of telecommunications technology. In addition to office visits via Zoom or Skype or WhatsApp, there are wearable devices that monitor one’s heart function and report via the internet to an algorithm that checks for abnormalities etc. Such devices are typically worn for a week or so and then have to be carefully returned. Recently Apple and Stanford Medical have produced an app where an Apple Watch checks constantly for cardiac issues and, if something is detected, it prompts a call to the wearer’s iPhone from a telehealth doctor. Indeed, in the future we will be permanently connected to the internet for monitoring – the surveillance state on steroids.
In fact, all this information about us lives a life parallel to our own out in the cloud – it has become our avatar, and for many purposes it is more important than we are.
The English philosopher Jeremy Bentham is known for his Utilitarian principle: “it is the greatest happiness of the greatest number of people that is the measure of right and wrong.” From the 1780s on, Bentham also promoted the idea of the panopticon, a prison structured so that the inmates would be under constant surveillance by unseen guards – click  HERE and scroll down for a visualization. To update a metaphor from French post-modernist philosopher Michel Foucault, with surveillance technology we have created our own panopticon – one in which we dwell quietly and willingly as our every keystroke, every move is observed.
Some see an upside to all this connectivity: back in 2004, Google’s young founders told Playboy magazine that one day we would have direct access to the Internet through brain implants, with “the entirety of the world’s information as just one of our thoughts.” This hasn’t happened quite yet but one wouldn’t want to bet against Page and Brin. Indeed, we are now entering the 3rd Wave of AI which the DARPA schedule has lasting until 2030 – the waves get shorter as progress builds on itself. So what can be expected in the next decade, in this 3rd Wave? And then what? More to come. Affaire à suivre.

Pandas and Pandemics

After the Chinese invasion and takeover of Tibet in the 1950s, China became a practitioner of panda diplomacy where it would send those cuddly bears to zoos around the world to improve relations with various countries. But back then, China was still something of a sleepy economic backwater. Napoleon once opined “Let China sleep, for when she wakes she will shake the world” – an admonition that has become a prediction. In recent times, China has emerged as the most dynamic country on the planet: Shanghai has long replaced New York and Chicago as the place for daring sky-scraper architecture, the Chinese economy is the second largest in the world, the Silk Road project extends from Beijing across Asia and into Europe itself. Indeed, considerable Silk Road investment in Northern Italy in industries like leather goods and fashion has brought tens of thousands of Chinese workers (many illegally) to the Bel Paese to make luxury goods with the label “Made In Italy”; this has led to scheduled direct flights from Milan Bergamo Airport (BGY) to Wuhan Airport (WUH) – the root cause of the virulence of the corona virus outbreak in Northern Italy.
Indeed, China is the center of a new kind of capitalist system, state controlled capitalism where the government is the principal actor. But the government of China is run by the Chinese Communist Party, the organization founded by Mao Zedong and others in the 1920s to combat the gangster capitalism of the era – for deep background, there is the campy movie classic Shanghai Express (Marlene Dietrich: “It took more than one man to change my name to Shanghai Lily”). So has the party of the people lost its bearings or is something else going on? Mystère.
The Maoist era in China saw the economy mismanaged, saw educated cadres and scientists exiled to rural areas to learn the joys of farming, saw the Great Leap Forward lead to the deaths of millions. Following Mao’s own death in 1976, Deng Xiaoping emerged as “paramount leader” of the Communist Party and began the dramatic transformation of the country into the economic behemoth it is today.
Deng’s modernization program took on new urgency in the early 1990s when the fall of the Soviet Union made Western capitalism a yet more formidable opponent. However, the idea that capitalist economic practices were going to prove necessary on the road to Communism was not new, far from it. Marx himself wrote that further scientific and industrial progress under capitalism was going to be necessary to have the tools in place for the transition to communism. Then too there was the example of Lenin who thought the Russia inherited from the Czars was too backward for socialism; in 1918 he wrote
    “Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.”
Accordingly, Lenin resorted to market-based incentives with the New Economic Policy in the USSR of the 1920s. So, there is nothing new here: in China, Communism is alive and well – just taking a deep breath, getting its bearings.
Normally we associate capitalism with agile democracies like the US and the UK rather than autocratic monoliths like China. But capitalism has worked its wonders before in autocratic societies: prior to the World Wars of the 20th century, there was a thriving capitalist system in Europe in the imperial countries of Germany and Austria-Hungary which created the society that brought us the internal combustion engine and the Diesel engine, Quantum Mechanics and the Theory of Relativity, Wagner and Mahler, Freud and Nietzsche. All of which bodes well for the new China – to some libertarian thinkers, democracy just inhibits capitalism.
The US formally recognized the People’s Republic of China in 1979, a few years after Nixon’s legendary visit and the gift of pandas Ling-Ling and Hsing-Hsing to the Washington DC zoo. Deng’s policies were bolstered too by events in the capitalist world itself. There was the return of Japan and Germany to dominant positions in high-end manufacturing by the 1970s: machine tools, automobiles, microwave ovens, cameras, and so on – with Korea, Taiwan and Singapore following close behind. Concomitantly, the UK and the US turned from industrial capitalism to financial capitalism in the era of Margaret Thatcher and Ronald Reagan. Industrial production was de-emphasized as more money could be made in financing things than in making them. This created a vacuum, and China was poised to fill it – rural populations were uprooted to work in manufacturing plants in often brutal conditions, ironically creating in China itself the kind of social upheaval and exploitation of labor that Marx and Engels denounced in the 19th century. But the resulting boom in the Chinese economy led to membership in the World Trade Organization in 2001!
The displaced rural populations were crammed into ever more crowded cities. This exacerbated the serious problems China has long had with transmission of animal viruses to humans – Asian Flu, Hong Kong Flu, Bird Flu, SARS and now COVID-19. The fact that China is no longer isolated as it once was but a huge exporter and importer of goods and services from all over the world has made these virus transmissions a frightening global menace.
The coronavirus pandemic is raging as the world finds itself in a situation eerily like that of August 1914 in Europe: two powerful opposing capitalist systems – one led by democracies, the other by an autocratic central government. The idea of a full-scale war between nuclear powers is unthinkable. Instead, there is hope that this latest virus crisis will be for China and the West what William James called “the moral equivalent of war,” leading to joint mobilization and cooperation to make the world a safer place; hopefully, in the process, military posturing will be transformed into healthy economic competition so that the interests of humanity as a whole are served in a kinder, gentler world. Hope springs eternal, perhaps naively – but consider the alternative.

AI II : First Wave — 1950 – 2000

Alan Turing was a Computer Science pioneer whose brilliant and tragic life has been the subject of books, plays and films – most recently The Imitation Game with Benedict Cumberbatch. Turing and others were excited by the possibility of Artificial Intelligence (AI) from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test where, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” – whence the movie title. (Futurologist Ray Kurzweil has predicted that a machine will pass the Turing Test by the year 2029.)
Historically, the practice of war has hewn closely to developments in technology. And warfare, in turn, has made demands on technology. Indeed, even men of genius like Archimedes and Leonardo da Vinci developed weapons systems. However, the relationship between matters military and matters technological became almost symbiotic with WWII. Technological feats such as radar, nuclear power, rockets, missiles, jet planes and the digital computer are all associated with the war efforts of the different powers of that conflict. Certainly, the fundamental research behind these achievements was well underway by the 1930s but the war determined which areas of technology should be prioritized, thereby creating special concentrations of brilliant scientific talent. The Manhattan Project itself is studied as a model of large scale R&D; furthermore, the industrial organization of the war period and military operations such as countering submarine warfare gave rise to a new mathematical discipline, aptly called Operations Research, which is now taught in Business Schools under the name Management Science.
In his masterful treatise War in the Age of Intelligent Machines (1991), Manuel DeLanda summarizes it thus: “The war … forged new bonds between the military and scientific communities. Never before had science been applied at so grand a scale to such a variety of warfare problems.”
However, the reliance on military funding may now be skewing technological progress, leading it in less fruitful directions than capitalism or science-as-usual would take on their own. Perhaps this is why, instead of addressing the environmental crisis, post-WWII technological progress has perfected drones and fueled the growth of organizations such as the NSA and its surveillance prowess.
All that said, since WWII the US Military has been a very strong supporter of research into AI; in particular funding has come from the Defense Advanced Research Projects Agency (DARPA). It is worth noting that one of their other projects was the ARPANET which was once the sole domain of the military and research universities; this became the internet of today when liberated for general use and eCommerce by the High Performance Computing and Communications Act (“Gore Act”) of 1991.
The field of AI was only founded formally in 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term Artificial Intelligence itself was coined.
Following a scheme put forth by John Launchbury of DARPA, the timeline of AI can be broken into three parts. The 1st Wave (1950-2000) saw the development of four fundamental approaches to AI – one based on powerful Search Algorithms, one on Mathematical Logic, one on algorithms drawn from the natural world and one on Connectionism, imitating the structure of neurons in the human brain. Connectionism developed slowly in the 1st Wave but exploded in the 2nd Wave (2000-2020). We are now entering the 3rd Wave.
Claude Shannon, a scientist at the legendary Bell Labs, was a participant at the Dartmouth conference. His earlier work on implementing Boolean Logic with electromagnetic switches is the basis of computer circuit design – this was done in his Master’s Thesis at MIT making it probably the most important Master’s Thesis ever written. In 1950, Shannon published a beautiful paper Programming a Computer for Playing Chess, which laid the groundwork for games playing algorithms based on searching ahead and evaluating the quality of possible moves.
Fast Forward: Shannon’s approach led to the triumph in 1997 of IBM’s Deep Blue computer which defeated reigning chess champion Garry Kasparov in a match. And things have accelerated since – one can now run even more powerful codes on a laptop.
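Shannon’s look-ahead-and-evaluate scheme is easiest to see on a game far simpler than chess. Here is a minimal sketch in Python – not Shannon’s code, and a toy stone-taking game rather than chess, chosen so the full game tree can be searched:

```python
def value(stones):
    """Look-ahead search (negamax form): +1 if the player to move can
    force a win, -1 otherwise. Toy game: take 1, 2 or 3 stones per turn;
    whoever takes the last stone wins."""
    if stones == 0:
        return -1  # the opponent just took the last stone: we have lost
    # try every legal move and assume the opponent replies perfectly
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the move whose look-ahead score is best for us."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -value(stones - take))
```

Chess programs replace the exact win/loss value with a heuristic evaluation function and cut the search off at a fixed depth – that substitution was Shannon’s central proposal.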
Known as the “first AI program”, Logic Theorist was developed in 1956 by Allen Newell, Herbert A. Simon and Cliff Shaw – Simon and Newell were also at the Dartmouth Conference (Shaw wasn’t). The system was able to prove 38 of the first 52 theorems from Russell and Whitehead’s Principia Mathematica and in some cases to find more elegant proofs! Logic Theorist established that digital computers could do more than crunch numbers, that programs could deal with symbols and reasoning.
With characteristic boldness, Simon (who was also a Nobel prize winner in Economics) wrote
     [We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.
Again with his characteristic boldness, Simon predicted in 1957 that computer chess programs would outperform humans within “ten years” but that was wrong by some thirty years! In fact, “over-promising” has plagued AI over the years – but presumably all that is behind us now.
AI has also proved seductive to researchers and companies, sometimes at a cost. For example, at Xerox PARC in the 1970s, the computer mouse, the Ethernet and WYSIWYG editors (What You See Is What You Get) were invented. However, rather than commercializing these advances for a large market as Apple would do with the Macintosh, Xerox produced the Dandelion – a $50,000 workstation designed for work on AI by elite programmers.
The Liar’s Paradox (“This statement is false”) was magically transformed into the Incompleteness Theorem by Kurt Gödel in 1931 by exploiting self-reference in systems of mathematical axioms. With Turing Machines, an algorithm can be the input to an algorithm (even to itself). And indeed, the power of self-reference gives rise to variants of the Liar’s Paradox that become theorems about Turing machines and algorithms. Thus, in general, the only way to tell how long a program will run is to run it; and, be warned, it might run forever – and there is no sure way to tell that in advance.
In a similar vein, it turns out that the approach through Logic soon ran into the formidable barrier called Combinatorial Explosion, where every known algorithm takes far too long to reach a conclusion on large instances of certain mathematical problems – for example, there is the Traveling Salesman Problem:
     Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.
This math problem is not only important to salesmen but is also important for the design of circuit boards, for DNA sequencing, etc. Again, the impasse created by Combinatorial Explosion is not unrelated to the limitative results in Mathematics and Computer Science uncovered by Gödel and Turing.
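The explosion is easy to make concrete: the only sure-fire method is to try every possible tour, which works for a handful of cities and is hopeless soon after. Here is a sketch with made-up distances:

```python
from itertools import permutations

def tour_length(tour, dist):
    """Length of the round trip visiting cities in the given order."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    """Try all (n-1)! tours starting from city 0 -- exact, but the
    running time explodes combinatorially with n."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda rest: tour_length((0,) + rest, dist))
    return (0,) + best, tour_length((0,) + best, dist)

# 4 cities: only 3! = 6 tours to check; at 20 cities there are
# already 19!, about 1.2 x 10**17 tours
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, shortest = brute_force_tsp(dist)
```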
Nature encounters difficult mathematical problems all the time and responds with devilishly clever algorithms of its own making. For example, protein folding is a natural process that solves an optimization problem as challenging as the Traveling Salesman Problem; the algorithm doesn’t guarantee the best possible solution but always yields a very good one. The process of evolution itself uses randomness, gene crossover and fitness criteria to drive natural selection; genetic algorithms, introduced in the 1960s by John Holland, adapt these ideas to develop codes for all sorts of practical challenges – e.g. scheduling elevator cars. Then there is the technique of simulated annealing – the name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. This technique has been applied to myriad optimization problems including the Traveling Salesman Problem. A common feature of these and many other AI algorithms is the resort to randomness; this is special to the Computer Age – mathematicians working laboriously by hand just are not physically able to weave randomness into an algorithmic process.
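A sketch of simulated annealing applied to the Traveling Salesman Problem (the cooling schedule and the 4-city distances are illustrative, not a tuned solver): early on, the high “temperature” lets the search accept worse tours – the injected randomness that escapes local minima – and as the system cools, only improvements survive.

```python
import math
import random

def anneal_tsp(dist, steps=20000, t0=10.0, seed=0):
    """Simulated annealing for the TSP: propose a random change (reverse
    a segment of the tour), always accept improvements, and accept
    worsenings with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    n = len(dist)

    def length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    order = list(range(n))
    cur = length(order)
    best_order, best = order[:], cur
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9                       # linear cooling
        i, j = sorted(rng.sample(range(n), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # reverse a segment
        delta = length(cand) - cur
        if delta < 0 or rng.random() < math.exp(-delta / t):
            order, cur = cand, cur + delta
            if cur < best:
                best_order, best = order[:], cur
    return best_order, best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, best = anneal_tsp(dist)
```

On large instances no optimal answer is guaranteed, but in practice the method reliably returns very good tours – exactly the trade nature makes with protein folding.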
Expert Systems are an important technology of the 1st Wave; they are based on the simplified logic of if-then-rules:
    If it’s Tuesday, this must be Belgium.
As the rules are “fired” (applied), a database of information called a “knowledge base” is updated, making it possible to fire more rules. Major steps in this area include the DENDRAL and MYCIN expert systems developed at Stanford University in the 1960s and 1970s.
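A forward-chaining rule engine of this kind fits in a few lines; the rules below are invented for illustration (riffing on the movie title above) – a toy sketch in the spirit of 1st Wave systems, not MYCIN or DENDRAL themselves:

```python
def forward_chain(rules, facts):
    """Fire if-then rules against a growing knowledge base until nothing
    new can be derived. Each rule is (set_of_premises, conclusion)."""
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)   # the rule "fires"; the knowledge base grows
                fired = True
    return facts

# Invented rules for illustration
rules = [
    ({"it is Tuesday"}, "this must be Belgium"),
    ({"this must be Belgium"}, "expect excellent chocolate"),
]
kb = forward_chain(rules, {"it is Tuesday"})
```

Note how the second rule can only fire after the first has enlarged the knowledge base – that chaining is the essence of the technique.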
A problem for MYCIN, which assisted doctors in the identification of bacteria causing infections, was that it had to deal with uncertainty and work with chains of propositions such as:
“Presence of A implies Condition B with 50% certainty”
“Condition B implies Condition C with 50% certainty”
One is tempted to say that presence of A implies C with 25% certainty, but (1) that is not mathematically correct in general and (2) if applied to a few more rules in the chain that 25% will soon be down to an unworkable 1.5%.
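The collapse is plain arithmetic – naively multiplying certainty factors along a chain of rules, precisely the move that is not mathematically justified in general:

```python
def chained_certainty(factors):
    """Naively multiply rule certainties along a chain of inferences."""
    product = 1.0
    for f in factors:
        product *= f
    return product

two_rules = chained_certainty([0.5, 0.5])   # 0.25
six_rules = chained_certainty([0.5] * 6)    # 0.015625 -- about 1.5%, unworkable
```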
Still, MYCIN was right about 65% of the time, meaning it performed as well as the expert MDs of the time. Another problem came up, though, when a system derived from MYCIN was being deployed in the 1970s: back then MDs did not type! Still, this area of research led to the development of Knowledge Engineering Environments which built rules derived from the knowledge of experts in different fields – here one problem was that the experts (stock brokers, for example) often did not have enough expertise to encode to make the enterprise worthwhile, although they could type!
For all that, Rule Based Systems are widespread today. For example, IBM has a software product marketed as a “Business Rules Management System.” A sample application of this software is that it enables an eCommerce firm to update features of the customer interaction with its web page – such as changing the way to compute the discount on a product – on the fly without degrading performance and without calling IBM or having to recompile the system.
To better deal with reasoning and uncertainty, Bayesian Networks were introduced by UCLA Professor Judea Pearl in 1985 to address the problem of updating probabilities when new information becomes available. The term Bayesian comes from a theorem of the 18th century Presbyterian minister Thomas Bayes on what is called “conditional probability” – here is an example of how Bayes’ Theorem works:
    In a footrace, Jim has beaten Bob only 25% of the time but of the 4 days they’ve done this, it was raining twice and Jim was victorious on one of those days. They are racing again tomorrow. What is the likelihood that Jim will win? Oh, one more thing, the forecast is that it will certainly be raining tomorrow.
At first, one would say 25% but given the new information that rain is forecast, a Bayesian Network would update the probability to 50%.
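The footrace update is a direct application of Bayes’ Theorem, P(win | rain) = P(rain | win) · P(win) / P(rain):

```python
def bayes_update(p_win, p_rain_given_win, p_rain):
    """Bayes' Theorem: P(win | rain) = P(rain | win) * P(win) / P(rain)."""
    return p_rain_given_win * p_win / p_rain

# Jim won 1 of 4 races (25%); it rained on 2 of the 4 days (50%);
# his single win came on a rainy day, so P(rain | win) = 1.
p_win_given_rain = bayes_update(p_win=0.25, p_rain_given_win=1.0, p_rain=0.5)
```

The forecast of rain doubles Jim’s chances, from 25% to 50% – exactly the revision the Bayesian Network makes.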
Reasoning under uncertainty is a real challenge. A Nobel Prize in economics was awarded in 2002 to Daniel Kahneman for his work with the late Amos Tversky on just how ill-equipped humans are to deal with it. (For more on their work, there is Michael Lewis’s best-selling book The Undoing Project.) As with MYCIN where the human experts themselves were only right 65% of the time, the work of Kahneman and Tversky illustrates that medical people can have a lot of trouble sorting through the likely and unlikely causes of a patient’s condition – these mental gymnastics are just very challenging for humans and we have to hope that AI can come to the rescue.
Bayesian Networks are impressive constructions and play an important role in multiple AI techniques including Machine Learning. Indeed Machine Learning has become an ever more impressive technology and underlies many of the success stories of Connectionism and the 2nd Wave of AI. More to come.

AI I: Pre-History — 500 BC to 1950 AD

Artificial Intelligence (AI) is the technology that is critical to getting humanity to the Promised Land of the Singularity, where machines will be as intelligent as human beings.
The roots of modern AI can be traced to attempts by classical philosophers to describe human thinking in systematic terms.
Aristotle’s syllogism exploited the idea that there is structure to logical reasoning:
    “All men are mortal; Socrates is a man; therefore Socrates is mortal.”
The ancient world was puzzled by the way syntax could create semantic confusion; for example, The Liar’s Paradox: “This statement is false” is false if it is true and true if it is false.
Also, in the classical world, there was what became the best-selling textbook of all time: Euclid’s Elements, where all plane geometry flows logically from axioms and postulates.
The greatest single advance in computational science took place in northern India in the 6th or 7th century AD – the invention of the Hindu-Arabic numerals. The Hindu sages encountered the need for truly large numbers for Hindu Vedic cosmology – e.g. a day for Brahma, the creator, endures for about 4,320,000,000 solar years (in Roman numerals that would take over 4 million M’s). This advance made its way west across the Islamic world to North Africa. The father of the teenage Leonardo of Pisa (aka Fibonacci) was posted to Bejaia (in modern Algeria) as commercial ambassador of the Republic of Pisa. Fibonacci brought this number system back to Europe and in 1202 published his Book of Calculation (Liber Abaci) which introduced Europe to its marvels. With these numerals, all the computation was done by manipulating the symbols themselves – no need for an external device like an abacus or the calculi (pebbles) of the ancient Romans. What is more, with these 10 magic digits, as Fibonacci demonstrates in his book, one could compute compound interest and the Florentine bankers further up the River Arno from Pisa soon took notice.
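The sort of calculation that Liber Abaci made routine takes only a few lines today (the principal and rate below are illustrative, not Fibonacci’s):

```python
def compound(principal, rate, periods):
    """Compound interest: each period, interest accrues on the
    accumulated total, not just on the original principal."""
    amount = principal
    for _ in range(periods):
        amount += amount * rate
    return amount

total = compound(100.0, 0.10, 3)   # 100 -> 110 -> 121 -> 133.1 over 3 years at 10%
```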
In the Middle Ages in Western Europe, Aristotle’s Logic was studied intently as some theologians debated the number of angels that could fit on the head of a pin while others – notably St. Anselm and St. Thomas Aquinas – formulated proofs of the existence of God. A paradoxical proof of God’s existence was given by Jean Buridan: consider the following pair of sentences:
    God exists. Neither of the sentences in this pair is true.
Since the second one cannot be true, the existence of God follows. Buridan was a true polymath, one making significant contributions to multiple fields in the arts and sciences – a glamorous and mysterious figure in Paris life, though an ordained priest. His work on Physics was the first serious break with Aristotle’s cosmology; he introduced the concept of “inertia” and influenced Copernicus and Galileo. The leading Paris philosopher of the 14th Century, he is known for his work on the doctrine of Free Will, a cornerstone of Christianity. However, the name “Buridan” is actually better known for “Buridan’s Ass,” the donkey who could never choose which of two equally tempting piles of hay to eat from and died of starvation as a result of this “embarrassment of choice.” The attribution is apparently specious – the tale does not appear in any of Buridan’s writings – and seems contrived by his opponents to mock his work on free will: Buridan taught that simply realizing which of two choices was evil and which was moral was not enough; an actual decision still required an act of will.
Doubtlessly equally unfounded is the tale that Buridan was stuffed in a sack and drowned in the Seine by order of King Louis X because of his affair with the Queen, Marguerite of Burgundy – although this story was immortalized by the immortal poet François Villon in his Ballade des Dames du Temps Jadis, the poem whose refrain is “Where are the snows of yesteryear” (Mais où sont les neiges d’antan); in the poem Villon compares the story of Marguerite and Buridan to that of Héloïse and Abélard!
In the late Middle Ages, Ramon Llull, the Catalan polymath (father of Catalan literature, mathematician, artist whose constructions inspired work of superstar architect Daniel Libeskind), published his Ars Magna (1305) which described a mechanical method to help in arguments, especially in ones to win Muslims over to Christianity.
François Viète (aka Vieta in Latin) was another real polymath (lawyer, mathematician, Huguenot, privy councilor to kings). At the end of the 16th Century, he revolutionized Algebra, replacing the awkward Arab system with a purely symbolic one; Viète was the first to say “Let x be the unknown” – he made Algebra a game of manipulating symbols. Before that, in working out an Algebra problem, one actually thought of “10 squared” as a 10-by-10 square and “10 cubed” as a 10-by-10-by-10 cube.
Llull’s work is referenced by Gottfried Leibniz, the German polymath (great mathematician, philosopher, diplomat) who in the 1670s proposed a calculus for philosophical reasoning based on his idea of a Characteristica Universalis, a perfect language which would provide for a direct representation of ideas.
Leibniz also references Thomas Hobbes, the English polymath (philosopher, mathematician, very theoretical physicist). In 1655, Hobbes wrote: “By reasoning, I understand computation.” This assertion of Hobbes is the cornerstone of AI today; cast in modern terms: intelligence is an algorithm.
Blaise Pascal, the French polymath (mathematics, philosophy, theology), devised a mechanical calculation engine in 1645; in the 1800s, Charles Babbage and Ada Lovelace worked on a more ambitious project, the Analytical Engine, a proposed general computing machine.
Also in the early 1800s, there was the extraordinarily original work of Évariste Galois. He boldly applied one field of Mathematics to another, the Theory of Groups to the Theory of Equations. Of greatest interest here is that he showed that there were problems for which no appropriate algorithm existed. With his techniques, one can show, for example, that there is no general method to trisect an angle using a ruler and compass – Euclid’s Elements presents an algorithm of this type for bisecting an angle. Tragically, Galois was embroiled in the violent politics surrounding the overthrow of Charles X and was killed in a duel at the age of twenty in 1832. He is considered to be the inspiration for the young hero of Stendhal’s novel Lucien Leuwen.
Later in the 19th Century, we have George Boole, whose calculus of Propositional Logic is the basis on which computer chips are built, and Gottlob Frege, who dramatically extended Boole’s Logic to First Order Logic, which allowed for the development of systems such as Alfred North Whitehead and Bertrand Russell’s Principia Mathematica and other Set Theories; these systems provide a framework for axiomatic mathematics. Russell was particularly excited about Frege’s new logic, which is a notable advance over Aristotle: while Aristotle could prove that Socrates was mortal, the syllogism cannot deal with binary relations as in
    “All lions are animals; therefore the tail of a lion is a tail of an animal.”
Aristotle’s syllogistic Logic is still de rigueur, though, in the Vatican where some tribunals yet require that arguments be presented in syllogistic format!
Things took another leap forward with Kurt Gödel’s landmark On Formally Undecidable Propositions of Principia Mathematica and Related Systems, published in 1931 (in German). In this paper, Gödel builds a programming language and gives a data structuring course where everything is coded as a number (formulas, the axioms of number theory, proofs from the axioms, properties like “provable formula”, …). Armed with the power of recursive self-reference, Gödel ingeniously constructed a statement about numbers that asserts its own unprovability. Paradox enters the picture in that “This statement is not provable” is akin to “This sentence is false” as in the Liar’s Paradox. All this self-reference is possible because with Gödel’s encoding scheme everything is a number – formulas, proofs, etc. all live in the same universe, so to speak. First Order Logic and systems like Principia Mathematica make it possible to apply Mathematics to Mathematics itself (aka Metamathematics) which can turn a paradox into a theorem.
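Gödel’s encoding trick – everything becomes a number – can be illustrated with the prime-exponent scheme he used for sequences (a toy version; the symbol codes are assumed positive so the encoding is unambiguous):

```python
PRIMES = (2, 3, 5, 7, 11, 13)

def godel_number(codes):
    """Encode a sequence of positive symbol codes as one number:
    2**a1 * 3**a2 * 5**a3 * ... -- a miniature of Goedel's scheme."""
    n = 1
    for prime, code in zip(PRIMES, codes):
        n *= prime ** code
    return n

def decode(n):
    """Recover the codes by repeated division: unique factorization
    guarantees the encoding is lossless."""
    codes = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break   # codes are positive, so exponent 0 marks the end
        codes.append(e)
    return codes
```

Once formulas and proofs are numbers, statements about provability become statements about numbers – which is how a paradox about language turns into a theorem about arithmetic.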
For a theorem of ordinary mathematics that is not provable from the axioms of number theory but is not self-referential, Google “The Kanamori-McAloon Theorem.”
The Incompleteness Theorem rattled the foundations of mathematics but also sowed the seeds of the computer software revolution. Gödel’s paper was quickly followed by several different formulations of a mathematical model for computability, a mathematical definition of the concept of algorithm: the Herbrand-Gödel Recursive Functions, Church’s lambda-calculus, Kleene’s μ-recursive functions – all were quickly shown to be equivalent, that any algorithm in one model had an equivalent algorithm in each of the others. That these models must capture the notion of algorithm in its entirety is known as Church’s Thesis or the Church-Turing Thesis.
In 1936 Alan Turing published a paper “On Computable Numbers with an Application to the Entscheidungsproblem” – German was still the principal language for science! Here Turing presented a new mathematical model for computation – the “automatic machine” of the paper, the “Turing Machine” of today. Turing proved that this much simpler model of computability, couched in terms of a device manipulating 0s and 1s, is equivalent to the other schemes. Furthermore, Turing demonstrated the existence of a Universal Turing Machine which can emulate the operation of any Turing machine M given the description of M in 0s and 1s along with the intended input: the Universal Turing Machine deciphers the description of M and performs the same operations on the input as M would, yielding the same output. This is the inspiration for stored programming – in the early days of computing machinery, one had to rewire the machine, change external tapes, swap plugboards or reset switches if the problem was changed; with Turing’s setup, you just load the algorithm along with the data into the memory of the machine – the algorithm and the data live in the same universe. In 1945, John Von Neumann joined the team under engineers John Mauchly and J. Presper Eckert and wrote up a report on the design of a new digital computer, “First Draft of a Report on the EDVAC.” Stored programming was the crucial innovation of the Von Neumann Architecture. Because of the equivalence of the mathematical models of computability and Church’s Thesis, Von Neumann also knew that this architecture captured all possible algorithms that could be programmed by machine – subject only to limitations of speed and size of memory. (In the future, though, quantum computing could challenge Church’s Thesis.)
With today’s computers, it is the operating system (Windows, Unix, Android, macOS, …) that plays the role of the Universal Turing Machine. Interestingly, the inspiration for the Turing Machine was not a mechanical computing engine but rather the way a pupil in an English school of Turing’s time used a European style notebook with graph paper pages to do math homework.
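The Universal Machine idea – a machine description is just data that an interpreter can run – fits in a few lines of Python (a miniature sketch, not Turing’s original formalism):

```python
def run_turing(program, tape, state="start", max_steps=1000):
    """Interpret a Turing-machine description: a dict mapping
    (state, symbol) to (symbol_to_write, move, new_state). One
    interpreter runs any machine -- universality in miniature."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):   # bounded, since halting is undecidable!
        if state == "halt":
            break
        symbol = cells.get(pos)  # None plays the role of the blank symbol
        write, move, state = program[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return [v for _, v in sorted(cells.items()) if v is not None]

# A machine that flips every bit of its input, then halts on the first blank
flip = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
    ("start", None): (None, "R", "halt"),
}
out = run_turing(flip, [1, 0, 1])
```

The machine description `flip` and the input tape both sit in the interpreter’s memory as ordinary data – the same insight that stored programming brought to the EDVAC.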
BTW, Gödel and Turing have both made it into motion pictures. Gödel is played by Lou Jacobi in the rom-com I.Q. and Turing is played by Benedict Cumberbatch in The Imitation Game, a movie of a more serious kind.
Computer pioneers were excited by the possibility of Artificial Intelligence from the outset, in Turing’s case at least from 1941. In his 1950 paper Computing Machinery and Intelligence, Turing proposed his famous test where, roughly put, a machine would be deemed “intelligent” if it could pass for a human in an interactive session he called “The Imitation Game” (whence the movie title). Futurologist Ray Kurzweil (now at Google) has predicted that a machine will pass the Turing Test by the year 2029. But gurus have been wrong in the past. Nobel Prize-winning economist and AI pioneer Herbert Simon boldly predicted in 1957 that computer chess programs would outperform humans within “ten years” but that was wrong by some thirty years!
In his 1951 talk at the University of Manchester entitled Intelligent Machinery: A Heretical Theory, Turing spoke of machines that will eventually surpass human intelligence: “once the machine thinking method has started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” From the beginning, the Singularity was viewed with a mixture of wonder and dread.
But the field of AI wasn’t formally founded until 1956; it was at a summer research conference at Dartmouth College, in Hanover, New Hampshire, that the term Artificial Intelligence was coined. Principal participants at the conference included Herbert Simon as well as fellow scientific luminaries Claude Shannon, John McCarthy and Marvin Minsky.
Today, Artificial Intelligence is simultaneously bringing progress and miracles to humankind on the one hand and representing an existential threat to humanity on the other. Investment in AI research is significant and it is proceeding apace in industry and at universities; the latest White House budget includes $1.1B for AI research (NY Times, Feb. 17, 2020) reflecting in part the interest of the military in all this.
The principal military funding agency for AI has been the Defense Advanced Research Projects Agency (DARPA). According to a schema devised by DARPA people, AI has already gone through two phases and is at the beginning of the 3rd Phase now. The Singularity is expected in a 4th Phase which will begin around 2030 according to those who know.
More to come.

Toward the Singularity

Futurology is the art of predicting the technology of the future.
N.B. We say “futurology” because the term “futurism” denotes the Italian aesthetic movement “Il Futurismo”: it began with manifestos – the Manifesto del Futurismo (1909), which glorified the technology of the automobile and its speed and power, followed by two manifestos on technology and music, Musica Futurista (1912) and L’arte dei Rumori (1913). The movement’s architectural aesthetic can be appreciated at Rockefeller Center; its members also included celebrated artists like Umberto Boccioni whose paintings are part of the permanent collection of the MoMA in New York.
We live in an age of accelerating technological forward motion and this juggernaut is hailed as “bearer of the future.” However, deep down, genuine distrust of science and “progress” has always been there. Going back in history, profound discomfort with technology is expressed in the Greek and Roman myths. The Titan Prometheus brings fire to mankind but he is condemned by the gods to spend eternity with an eagle picking at his liver. Vulcan, the god of fire, is a master craftsman who manufactures marvels at his forge under the Mt. Etna volcano in Sicily. But Vulcan is a figure of scorn: he is homely with a permanent limp for which he is mocked by the other gods; though married to Venus, he is outrageously cuckolded by his own brother Mars – for Botticelli’s interpretation of Olympian adultery, see his painting Venus and Mars.
More recently, there is the myth of Frankenstein and its terrors. Then there is the character of the Mad Scientist in movies, magazines and comic books whose depiction mirrors public distrust of what technology is all about.
For all that, in today’s world, even the environmentalists do not call for a return to more idyllic times; rather they want a technological solution to the current crisis – for example, The Green New Deal. Future oriented movements like Accelerationism also call upon free-market capitalism to push change harder and harder rather than wanting to retreat to an earlier bucolic time.
The only voluble animosity towards science and technology comes from Donald Trump and his Republican spear carriers but theirs is opportunistic and dishonest, not something they actually believe in.
Futurology goes back at least to the 19th century with Jules Verne and his marvelous tales of submarines and trips to the moon. H.G. Wells too left an impressive body of work dealing with challenges that might be in the offing. In a different vein, there are the writings of Teilhard de Chardin, whose noosphere is a predictor of where the world wide web and social media might be taking us – one unified super-mind. In yet another style, there are the books of the Tofflers from the 1970s, such as Future Shock, which among other things dealt with humanity’s struggle to cope with the endless change to daily life fueled by technology, change at such a speed as to make the present never quite real.
For leading technologist and futurologist Ray Kurzweil, for the Accelerationists and for most others, the vector of technological change has been free-market capitalism. Another vehicle of technological progress, to some a most important one, is warfare. Violence between groups is not new to our species. Indeed, anthropologists point out that inter-group aggression is also characteristic of our closest relatives, the chimpanzees – so all this likely goes way back to our common ancestor. The evolutionary benefit of such violence is a topic of debate and research among social scientists. The simplest and most simple-minded explanation is that the more fit, surviving males had access to more females and so more offspring. One measure of the evolutionary importance of fighting among males for reproductive success is the relative size of males and females. In elephant seals, where the males stage mammoth fights for the right to mate, the ratio is 3.33 to 1.0; in humans it is roughly 1.15 to 1.0 – this modest ratio implies that the simple-minded link between warfare and reproductive success cannot be the whole story.
Historically, the practice of war has hewn closely to developments in technology. And warfare, in turn, has made demands on technology. Indeed, even men of genius like Archimedes and Leonardo da Vinci developed weapons systems. However, the relationship between matters military and technology became almost symbiotic with WWII. Technological feats such as nuclear power, rockets, missiles, jet planes and the digital computer are all associated with the war efforts of the different powers of that conflict. Certainly, the fundamental research and engineering behind these achievements was well underway in the 1930s, but the war efforts determined priorities – and thus which areas of technology received resources and funding – thereby creating remarkable concentrations of brilliant scientific talent. The Manhattan Project itself is studied as a model of large scale R&D; furthermore, the industrial organization of the war period and military operations such as countering submarine warfare gave rise to a new mathematical discipline, aptly called Operations Research, which is now taught in Business Schools under the name Management Science.
In his masterful treatise War in the Age of Intelligent Machines (1991), Manuel DeLanda summarizes it thusly: “The war … forged new bonds between the military and scientific communities. Never before had science been applied at so grand a scale to such a variety of warfare problems.”
Since WWII we have been in a “relatively” peaceful period. But the technological surge continues. Perhaps we are just coasting on the momentum of the military R&D that followed WWII – the internet, GPS systems, Artificial Intelligence, etc. However, military funding might be skewing technological progress today in less fruitful directions than capitalism or science-as-usual itself would. Perhaps this is why post WWII technological progress has fueled the growth of paramilitary surveillance organizations such as the CIA and NSA and perfected drones rather than addressing the environmental crisis.
Moreover, these new technologies are transforming capitalism itself: the internet and social media and big data have given rise to surveillance capitalism, the subject of a recent book by Harvard Professor Emerita Shoshana Zuboff, The Age of Surveillance Capitalism: our personal behavioral data are amassed by Alexa, Siri, Google, Facebook et al., analyzed and sold for targeted advertising and other feeds to guide us in our lives; this is only going to get worse as the internet of things puts sensing and listening devices throughout the home. The 18th Century Utilitarian philosopher Jeremy Bentham promoted the idea of the panopticon, a prison structured so that the inmates would be under constant surveillance by unseen guards – click HERE . To update a metaphor from French post-modernist philosopher Michel Foucault, with surveillance technology we have created our own panopticon, one in which we dwell quietly and willingly as our every keystroke, every move is observed. An example: as one researches work on machine intelligence on the internet, Amazon drops ads for books on the topic (e.g. The Sentient Machine) onto one’s Facebook page!
The futurologists and Accelerationists, like many fundamentalist Christians, await the coming of the new human condition – for fundamentalists this will happen at the Second Coming; for the others the analog of the Second Coming is the singularity – the moment in time when machine intelligence surpasses human intelligence.
In Mathematics, a singularity occurs at a point that is dramatically different from those around it. John von Neumann, a mathematician and computer science pioneer (who worked on the Manhattan Project), used this mathematical term metaphorically: “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” For von Neumann, the singularity will be the moment when “technological progress will become incomprehensibly rapid and complicated.” Like von Neumann, Alan Turing was a mathematician and a computer science pioneer; famous for his work on breaking the German Enigma code during WWII, he is the subject of plays, books and movies. In 1951, Turing wrote “once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control … .” The term singularity was then used by Vernor Vinge in an article in Omni Magazine in 1983, a piece that develops von Neumann’s and Turing’s remarks further: “We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.” The concept of the singularity was brought into the mainstream by the work of Ray Kurzweil with his book entitled The Singularity Is Near (2005).
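For readers who want the mathematical picture, a standard textbook illustration (ours, not von Neumann’s) is the function f(x) = 1/x:

```latex
% f(x) = 1/x is perfectly smooth at every x \neq 0,
% but at x = 0 it blows up and no finite value can be assigned:
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty, \qquad \lim_{x \to 0^{-}} f(x) = -\infty .
```

Everywhere else the function is tame; at the singular point its behavior is dramatically different – which is exactly the metaphor being borrowed.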
In his writings, Kurzweil emphasizes how much technological development is growing exponentially. The most famous example of exponential technological growth is Moore’s Law: in 1975, Gordon E. Moore, a founder of INTEL, noted that the number of transistors on a microchip was doubling every two years even as the cost was being halved – and that this was likely to continue. Amazingly, this prediction has held true into the 21st Century and the number of transistors on an integrated circuit has gone from 5 thousand to 1 billion: for a graph, click HERE . Another example of exponential growth is given by compound interest: at 10% compounded annually, your money will more than double in 8 years, more than quadruple in 15 years and so on.
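The arithmetic behind both examples is easy to verify; here is a throwaway Python sketch (the 5 thousand transistor starting point and the 10% rate are the figures quoted above, and the round numbers are only illustrative):

```python
# Moore's Law cadence: start from 5,000 transistors in 1975 and
# double every two years until passing the 1 billion mark.
transistors, year = 5_000, 1975
while transistors < 1_000_000_000:
    transistors *= 2
    year += 2
print(year)  # 18 doublings take us to 2011 -- within the 21st Century

# Compound interest: at 10% per year, find the first year the balance doubles.
balance, years = 1.0, 0
while balance < 2.0:
    balance *= 1.10
    years += 1
print(years)  # 8 years (1.1 ** 8 is about 2.14)
```

Extending the second loop shows the quadrupling point as well: the balance passes 4.0 around year 15 (1.1 ** 15 ≈ 4.18).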
Kurzweil argues that exponential growth also applies to many other areas, indeed to technology as a whole. Here his thinking is reminiscent of that of the French post-structuralist accelerationists Deleuze and Guattari who also view humanity-cum-technology as a grand bio-physical evolutionary process. To make his point, Kurzweil employs compelling charts and graphs to illustrate that growth is indeed exponential (click HERE ); because of this the future is getting closer all the time – advances that once would have been the stuff of science fiction can now be expected in a decade or two. So when the first French post-structuralist, post-modern philosophers began calling for an increase in the speed of technological change to right society’s ills in the early 1970s, the acceleration had already begun!
But what will happen as we go past the technological singularity? Mystère. More to come.

Accelerationism II

Accelerationism is a philosophical movement that emerged from the work of late 20th century disillusioned Marxist-oriented French philosophers confronted with the realization that capitalism can neither be controlled by current political institutions nor supplanted by the long-awaited revolution. For centuries now, the driving force of modernity has been capitalism, the take-no-prisoners social-economic system that produces ever faster technological progress with dramatic physical and social side-effects: the individual is disoriented; social structure is weakened; the present yields constantly to the onrushing future; “the center cannot hold.” However, for the accelerationists, the response is not to slow things down and return to a pre-capitalist past but rather to push capitalism to quicken the pace of progress so that a technological singularity can be reached – the point where machine intelligence surpasses human intelligence and begins to spark its own development and that of everything else, at machine speed as opposed to the clumsy pace of development today. This goal will not be reached if human events or natural disasters dictate otherwise; speed is of the essence.

Nick Land, then a lecturer in Continental Philosophy at the University of Warwick in the UK, picked up on the work in France and published an accelerationist landmark in 1992, The Thirst for Annihilation: Georges Bataille and Virulent Nihilism. Land builds on the work of the French eroticist writer Georges Bataille and emphasizes that Accelerationism does not necessarily predict a “happy ending” for humanity: all is proceeding nihilistically, without direction or value; humanity may be but a cog in a planetary process of Spaceship Earth. This sets Accelerationism apart from Marxism, Adventism, Mormonism and Futurism – all optimistic, forward-looking world views.

Land pushed beyond the boundaries of academic life and methodology. In 1995, he and colleague Sadie Plant founded the Cybernetic Culture Research Unit (CCRU), which became an intellectual warren of forward thinking young people – exploring themes such as “cyberfeminism” and “libidinal-materialist Deleuzian thinking.” Though disbanded by 2003, alums of the group have stayed the course and publish regularly in the present day accelerationist literature. In fact, a collection of Land’s own writings has been published under the title Fanged Noumena and today, from his aerie in Shanghai, he comments on things via Twitter. (In Kant’s philosophy, noumena, as opposed to phenomena, are the underlying essences of things to which the human mind does not have direct access.)

Accelerationists have much in common with the Futurist movement: they expect the convergence of computer technology and medicine to bring us into the “bionic age” where a physical merge of man and robot can begin with chip-implantation, gene manipulation and much more. Their literature of choice is dystopian science-fiction, particularly the cyberpunk subgenre: William Gibson’s pioneering Neuromancer has the status of scripture; Rudy Rucker’s thoughtful The Ware Tetralogy is required reading and Richard Morgan’s ferocious Market Forces is considered a minor masterpiece.

Accelerationism is composed today of multiple branches.

Unconditional Accelerationism (aka U/Acc) is the most free-form, the most indifferent to politics. It celebrates modernity and the wild ride we are on. It tempers its nihilism with a certain philosophical playfulness and its mantra, if it had one, would be “do your own thing”!

Left Accelerationism (aka L/Acc) harkens back to Marx as precursor: indeed, Marx did not call for a return to the past but rather claimed that capitalism had to move society further along until it had created the tools – scientific, industrial, organizational – needed for the new centralized communist economy. Even Lenin wrote (in his 1918 text “Left-Wing” Childishness)

    Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science.

So Lenin certainly realized that Holy Russia in 1917 was nowhere near the level of industrialization and organization necessary for a Marxist revolution, but plunge ahead he did. Maybe that venerable conspiracy theory in which Lenin was transported back to Russia from Switzerland by the Germans in order to get the Russians out of WWI has some truth to it! Indeed, Lenin was calling for an end to the war even before returning; with the October Revolution, and still in the month of October, Lenin proposed an immediate withdrawal of Russia from the war, which was followed the next month by an armistice between Soviet Russia and the Central Powers. All this freed up German and Austrian men and resources for the Western Front.

An important contribution to L/Acc is the paper of Alex Williams and Nick Srnicek (Manifesto for an Accelerationist Politics, 2013) in which they argue that “accelerationist politics seeks to preserve the gains of late capitalism while going further than its value system, governance structures, and mass pathologies will allow.” Challenging the conceit that capitalism is the only system able to generate technological change at a fast enough speed, they write “Our technological development is being suppressed by capitalism, as much as it has been unleashed. Accelerationism is the basic belief that these capacities can and should be let loose by moving beyond the limitations imposed by capitalist society.” They dare to boldly go beyond earthbound considerations, asserting that capitalism is not able to realize the opening provided by space travel nor can it pursue “the quest of Homo Sapiens towards expansion beyond the limitations of the earth and our immediate bodily forms.” The Left accelerationists want politics and the acceleration, both, to be liberated from capitalism.

Right Accelerationism (aka R/Acc) can claim Nick Land as one of its own – he dismisses L/Acc as warmed over socialism. In his frank, libertarian essay, The Dark Enlightenment (click HERE), Land broaches the difficult subject of Human Bio-Diversity (HBD) with its grim interest in biological differences among human population groups and potential eugenic implications. But Land’s interest is not frivolous and he is dealing with issues that will have to be encountered as biology, medicine and technology continue to merge and as the cost of bionic enhancements drives a wedge between social classes and racial groups.

This interest of the accelerationists in capitalism brings up a “chicken or egg” problem: Which comes first – democratic political institutions or free market capitalism?

Many (among them the L/Acc) would likely say that democracy has been necessary for capitalism to develop, having in mind the Holland of the Dutch Republic with its Tulip Bubble, the England of the Glorious Revolution of 1688 which established the power of parliament over the purse, and the US of the Founding Fathers. However, 20th Century conservative thinkers such as Friedrich Hayek and Milton Friedman argued that free markets are a necessary precondition for democracy. Indeed, the case can be made that even the democracy of Athens and other Greek city states was made possible by the invention of coinage by the neighboring Lydians of Midas and Croesus fame: currency led to a democratic society built around the agora/marketplace and commerce rather than the palace and tribute.

In The Dark Enlightenment, Land also pushes the thinking of Hayek and Friedman further and argues that democracy is a parasite on capitalism: with time, democratic government contributes to an ever growing and ever more corrupt state apparatus which is inimical to capitalism and its accelerationist mission. In fact, Land and other accelerationists put forth the thesis that societies like China and Singapore provide a better platform for the acceleration required of late capitalism: getting politics out of everyday life is liberating – if the state is well run and essential services are provided efficiently, citizens are free to go about the important business of life.

An historical example of capitalism in autocratic societies is provided by the German and Austro-Hungarian empires of the half century leading up to WWI: it was in this world that the link was made between basic scientific research (notably at universities) and industrial development, a link that continues to be a critical source of new technologies (the internet is an example). In this period, the modern chemical and pharmaceutical industries were created (Bayer aspirin and all that); the automobile was pioneered by Karl Benz’s internal combustion engine and steam power was challenged by Rudolf Diesel’s compression-ignition engine. Add the mathematics (Cantor and new infinities, Riemann and new geometries), the physics (Hertz and radio waves, Planck and quantum mechanics, Einstein and relativity), the early Nobel prizes in medicine garnered by Koch and Ehrlich (two heroes of Paul de Kruif’s classic book Microbe Hunters), and the triumphant music (Brahms, Wagner, Bruckner, Mahler). Certainly this was a golden age for progress, an example of how capitalism and technology can thrive in autocratic societies.

Starkly, we are now in a situation reminiscent of the first quarter of the 20th Century – two branches of capitalism in conflict, the one led by liberal democracies, the other by autocratic states (this time China and Singapore instead of Germany and Austria). For Land and his school, the question is which model of capitalism is better positioned to further the acceleration; for them and the rest of us, the question is how to avoid a replay of the Guns of August 1914, all the pieces being ominously in place.

The Constitution – then and now

In the US, the Constitution plays the role of sacred scripture and the word unconstitutional has the force of a curse. The origin story of this document begins in Philadelphia in 1787 with the Constitutional Convention. Jefferson and Adams, then ministers to France and England respectively, did not attend; Hamilton and Franklin did; Washington presided. It was James Madison who took the lead and addressed the problem of creating a strong central government that would not turn autocratic. Indeed, Madison was a keen reader of the Roman historian Tacitus, who pitilessly described the transformation of Roman Senators into sniveling courtiers as the Roman Republic became the Roman Empire. Madison also drew on ideas of the Enlightenment philosopher Montesquieu and, in the Federalist Papers, he refined Montesquieu’s “separation of powers” and enunciated the principle of “checks and balances.”

A balance between large and small states was achieved by means of the Connecticut Compromise: a bicameral legislature composed of the Senate and the House of Representatives. As a buffer against “mob rule,” the Senators would be appointed by the state legislatures. However, the House created the problem of computing each state’s population for the purpose of determining representation. The resulting Three-Fifths Compromise stipulated that three-fifths of the enslaved population of a state would count toward the state’s total population. This, in turn, created the need for an electoral college to elect the president: electors could be apportioned using the three-fifths count, whereas a direct popular vote could not be, since enslaved African-Americans would not each cast three-fifths of a vote!

In September 1787, a modest four page document (without mention of the word Democracy, without a Bill of Rights, without provision for judicial review but with guidelines for impeachment) was submitted to the states; upon ratification the new Congress was seated and George Washington became President in the spring of 1789.

While the Constitution is revered today, it is not without its critics – it makes it too hard to represent the will of the people to the point where the American electorate is one of the most indifferent in the developed world (26th out of 32 in the OECD, the bottom 20%). Simply put, Americans don’t vote!!

For example, the Constitution provides for an Amendment process that requires ratification by 3/4ths of the states. Today the vestigial Electoral College makes a vote for president in Wyoming worth nearly twice one in Delaware: both states have 3 electors and Delaware’s population is almost twice Wyoming’s. If you do more math, you’ll find that a presidential vote in Wyoming is worth some 3.5 times one in Brooklyn and nearly 4 times one in California. Change would require an amendment; however, any 13 states can block one, and the 13 smallest states, with barely 4% of the population, would not find it in their interest to alter the current system.
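The relative weight of a vote is simple to compute: divide each state’s population by its number of electors and compare. A short Python sketch, using rounded population figures (illustrative assumptions; the exact numbers shift with each census):

```python
# Approximate resident populations (rounded) and electoral votes.
# These figures are illustrative assumptions, not official census data.
states = {
    "Wyoming":    (580_000, 3),
    "Delaware":   (990_000, 3),
    "California": (39_500_000, 55),
}

# Residents per elector: the fewer residents behind each elector,
# the more electoral weight each individual vote carries.
per_elector = {name: pop / ev for name, (pop, ev) in states.items()}

for name in ("Delaware", "California"):
    ratio = per_elector[name] / per_elector["Wyoming"]
    print(f"A Wyoming vote is worth about {ratio:.1f}x a vote in {name}")
```

With these inputs the ratios come out around 1.7 and 3.7, close to the figures cited above.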

Another issue is term limits for members of Congress, something supported by the voters. It can be in a party’s interest to have senators and representatives with seniority so they can accede to powerful committee chairmanships; this is the old Dixiecrat strategy that kept Strom Thurmond in the Senate until he was over 100 years old – but then the root of the word “senator” is the Latin “senex” which does mean “old man.” The Constitution, however, does provide for a second way to pass an amendment: 34 state legislatures would have to vote to hold a constitutional convention; this method has never been used successfully, but a feisty group “U.S. Term Limits” is trying just that.

The Constitution leaves running elections to the states and today we see widespread voter suppression, gerrymandering, etc. The lack of federal technical standards gave us the spectacle of “hanging chads” in Florida in the 2000 presidential election and has people rightly concerned about foreign interference in the 2020 election.

Judicial review came about by fiat in 1803 when John Marshall’s Supreme Court ruled a section of an act of Congress to be unconstitutional: an action itself rather extra-constitutional given that no such authority was set down in the Constitution! Today, any law passed has to go through an interminable legal process. With the Supreme Court politicized the way it is, the most crucial decisions are thus regularly made by five unelected, high church (four Catholics, one Catholic turned Episcopalian), male, ideologically conservative, elitist, lifetime appointees of Republican presidents.

The founding fathers did not imagine how powerful the judicial branch of government would become; in fact, Hamilton himself provided assurances in his influential tract Federalist 78 that the judiciary would always be the weakest partner. However, a recent (2008) malign example of how the Constitution does not provide protection against usurpation of power by the Supreme Court came in District of Columbia v. Heller, where over two hundred years of common understanding were jettisoned when the reference to “militia” in the 2nd Amendment was declared irrelevant: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” What makes it particularly outrageous is that this interpretation was put forth as an example of “originalism,” where the semantics of the late 18th Century are to be applied to the text of the amendment; quite the opposite is true: Madison’s first draft made it clear that the military connection was the motivating one, to the point where he added an exemption for pacifist Quakers:

    “The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.”

Note too that Madison implies in the original text and in the shorter final text as well that “the right to bear arms” is a collective military “right of the people” rather than an individual right to own firearms – one doesn’t “bear arms” to go duck hunting, not even in the 18th Century. As a result of the Court’s wordplay, today American children go to school in fear; the repeated calls for “thoughts and prayers” have become a national ritual – a sick form of human sacrifice, a reenactment of King Herod’s Massacre of the Innocents.

Furthermore, we now have an imperial presidency; the Legislative Branch is still separate but no longer equal: the Constitution gives only Congress the right to levy tariffs or declare war but, for some administrations now, the president imposes tariffs, sends troops off to endless wars, and governs largely by executive order. All “justified” by the need for efficient decision-making – but, as Tacitus warned, this is what led to the end of the Roman Republic.

Accelerationism I

The discipline of Philosophy has been part of Western Culture for two and a half millennia now, from the time of the rise of the Greek city states to the present day. Interestingly, a new philosophical system often arises in anticipation of new directions for society and for history. Thus the Stoicism of Zeno and Epictetus prepared the elite of the Mediterranean world for the emerging Roman imperium with its wealth and with its centralization of political and military power. The philosophy of St. Augustine locked Western Christianity into a stern theology which served as an anchor throughout the Middle Ages and then as a guide for reformers Wycliffe, Luther and Calvin. The philosopher Descartes defined the scientific method and the scientific revolution followed in Europe. Hegel and Marx applied dialectical thinking to human history and economics as the industrial revolution created class warfare between labor and capital. The logical philosophy of Gottlob Frege and Bertrand Russell set the stage for the work of Alan Turing and thence the computer software revolution.

Existentialism (with its rich literary culture of novels and plays, its cafés, its subterranean jazz clubs, its Gauloises cigarettes) steeled people for life in a Europe made absurd by two world wars and it paved the way for second wave feminism: Simone de Beauvoir’s magisterial work of 1949 The Second Sex (Le Deuxième Sexe) provided that existentialist rallying cry for women to take charge of their own lives: “One is not born a woman; one becomes a woman.” (On ne naît pas femme, on le devient.)

By the 1960s, however, French intellectual life was dominated by structuralism, a social science methodology which looks at society as very much a static field that is built on the persistent forms that characterize it. Even Marxist philosophers like Louis Althusser were now labeled structuralists. To some extent, structuralism’s influence was due to the brilliant writing of its practitioners, e.g. semiologist Roland Barthes and anthropologist Claude Lévi-Strauss: brilliance was certainly required to interest readers in the mathematical structure of kinship systems such as matrilateral cross-cousin marriage – an algorithm to maximize genetic diversity employed by small population groups.

Today the intellectual movement which most resembles past philosophical beacons of the future is known as Accelerationism. As a philosophy, Accelerationism has its roots in France in the period after the May ’68 student and worker uprising. That uprising led to barricades and fighting in the streets of Paris and to the largest general strike in the history of Europe. All of which brought the government to the bargaining table; the students and workers counted on the left-wing leadership of labor unions and Marxist oriented political parties to strike a deal for freedom and radical social progress that would lead to a post-capitalist world. Instead, this “leadership” was interested in more seats in parliament and incremental improvements – not any truly revolutionary change in society.

The take-away from May ’68 for Gilles Deleuze, Félix Guattari, Jean-François Lyotard and other post-structuralist French intellectuals was the realization that capitalism proved itself once again too powerful, too flexible, too unstoppable; its dominance could not be challenged by society in its present form.

The paradoxical response in the 1970s then was to call for an acceleration of the development of technologies and other forces of capitalist progress to bring society as rapidly as possible to a new place. In their 1972 work Anti-Oedipus, Deleuze and Guattari put it this way: “Not to withdraw from the process, but to go further, to ‘accelerate the process’, as Nietzsche put it: in this matter, the truth is that we haven’t seen anything yet.” This then is the fundamental tenet of Accelerationism – push technology to get us to the point where it enables us to get out from under current society’s Iron Heel, something we cannot do now. What kind of technologies will be required for this or best suited for this and how this new world will emerge from them are, naturally, core topics of debate. One much discussed and promising (also menacing) technology is Artificial Intelligence.

Deleuze and Guattari extend the notion of the Oedipus complex beyond the nuclear family and develop schizoanalysis to account for the way modern society induces a form of schizophrenia which helps the power structure maintain the steady biological/sociological/psychological march of modern capitalism. Their Anti-Oedipus presents a truly imaginative and innovative way of looking at the world, a poetic mixture of insights fueled by ideas from myriad diverse sources; as an example, they even turn to the Americans Ray Bradbury, Jack Kerouac, Allen Ginsberg, Nicholas Ray and Henry Miller and to immigrants to America Marshall McLuhan, Charles Chaplin, Wilhelm Reich and Herbert Marcuse.

In Libidinal Economy (1974), Lyotard describes events as primary processes of the human libido – again “Freud on steroids.” It is Lyotard who brought the term post-modern into philosophy; it has since been applied to other post-structuralists such as Michel Foucault and Jacques Derrida.

Though boldly original, Accelerationism is very much a child of continental thinking in the great European philosophical tradition, a complex modern line of thought with its own themes and conflicts: what makes it most conflicted is its schizophrenic love-hate relation to capitalism; what makes it most contemporary is its attention to the role played by new technologies; what makes it most unsettling is its nihilism, its position that there is no meaning or purpose to human life; what makes it most radical is its displacement of humanity from center-stage and its abandonment of that ancient cornerstone of Greek philosophy: “Man is the measure of all things.”

By the 1980s, the post-structuralist vision of a society in thrall to capitalism was proving prophetic. What with Thatcher, Reagan, supply-side economics, the surge of the income gap, dramatic reductions in taxes (income, corporate and estate), the twilight of the labor unions and the fall of the Berlin Wall: a stronger, more flexible, neo-liberal capitalism was emerging – a globalized post-industrial capitalism, a financial capitalism, deregulated, risk welcoming, tax avoiding, off-shoring, outsourcing, … . In a victory lap in 1989, political science professor Francis Fukuyama published The End of History?; in this widely acclaimed article, Fukuyama announced that the end-point of history had been reached: market-based Western liberal democracy was the final form of human government – thus turning Marx over on his head, much the way Marx had turned Hegel over on his head! So “over” was Marxism by the 1980s that Marxist stalwart André Gorz (friend of Sartre, co-founder of Le Nouvel Observateur) declared in his Adieux au prolétariat that the proletariat was no longer the vanguard revolutionary class.

With the end of the Soviet Union in 1991, in Western intellectual circles, Karl Marx and his theory of the “dictatorship of the proletariat” gave way to the Austrian-American economist Joseph Schumpeter and his theory of capitalism’s “creative destruction”; this formula captures the churning of capitalism which systematically creates new industries and new social institutions that replace the old – e.g. Sears by Amazon, an America of farmers by an America of city dwellers. Marx argued that capitalism’s contradictions and failures would lead to its demise; Schumpeter, closer to the Accelerationists, argued that capitalism has more to fear from its triumphs: ineluctably the colossal success of capitalism hollows out the social institutions and mores which historically nurtured capitalism such as the nuclear family, church-going and the Protestant Ethic itself. Look at Western Europe today with its precipitously low birth-rate where capitalism is triumphant but where church attendance is reduced to three events: “hatch, match and dispatch,” to put it the playful way Anglicans do. But all this is not all bad from the point of view of Accelerationism – capitalism triumphant should better serve to “accelerate the process.”

At this point entering the 1990s, we have a post-Marxist, post-structuralist school of Parisian philosophical thought that is the preserve of professors, researchers, cultural critics and writers. In fact at that point in time, the movement (such as it was) was simply considered part of post-modernism and was not yet known as Accelerationism.

However, in its current form, Accelerationism has moved much closer to the futurist mainstream. Science fiction is taken very seriously as a source for insights into where things might be headed. In fact, the term Accelerationist itself originated in a 1967 sci-fi novel, Lord of Light by Roger Zelazny, where a group of revolutionaries wanted to take their society “to a higher level” through technology: Zelazny called them the “accelerationists.” But the name was not applied to the movement until much more recently, when it was so christened by Benjamin Noys, notably in his 2014 work Malign Velocities: Accelerationism and Capitalism.

In today’s world, the work of futurist writer Ray Kurzweil and the predictions of visionary Yuval Harari intersect the Accelerationist literature in the discussion of the transformation of human life that is coming at us. So how did Accelerationism get out of the salons of Paris and become part of the futurist avant-garde of the English speaking world and even a darling of the Twitterati? Affaire à suivre, more to come.

Ranked Choice Voting

In 2016, the State of Maine voted to apply ranked choice voting in congressional and gubernatorial elections, reaffirmed that choice in a 2018 referendum and has since extended the process to the allocation of its electoral college votes. Recently, the New York Times ran an editorial calling for the Empire State to consider ranked choice voting; in Massachusetts, there is a drive to collect signatures to have a referendum on this on the 2020 ballot. Ranked choice voting is used effectively in American cities such as Minneapolis and Cambridge and in countries such as Australia and Ireland. So what is it exactly? Mystère.

First let us discuss what it is not. In the UK and the US, elections are decided (with some exceptions) by plurality: the candidate who polls the largest number of votes is the winner even if this is not a majority. Although simple to administer, this can lead to unusual results. By way of example, in Maine in 2010, Republican Paul LePage was elected governor with 38% of the vote. He beat out the Independent candidate who won 36% and the Democratic candidate who won 19%.

One solution to the problems posed by plurality voting is to hold the vote in multiple rounds: if no one wins an absolute majority on the first ballot, then there must be more than two candidates and the candidate with the least votes, Z say, is eliminated and everybody votes again; this time Z’s voters will shift their votes to their second choice among the candidates. If no one gets a majority this time, repeat the process. Eventually, someone has to get a true majority.

Ranked choice voting is also known as instant-runoff voting: it emulates runoff elections but in a single round of balloting. If there are only two candidates to begin with, nothing changes – somebody will get a majority. Suppose there are 3 candidates – A, B and Z; then, on the ballot, each voter lists the 3 candidates in the order of that voter’s preference. First, a count is made of the number of first place votes each candidate received; if for one candidate that number is a majority, that candidate wins outright. Otherwise, the candidate with the fewest first place votes, say Z, is eliminated; now we add to A’s first place total the number of ballots that ranked Z first but listed A as second choice, and similarly for B. Now, except in the case of a tie, either A or B will have a clear majority and will be declared the winner. This gives the same result that staging a runoff between A and B would have yielded but in one trip to the voting booth, where the voter ranks the candidates A, B, Z on the ballot rather than choosing only one.
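The count-eliminate-transfer procedure described above can be sketched in a few lines of code. This is a minimal illustration only, not any state’s official tabulation rules – real election law adds provisions for ties, incomplete ballots and recounts; here every ballot is assumed to be a full ranking of the candidates.

```python
from collections import Counter

def instant_runoff(ballots):
    """Winner under instant-runoff voting.
    Each ballot is a list of candidates, most preferred first."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Credit each ballot to its highest-ranked surviving candidate.
        tally = Counter({c: 0 for c in remaining})
        tally.update(next(c for c in ballot if c in remaining)
                     for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()):   # outright majority: done
            return leader
        # Otherwise eliminate the candidate with the fewest first-place votes.
        remaining.remove(min(tally, key=tally.get))
```

With made-up numbers echoing the A, B, Z scenario – say 38 ballots rank A first, 36 rank B first and 26 rank Z first – no one has a majority, Z is eliminated, and the outcome then turns entirely on whom the Z voters listed second.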

There are other positive side-effects to ranked choice voting. For one thing, voter turnout goes up; another thing is that campaigns are less nasty and partisan – you want your opponents’ supporters to list you second on their ballots! One can also see how this voting system makes good sense for primaries where there are often multiple candidates; for example, with the recent Democratic field of presidential candidates, ranked choice voting would have given the voter a chance to express his or her opinion and rank a marginal candidate with good ideas first without throwing that vote away.

After the 2010 debacle in Maine (LePage proved a most divisive and most unpopular governor), the Downeasters switched to ranked choice voting. In 2018 in one congressional district, no candidate for the House of Representatives gathered an absolute majority on the first round, but a candidate who received fewer first place votes on that round won on the second round when he caught up and surged ahead on the strength of the voters who made him their second choice. Naturally, all this was challenged by the losing side, but the challenge failed in court. For elections, the U.S. Constitution leaves implementation to the states for them to carry out in the manner they deem fit – subject to Congressional oversight but not to judicial oversight. Per Section 4 of Article 1: “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, …”

Ranked voting systems are not new and have been a serious topic of interest to social scientists and mathematicians for a long time now – there is something mathematically elegant about the way you can simulate a sequence of runoffs in one ballot. Among them are the 18th Century French Enlightenment thinker, the Marquis de Condorcet, and the 19th Century English mathematician, Charles Lutwidge Dodgson, author of Dodgson’s Method for analyzing election results. More recently, there was the work of 20th Century mathematical economist Kenneth Arrow. For this and other efforts, Arrow was awarded a Nobel Prize; Condorcet had a street named for him in Paris; however, Dodgson had to take the pen name Lewis Carroll and then proceed to write Alice in Wonderland to rescue himself from the obscurity that usually awaits mathematicians.
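The same ranked ballots support other analyses. Condorcet’s criterion, for instance, asks whether some candidate would beat every rival in a one-on-one contest. Here is a minimal sketch of that check (again assuming every ballot ranks all the candidates):

```python
def condorcet_winner(ballots):
    """Candidate who beats every other head-to-head, or None if none exists.
    Assumes each ballot is a complete ranking of the candidates."""
    candidates = {c for ballot in ballots for c in ballot}
    for c in candidates:
        beats_all = all(
            # c wins the pairing if a majority of ballots rank c above d
            2 * sum(b.index(c) < b.index(d) for b in ballots) > len(ballots)
            for d in candidates if d != c)
        if beats_all:
            return c
    return None
```

A Condorcet winner need not exist – three voters with preferences A>B>C, B>C>A and C>A>B produce a cycle – and the impossibility of squaring every such fairness criterion at once is exactly what Arrow’s celebrated theorem makes precise.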

The Third Person VIII: The Fall of Rome

St Augustine of Hippo (354-430) was the last great intellectual figure of Western Christianity in the Roman Empire. His writings on election and predestination, on original sin, on the theory of a just war and on the Trinity had a great influence on the medieval Church, in particular on St Thomas Aquinas; he also had a great influence on Protestant reformers such as John Wycliffe and John Calvin. Augustine himself was influenced by the 3rd century Greek philosopher Plotinus and the Neoplatonists, influenced to the point where he ascribed to them some awareness of the persons of the Trinity (Confessions VIII.3; City of God X.23).

After the sack of Rome in 410, Augustine wrote his Sermons on the Fall of Rome. In these episcopal lectures, he absolves Christians of any role in bringing about the event that plunged the Western branch of the Empire into the Dark Ages, laying all the blame on the wicked, wicked ways of the pagans. European historians have begged to differ, however. By way of example, in his masterpiece, both of English prose and of scholarship, The History of the Decline and Fall of the Roman Empire, the great 18th century English historian Edward Gibbon does indeed blame Christianity for weakening the fiber of the people, hastening the Fall of Rome.

In his work on the Trinity, Augustine followed the Nicene formulation and fulminated against the heretics known as Arians who denied the full divinity of Christ. Paradoxically, it was Arian missionaries who first reached many of the tribes of barbarians invading the Empire, among them the Vandals. The Vandal horde swept from Spain eastward along the North African coast and besieged St. Augustine’s bishopric of Hippo (today Annaba in Algeria). Augustine died during the siege and did not live to see the sack of the city.

By the time of St Augustine, in the Western Church the place of the Holy Spirit in the theology of the Holy Trinity was secured. But in popular culture, the role of the Holy Spirit was minor. Jesus and Mary were always front and center along with God the Father. To complicate matters, there emerged the magnificent doctrine of the Communion of Saints: the belief that all Christians, whether here on Earth, in Purgatory or in Heaven could communicate with one another through prayer. Thus, the faithful could pray to the many saints and martyrs who had already reached Heaven and the latter could intercede with God Himself for those who venerated them.

This is the internet and social media prefigured. The doctrine has other modern echoes in Jung’s Collective Unconscious (an inherited shared store of beliefs and instincts) and in Teilhard de Chardin’s noosphere (a collective organism of mind).

The origins of this doctrine are a puzzle for scholars. Indeed, even the first known reference in Latin to the “Communio sanctorum” is ascribed to Nicetas of Remesiana (ca. 335–414), a bishop from an outpost of the Empire on the Danube in modern-day Serbia who included it in his Instructions for Candidates for Baptism. Eventually, though, the doctrine made its way into the Greek and Latin versions of the Apostles Creed.

The wording “I believe … in the Communion of Saints” in the Apostles Creed is now a bedrock statement of Christian belief. However, this doctrine was not part of the Old Roman Creed, the earlier and shorter version of the Apostles Creed that dates from the second and third centuries. It also does not appear in the Nicene Creed. The earliest references to the Apostles Creed itself date from 390 and the earliest extant texts referencing the Communion of Saints are later still.

One school of thought is that the doctrine evolved from St Paul’s teaching that Christ and His Christians form a single mystical body (Romans 12.4-13, 1 Corinthians 12). Another candidate is this passage in the Book of Revelation 5.8 where the prayers of the faithful are collected in Heaven:

      And when he had taken it, the four living creatures and the twenty-four elders fell down before the Lamb. Each one had a harp and they were holding golden bowls full of incense, which are the prayers of God’s people.

For an illustration from the Book of Hours of the Duc de Berry of St John imagining this scene as he wrote the Book of Revelation on the Isle of Patmos, click HERE.

The naïve view is that the doctrine of the Communion of Saints came from the ground up, from nascent folk Christianity where it helped to wean new converts from their native polytheism. Indeed, as Christianity spread, canonization of saints was a mechanism for absorbing local religious traditions and for making local martyrs and saintly figures recognized members of the Church Triumphant.

With the doctrine of the Communion of Saints, the saints in heaven could intercede for individual Christians with the Godhead; devotion to them developed, complete with hagiographic literature and a rich iconography. Thus, the most celebrated works of Christian art depict saints. Moreover, special devotions have grown up around popular patron saints such as Anthony (patron of lost objects), Jude (patron of hopeless cases), Jean-François Régis (patron of lacemakers), Patrick (patron of an island nation), Joan of Arc (patron of a continental nation), … .

The Holy Spirit, on the other hand, pops up in paintings as a dove here and there, at best taking a minor part next to John the Baptist and Jesus of Nazareth or next to the Virgin Mary and the Angel Gabriel. This stands in contrast with the Shekinah, the Jewish precursor of the Holy Spirit, who plays an important role in the Kabbalah and Jewish mysticism.

There is one area of Christianity today, however, where the Holy Spirit is accorded due importance: Pentecostal and Charismatic churches; indeed, the very word Pentecostal is derived from the feast of Pentecost, when the Holy Spirit descended as tongues of fire and inspired the apostles to speak in tongues. In these churches, direct personal experience of God is reached through the Holy Spirit and His power to inspire prophecy and insight. In the New Testament, it is written that prophecy is in the domain of the Holy Spirit – to cite 2 Peter 1:21:

      For no prophecy ever came by the will of man: but men spake from God, being moved by the Holy Spirit.

Speaking of prophecy, the Holy Spirit is not mentioned in the Book of Revelation, which is most surprising since one would think that apocalyptic prophecy would naturally be associated with the Holy Spirit. For some, this is one more reason that the Council of Rome (382) under Pope St Damasus I should have thought twice before including the Book of Revelation in the canon. There are other reasons too.

Tertullian, the Father of Western Theology, is not a saint of the Catholic Church – he defended a Charismatic-Pentecostal approach to Christianity, Montanism, which was branded a heresy. This sect had three founders, Montanus and the two sibyls, Priscilla and Maximilla; the sibyls would prophesy when the Holy Spirit entered their bodies. Alas, there are no classical statues or Renaissance paintings honoring Priscilla and Maximilla; instead the Church treated them as “seductresses” who, according to Eusebius’ authoritative 4th century Church History, “left their husbands the moment they were filled with the spirit.” No wonder then that Eusebius is known as the Father of Church History.

While we have no masterpieces depicting Priscilla or Maximilla, for a painting of the Sibyl at Delphi, click HERE.

For Tertullian and the Montanists, the Holy Spirit was sent by God the Son to continue the revelation through prophecy. Though steeped in Greek rationalism, Tertullian insisted on the distinction between faith and reason, on the fact that faith required an extra magical step: “I believe because it is absurd” – bolder even than Pascal. He broke with the main body of the Church saying that the role given to the Holy Spirit was too narrow – a position shared by Pentecostal Christians today. In fact, the Holy Spirit is key to Pentecostalism where the faithful are inspired with “Holy Ghost fire” and become “drunk on the Holy Spirit.” Given that this is the only branch of Western Christianity that is growing now as the others recede, it looks as though Tertullian was insightful and should have been listened to more carefully. Perhaps now, almost two millennia later, it is a good time for the Church of Rome to bring up the subject of his canonization. Doing so would go some way toward restoring the Holy Spirit to a rightful place in Christianity, aligning the Holy Spirit’s role with the continuing importance of the Shekinah in the Jewish tradition and restoring the spirit of early Christianity.

The Third Person VII: The Established Religion

During Augustus’ reign as Emperor of the Roman Empire, the Pax Romana settled over the Mediterranean world – with the notable exception of Judea (Palestine, the Holy Land). After the beheading of John the Baptist and the Crucifixion of Jesus of Nazareth, unrest continued leading to the Jewish-Roman Wars (66-73, 115-117, 132-135), the destruction of the Temple in Jerusalem (70) and the forced exile of many Jews. Little wonder then that the early Gentile Christians disassociated themselves from Judaism and turned to Greek philosophical models to develop their new theology.

And with God and the Logos (the Word) of Platonism and Stoicism, the Greco-Roman intellectual world was in some sense “ready” for God the Father and God the Son. Indeed, the early Christians identified the Logos with the Christ. In the prologue of the Gospel of St. John in the King James Bible, verses 1 and 14 read

    In the beginning was the Word, and the Word was with God, and the Word was God.

    And the Word was made flesh, and dwelt among us, (and we beheld his glory, the glory as of the only begotten of the Father,) full of grace and truth.

Christians who undertook the task of explaining their new religion to the Greco-Roman world were known as apologists, from the Greek ἀπολογία meaning “speech in defence.” Thus, in following up on the Gospel of John, Justin Martyr (100-165), a most important 2nd century apologist, drew on Stoic doctrine to make Christian doctrine more approachable; in particular, he held that the Logos was present within God from eternity but emerged as a distinct actor only at the Creation – the Creation according to Genesis, that is. But while often referring to the Spirit, the Holy Spirit, the Divine Spirit and the Prophetic Spirit in his writings, Justin apparently never formulated a theory of the Trinity as such.

So from here, how did early Christians reach the elegant formulation of the doctrine of the Holy Trinity that is so much a part of Catholic, Protestant and Orthodox Christianity? Mystère.

The earliest surviving post-New Testament Christian writings that we have that include the Holy Spirit, the Father and the Son together in a trinity identify the Holy Spirit with Wisdom/Sophia. In fact, the first Christian writer known to use the term trinity was Theophilos of Antioch in about the year 170:

    the Trinity [Τριάδος], of God, and His Word, and His wisdom.

In his powerful Against Heresies, Irenaeus (130-202) takes the position that God the Son and God the Holy Spirit are co-eternal with God the Father:

    I have also largely demonstrated, that the Word, namely the Son, was always with the Father; and that Wisdom also, which is the Spirit, was present with Him, anterior to all creation,

In A Plea for Christians, the Athenian author Athenagoras (c. 133 – c. 190) wrote

    For, as we acknowledge a God, and a Son his Logos, and a Holy Spirit, united in essence, the Father, the Son, the Spirit, because the Son is the Intelligence, Reason, Wisdom of the Father, and the Spirit an effluence, as light from fire

Here the “Wisdom of the Father” has devolved onto God the Son and the Holy Spirit is described simply as emanating from the Father. On the one hand, it is tempting to dismiss this theological shift on the part of Athenagoras. After all, he is not considered the most consistent of writers when it comes to sophiology, matters of Wisdom. To quote Prof. Michel René Barnes:

    “Athenagoras has, scholars have noted, a confused sophiology: within the course of a few sentences he can apply the Wisdom of Prov. 8:22 to the Word and the Wisdom of Wisdom of Solomon 7:25 to the Holy Spirit.”

For the full text of Prof. Barnes’ interesting article, click HERE .

On the other hand, the view in A Plea for Christians took hold and going forward the Son of God, the Logos, was identified with Holy Wisdom; indeed, the greatest church of antiquity, the Hagia Sophia in Constantinople, was dedicated to God the Son and not to the Holy Spirit.

But Trinitarianism did not have the field to itself. For one thing, there was still Sabellianism where Father, Son and Holy Spirit were just “manners of speaking” about God. The fight against Sabellianism was led by Tertullian – Quintus Septimius Florens Tertullianus to his family and friends. He was the first writer to use the term Trinitas in Latin; he is considered the first great Western Christian theologian and he is known as the Father of the Latin Church. For Tertullian, a most egregious aspect of Sabellianism was that it implied that God the Father also suffered the physical torments of the cross, a heretical position known as patripassianism. Tertullian directly confronted this heresy in his work Contra Praxeas where he famously accused the eponymous target of his attack of “driving out the Holy Spirit and crucifying the Father”:

    Paracletum fugavit et patrem crucifixit

Tertullian developed a dual view of the Trinity distinguishing between the “ontological Trinity” of one single being with three “persons” (Father, Son, Holy Spirit) and the “economic Trinity” which distinguishes and ranks the three persons according to each One’s role in salvation: the Father sends the Son for our redemption and the Holy Spirit applies that redemption to us. In the ontological Trinity, there is only one divine substance (substantia), which the three persons share, so that monotheism is maintained. Here Tertullian is using philosophy to underpin theology: his “substantia” is a Latin translation of οὐσία (ousia), the term used by Greek philosophers. Interestingly, Tertullian himself was very aware of the threat of philosophy infiltrating theology and he famously asked “What has Athens to do with Jerusalem?”

The Roman empire of the early Christian era was a cauldron of competing philosophical and religious ideas. It was also a time of engineering and scientific achievement: the invention of waterproof cement made great aqueducts and great domes possible; the Ptolemaic system of astronomy provided algorithms for computing the movements of the spheres (the advance that Copernicus made didn’t change the results but simplified the computations); Diophantus of Alexandria is known as the Father of Algebra, … . The level of technology developed at Alexandria in the Roman period was not reached again until the late Renaissance (per the great Annales historian Fernand Braudel).

In the 3rd century, neo-Platonism emerged as an updated form of Greek philosophy – updated in that this development in Greek thought was influenced by relatively recent Greek thinkers such as the neo-Pythagoreans and Middle Platonists and likely by others such as the Hellenized Jewish writer Philo of Alexandria, the Gnostics and even the Christians.

The principal architect of neo-Platonism, Plotinus (204–270), developed a triad of the One, Intellect, and Soul, in which the latter two “proceed” from the One, and “are the One and not the One; they are the One because they are from it; they are not the One, because it endowed them with what they have while remaining by Itself” (Enneads, 85). All existence comes from the productive unity of these three. Plotinus describes the elements of the triad as three persons (hypostases), and describes their sameness using homoousios, a sharper way of saying “same substance.” From neo-Platonism came the concept of the hypostatic union, a meld of two into one which Trinitarians would employ to explain how Christ could be both God and man at the same time.

So at this point, the Trinitarian position had taken shape: very roughly put, the three Persons are different but they are co-eternal and share the same substance; the Son of God can be both God and man in a hypostatic union.

But Trinitarianism was still far away from a final victory. The issue of the dual nature of God the Son as both God and man continued to divide Christians. The most serious challenge to the Trinitarian view was mounted in Alexandria: the presbyter Arius (c. 250-c. 336) maintained that the Son of God had to be created by the Father at some point in time and so was not co-eternal with the Father nor was the Son of the same substance as the Father; a similar logic applied to the Holy Spirit. Arianism became widely followed, especially in the Eastern Greek Orthodox branch of the Church, and lingered for centuries; in more recent times, Isaac Newton professed Arianism in some of his religious writings – these heretical documents were kept under wraps by Newton’s heirs for centuries and were only rediscovered in 1936 when John Maynard Keynes purchased them at auction!

While the Empire generally enjoyed the Pax Romana, at the highest levels there were constant struggles for supreme power – in the end, who had the loyalty of the Roman army determined who would be the next Emperor. The story that has come down to us is that in 312, as Constantine was on his way to fight his last rival in the Western Empire, Maxentius, he looked up into the sky and saw a cross and the Greek words “Εν Τούτῳ Νίκα” (which becomes “In Hoc Signo Vinces” in Latin and “In this sign, you will conquer” in English). With his ensuing victory at the Battle of the Milvian Bridge, Constantine gained control over the Western Roman Empire. The following year, with the Edict of Milan, Christianity was no longer subject to persecution and would be looked upon benevolently by Constantine. For a painting of the cross in heaven and the sign in Greek by the School of Raphael, click HERE and zoom in to see the writing on the sign.

Consolidating the Eastern and Western branches of the Empire, Constantine became sole emperor in 324. Now that Christianity was a lawful religion of the Empire, it was important that it be more uniform in dogma and ritual and that highly divisive issues be resolved. To that end, in 325, Constantine convened a Council at Nicea (modern Iznik, Turkey) to sort out all the loose ends of the very diverse systems of belief that comprised Christianity at that time. One of the disagreements to settle was the ongoing conflict between Arianism and Trinitarianism.

Here the council came down on the side of the Trinitarians: God has one substance but three persons (hypostases); though these persons are distinct, they form one God and so are all co-eternal. There is a distinction of rank to be made: God the Son and God the Holy Spirit both proceed from God the Father. This position was formalized by the Council of Nicea and refined at the Council of Constantinople (381). In the meantime, with the Edict of Thessalonica in 380, Theodosius I officially made Christianity the state religion of the Empire.

Still disagreements continued even among the anti-Arians. There is the interesting example of Marcellus of Ancyra (Ankara in modern Turkey), an important participant in the Council of Nicea and a resolute opponent of the Arians; Marcellus developed a bold view wherein the Trinity was necessary for the Creation and for the Redemption but, at the end of days, the three aspects (πρόσωπα prosopa but not ὑποστάσεις hypostases, persons) of the Trinity would merge back together. Marcellus’ position has a scriptural basis in St Paul’s assertion in 1 Corinthians 15:28:

    … then the Son himself will be made subject to him [God] who put everything under him [the Son], so that God may be all in all.

This view also harkens back somewhat to Justin Martyr – in fact, writings now attributed to Marcellus were traditionally attributed to Justin Martyr! So, in Marcellus’ view, in the end Christ and the Holy Spirit will return into the Father, restoring the absolute unity of the Godhead. This line of thought opened Marcellus to the charge of Sabellianism; he also had the misfortune of having Eusebius, the Father of Church History, as an opponent and his orthodoxy was placed in doubt. For a tightly argued treatise on this illustrative chapter in Church History and for a tour of the dynamic world of 4th century Christian theologians, there is the in-depth study Contra Marcellum: Marcellus of Ancyra and Fourth-Century Theology by Joseph T. Lienhard S.J.

The original Nicene Creed of 325 as well as the updated version formulated at the Council of Constantinople of 381 had both the Holy Spirit and God the Son proceeding from God the Father. Whether the Holy Spirit proceeds from the Son as well as from the Father is a tough question for Trinitarianism; in Latin filioque means “and from the Son” and this phrase has been a source of great controversy in the Church. A scriptural justification for including the filioque in the Nicene Creed is found in John 20:22:

    And with that he [Jesus] breathed on them and said, “Receive the Holy Spirit …”

Is the filioque a demotion for the Holy Spirit vis-à-vis God the Son? Or is it simply a way of organizing the economic Trinity of Tertullian? In the late 6th century, Western churches added the term filioque to the Nicene Creed but the Greek churches did not follow suit; this lingering controversy was an important issue in the Great Schism of 1054 which led to the definitive and hostile breakup of the two major branches of Christendom. This schism created a fault line in Europe separating Orthodox from Roman Christianity that has endured until modern times. Indeed, it was the massive Russian mobilization in July 1914 in support of Orthodox Christian Serbia which led directly to World War I.


Brexit is a portmanteau word meaning “British exit from the European Union.” The 2016 referendum on Brexit in the UK was won with just under 52% of the votes cast – the support of less than 34% of the voting age population. The process took place in the worst possible conditions – false advertising, fake news, dismal voter participation, demagogy and xenophobia. Brexit is yet another example of the dangers of mixing representative government with government by plebiscite – other examples include the infamous Proposition 13 in California which turned one of the best public school systems in the nation into one of the worst and the vote for independence in Québec which, with a simple majority, would have torn Canada apart. Problem is, these simple majority referenda can amount to a form of mob rule.

The two components of the United Kingdom that will be most negatively affected by Brexit are the Celtic areas of Scotland and Northern Ireland; both voted to stay in the European Union – 62% and 55.8% respectively. Brexit will push Scotland toward independence, risking the breakup of the UK itself. The threat of Brexit has already reignited the tensions of The Troubles in Northern Ireland and the enactment of Brexit will bring back more violence and bloodshed.

We learn in school about English imperialism and colonialism – how the sun never sets on the British Empire and all that. Historically, the first targets of English imperialism were the Celtic peoples of the British Isles – the Welsh, the Scots and the Irish.

Incursion into Welsh territory began with William the Conqueror himself in 1081 and by 1283 all Wales was under the control of the English King Edward I – known as Edward Longshanks for his great height for the time (6’2”).

A series of 13 invasions into Scotland began in 1296 under that same celtiphobe English king, Edward I, who went to war against the Scottish heroes William Wallace (Mel Gibson in Braveheart) and Robert the Bruce (Angus Macfadyen in Robert the Bruce). So memorable was Edward’s hostility toward the Scots that on his tomb in Westminster Abbey is written

    Edwardus Primus Scottorum malleus hic est, pactum serva
(Here is Edward I, Hammer of the Scots. Keep the vow.)

In that most scholarly history of England, 1066 and All That, there is a perfect tribute to Edward I – a droll cartoon of him hammering Scots.

Edward I didn’t only have it in for the Welsh and the Scots but for Jews as well: in 1290 he issued the Edict of Expulsion, by which Jews were expelled from Merry England. His warmongering caught up with him though: Edward died in 1307 during a campaign against Robert the Bruce – though not of a surfeit but rather of dysentery.

The series continued until 1650 with an invasion led by Oliver Cromwell. To his credit, however, the Lord Protector (Military Dictator per Winston Churchill) revoked Edward’s Edict of Expulsion in 1657.

The story of English aggression in Ireland is even more damning. It started with incursions beginning in 1169 and the full-scale invasion launched by King Henry II in 1171. Henry (Peter O’Toole in The Lion in Winter and in Becket) had motivation beyond the usual expansionism for this undertaking: he was instructed by Pope Adrian IV by means of the papal bull Laudabiliter to invade and govern Ireland; the goal was to enforce papal authority over the too autonomous Irish Church. Adrian was the only English pope ever and certainly his motives were “complex.” His bull was a forerunner of the Discovery Doctrine of European and American jurisprudence which justifies Christian takeover of native lands (click HERE ). English invasions continued through to full conquest by Henry VIII, the repression of rebellions under Elizabeth I, and the horrific campaign of Oliver Cromwell. There followed the plantation system in Ulster – six of whose counties would become Northern Ireland – and a long period of repressive government under the Protestant Ascendancy. The Irish Free State was only formed in 1922 after a prolonged violent struggle and at the price of partition of the Emerald Isle into Northern Ireland and the Free State; the modern Republic of Ireland only dates from 1949. The “low-level war” known as The Troubles that began in the 1960s was triggered by the discrimination against Catholics maintained by the English-backed regime in the Ulster parliament. This kind of discrimination was endemic: according to memory, no Catholic was hired to work on the building of the Titanic in the Belfast shipyards; according to legend, the Titanic had “F__ the Pope” written on it; according to history, blasphemy does not pay. The Troubles were a violent and bitter period of conflict between loyalists/unionists (mainly Protestants who wanted to stay in the UK) and nationalists/republicans (mainly Catholics who wanted a united Ireland).
The Troubles finally ended with the Good Friday Agreement of 1998 that was possible because of joint membership in the EU which made Northern Ireland and the Republic both part of a larger political unit and which, for all practical purposes, ended the frontier separating them – a frontier which until then was manned by armed British soldiers.

Economically and politically, the EU has been good for the Irish Republic, which has become a prosperous, modern, Scandinavian-style European country – indeed, Irish-born New York Times writer Timothy Egan titled his July 20th op-ed “Send me back to the country I came from.”

Scotland too benefits from EU membership, from infrastructure investments, worker protection regulations and environmental standards – things dear to the socially conscious Scots.

All this history makes the Brexit vote of 2016 simply amoral and sadistic. To add to that, the main reason Theresa May’s proposal for a “soft Brexit” with a “backstop” was repeatedly shot down was its customs-union clause, which would have forestalled the closing of the border in Northern Ireland. On the other hand, reversing the process because of the harm it would inflict on peoples who have been victims of British imperialism over the centuries would have been a gesture of Truth and Reconciliation by the English. Alas, Brexit is now a fact, Boris Johnson having won a majority in Parliament with less than 50% of the vote – the English rotten boroughs are the U.K. analog of the U.S. Electoral College.

The Third Person VI: The Pax Romana

From the outset in the New Testament, the Epistles and Gospels talk of “the Father, the Son and the Holy Spirit.” God the Father came from Yahweh of the Hebrew Bible; the Son was Jesus of Nazareth, an historical figure. For the Holy Spirit, things are more complicated. For sources, there are the Dead Sea Scrolls of the Essenes with the indwelling universal presence of the Holy Spirit; there is the Aramaic-language literature of the Targums with the Lord’s Shekinah, who stands in for Him in dealing with the material world and who enables prophecy by humans; and there is the Wisdom literature, such as the Wisdom of Solomon, where Sophia provides a feminine divine presence.

As the Shekinah becomes an independent deity in the Kabbalah, so in the New Testament the Holy Spirit is a full-fledged divine actor. Like the Shekinah, the Holy Spirit’s announced role is to represent the Godhead in the material world, to guide the lives of the believers and to inspire their prophesying.

The New Testament is written in Greek, not Aramaic and not Hebrew. In the first century, the leadership of the nascent Christian community quickly passes from the apostles and deacons in the Holy Land to the Hellenized Jewish converts of the Diaspora and the Greek speaking gentiles of the Roman Empire. Traditional Jewish practices such as male circumcision are dropped, observing the Law of Moses is no longer obligatory and the Sabbath is moved to Sunday, the day of rest of the Gentiles.

But now God has become three – the Father, the Son, the Holy Spirit. Monotheism is definitely in peril here. Add to that the Virgin Mary together with the Immaculate Conception and the Assumption and you have four divinities to deal with.

Indeed, the early Christians assigned to Mary functions once assumed in the world of Biblical Palestine by Asherah, the Queen of Heaven, and then by the Shekinah of Jewish lore. Christianity would thus not suffer from the unnatural absence of a feminine principle as did the Judaism of the Pharisees of the Temple. However, despite accusations of Mariolatry, Christianity has never deified the Mother of God, only canonized her. Even so, that still leaves us with three divinities where once there was one.

When it comes to the messianic role of Jesus, the New Testament writers do strive to calibrate their narratives with the prophecies and pronouncements of the Hebrew Bible. The situation is different when it comes to the Holy Spirit. In Acts 2, the Holy Spirit descends upon the apostles in the form of tongues of fire when they are assembled in Jerusalem for the Shavuot holiday (aka The Feast of Weeks). This holiday takes place on the fiftieth day after Passover; it celebrates both the spring harvest and the day that God gave The Torah to Moses and the nation of Israel. This is a link to the Essenes’ doctrine where this feast was a special time of connection between the believer and the indwelling Holy Spirit. But that link is not made clear in Acts, and even the origin of that Jewish feast is obscured by the fact that the New Testament gives it the Greek name of Pentecost, simply meaning “fifty.”

In general, in the New Testament, there is no explicit association of the Holy Spirit with the Shekinah or with Wisdom/Sophia. Moreover, the way the early Christians handled this complex situation involving the Holy Spirit would not be to go back to the practices of folk or formal Judaism, or to the Essene scrolls or to the Hebrew scriptures to sort it out; rabbinical sources such as the Talmud would not be consulted; Aramaic language sources such as the Targums would not be mined. Rather in dealing with the Holy Spirit and with the charge of polytheism, the Gentile Christians would follow the lead of Greek philosophy and formulate their theology in a way so as to make Christianity intellectually reputable in Greek cultural terms.

With such an important role in the New Testament, the Holy Spirit becomes theologically significant in Christianity – but, in terms of the beliefs and practices of the faithful, the Holy Spirit becomes something of a silent partner. The feminine role of the Shekinah and that of Wisdom/Sophia are taken over by Mary, the Mother of Jesus; the earliest Christian writers place the Holy Spirit in the Godhead with God the Father and God the Son and they do so as Wisdom/Sophia; but later Wisdom becomes identified with the Son of God. In the end, for the faithful, the role of the Holy Spirit as indwelling individual guide would be usurped by patron saints and guardian angels; for the Holy Spirit, the only substantial role left is to round out the Holy Trinity.

As the war with Cleopatra and Mark Antony comes to an end, Augustus becomes the first Roman Emperor and a (relative) peace that would last two hundred years, the Pax Romana, leads to accelerated commercial and cultural exchange throughout the Mediterranean world. Indeed, the cultural world of the Roman Empire is in full ferment. As the Pax Romana has facilitated the spread of Christian ideas throughout the empire, so too it has provided a platform for competing philosophies, theologies and mystical practices of many sorts. So encounters with developments in Greek and Roman philosophy, with the flow of new ideas from Messianic Judaism and alternative Jewish/Christian groups, with eastern religions, with mystery religions and on and on would lead to difficult theological arguments and would drive centrifugal forces within the Christian movement itself, leading to almost countless heresies to be denounced.

Given the different theological roles of the Father, Son and Holy Spirit in their theology, the early Christians were naturally accused of polytheism. To counter this, the simplest position was called Sabellianism or monarchianism or modalism – Father, Son and Holy Spirit are just manners of speaking, façons de parler, to describe God as He takes on different roles. However, this view came to be condemned multiple times as a heresy by Church authorities, basically because it implies that God the Father had to somehow endure the pain of the crucifixion.

So alternative solutions were proposed. The adoptionist position was that God the Son was a human elevated to the rank of Son of God, the Holy Spirit also being a creation of God the Father. In the form of Arianism, where God the Son is not co-eternal with God the Father but was begotten by the Father at some point in time, this kind of position stayed current in Christianity for centuries. The position that eventually emerged victorious was trinitarianism: “three co-equal persons in one God.” This last phrase seems straightforward enough today, but a proper parsing requires some explanation. For one thing, this formulation does not come from the Hebrew scriptures, the Targums or rabbinical sources. To be fair, there is one place in the Hebrew Bible where God does appear as a threesome: in Genesis 18, the Lord visits Abraham to announce that his wife Sarah shall bear a child. The first two verses are

       The LORD appeared to Abraham near the great trees of Mamre while he was sitting at the entrance to his tent in the heat of the day. Abraham looked up and saw three men standing nearby. When he saw them, he hurried from the entrance of his tent to meet them and bowed low to the ground.

This was taken as a pointer to the Trinity by some early Christian writers (St. Augustine among them) but hardly suffices as an explanation of the evolution of the doctrine. So the development of the theology of the Holy Trinity is still a mystère of its own.

In the Greco-Roman world of the time of Christ, there was a view that prefigured the Christian Trinity that originated with Plato and that was followed by the Stoics: there was an abstract, non-material God from all eternity from whom came the Logos (the Word of God) who was responsible for the Creation of the material universe. So already from the Greek philosophical world, we have the idea of a dyadic Godhead.

This cosmogony infiltrated the Hellenized Jewish milieu as well. In a kind of last attempt to reconcile Jewish and Greek culture in the Hellenistic world, a contemporary of Jesus of Nazareth, Philo of Alexandria (aka Philo Judaeus) developed an entire philosophy complete with a trinity: there was Yahweh of the ineffable name, the Wisdom/Sophia of the Wisdom of Solomon, and the Logos (the Word) of Plato who was responsible for the actual creation of the physical universe. Philo wrote (Flight and Finding XX (108, 109))

    because, I imagine, he [the Logos, the Word of God] has received imperishable and wholly pure parents, God being his father, who is also the father of all things, and wisdom being his mother, by means of whom the universe arrived at creation

The theology of the Holy Spirit is called pneumatology from pneuma the Greek word for spirit (or breath or wind). On the Christian side of the fence then, Philo’s view – where the Holy Spirit takes on the role of queen consort of El/Yahweh – is known as consort pneumatology.

Another movement with roots in the Jewish/Christian world of Alexandria that impacted early Christianity and Rabbinical Judaism was Gnosticism.  This topic is certainly worth an internet search. Very, very simply put, at its core there was the belief that God, the Supreme Being, is unknowable, that the material world was created by a lesser figure known as a demiurge, that the material world is evil in itself and that only knowledge (gnosis in Greek) coming from God can lead to salvation. At its root, Gnosticism is based on a dualism between the forces of good and the forces of evil, between God and lesser deities. Gnosticism was an important movement in antiquity with an impact on Christianity, Judaism and Islam. The battle between St Michael and Lucifer in the Book of Revelation can be understood in Gnostic terms. Furthermore, Gnosticism contributed to the elaboration of the Kabbalah and Gnostic strains abound in the Quran. It gave rise to religious systems such as Manichaeism (with its competing forces of good and evil).

Manichaeism’s last stand in the Western Christian world was the Cathar (Albigensian) movement of the South of France in the Middle Ages. To aid in the destruction of Cathar civilization with its indigent holy men and holy women and its troubadour poets, Pope Innocent III created the first Papal Inquisition and in 1209 had the French king launch a horrific crusade against them. For his part, in imitation of the Cathars, St. Dominic founded the mendicant order of the Dominicans, but then it was this same order that led the Inquisition’s persecution of the Cathars – all this is bizarrely celebrated by Soeur Sourire, the Singing Nun, in her pop hit “Dominique”:

    “Dominique … combattit les Albigeois”

For the vocal click here.

The Mandaeans are a Gnostic, dualistic sect that is still active in Iraq. One of the many terrible side effects of the War in Iraq is the oppression (approaching ethnocide) of religious groups such as the Chaldean Christians, the Yazidis and the Mandaeans. The Mandaeans numbered some 60,000 in 2003 but, with the war and Islamic extremism, many have fled and their numbers are down to an estimated 5,000 today. When you add to this how conflict in the Middle East has led to the end of the once thriving Jewish communities of Mesopotamia, we are witnessing a terrible loss of religious diversity akin to the disappearance of species.

For the history of Christian Gnosticism in the early Christian era, we have the writings of their opponents and texts known as the Gnostic Gospels. Again very simply put, to the Gnostic scheme these Christians add that the Supreme Being sent Christ to bring humans the knowledge (gnosis) necessary for redemption. Concerning the Holy Spirit, we know from the Gnostic Gospel of St. Philip that theirs, like Philo’s, was a consort pneumatology. Some of the names of these Gnostic Christians pop up even today; for example, Jack Palance plays one of them, Simon Magus, in the 1954 movie The Silver Chalice. Being portrayed by Jack Palance certainly means you are villain enough, but one can add to that the fact that the sin of simony (selling holy offices) is named for Simon Magus (Acts 8:18).

So with all these competing philosophies and theologies – from within Christianity and from outside Christianity – to contend with, just how did trinitarianism emerge as the canonical position? Mystère.

The Drums of War

The history of civilization was long taught in schools as the history of its wars – battles and dates. Humans are unique this way: male animals fight amongst themselves for access to females but leave each other exhausted and perhaps wounded, yet rarely dead. What cultural or biological function does war among humans actually have?

In his classic dystopian novel 1984, Orwell described a nation committed to endless war, the situation the US finds itself in today. In the book, the purpose of these wars is to control the population with patriotic rallies and surveillance, to cover up failings of the leadership and to get rid of excess industrial production. Some would also point to the powerful side effects of technologies first developed for the military – the Internet among them – that contribute to the headlong plunge into an Orwellian future.

Our defense industry today is perfectly suited for the third task Orwell lists: after all, it makes things that destroy themselves. This industry is so dominated by a small number of giant companies that they can dictate costs and prices to the Pentagon, knowing the military budget will bloat to oblige them. This military-industrial complex lives outside the market-based capitalist system; the current move to merge Raytheon and United Technologies will be one more step in concentration of this oligopoly, one that does not brook competition. We can’t say that Eisenhower didn’t warn us.

But the human price of war is very, very high. So how does a modern nation-state structure its sociology to enable it to endure wars? For one point of view, the French feminist author Virginie Despentes puts it this way: with the citizens’ army, men have a “deal” – be willing to fight the nation’s wars in exchange for a position of prestige in society. With this setup, the position of women is made subordinate to that of men.

However, this compact is being eroded today: the US has a professional army, as do France, England, Germany et al. Moreover, the professional American army recruits women to bolster the level of IQ in the military which is so important for today’s technology based warfare! The erosion (in the US since the Vietnam draft) of the citizens’ army as a source of power for men in society could be a factor in the rise of feminist activism.

The irony is that while endless wars continue, the world is actually less violent today than it has been for a long time now. According to Prof. Steven Pinker, author of The Better Angels of Our Nature, the rate of death in war has fallen by a factor of 100 over a span of 25 years. The wars of today just do not require the great conscript armies of the world wars such as the massive force of over 34 million men and women that the Soviet Union put together in WWII while suffering casualties estimated to be as high as 11 million. But at least no one has yet called for a return to major wars in order to “right things” for men in Western societies.

For its part, the US has been at war constantly since the invasion of Afghanistan in 2001 and, indeed, since December 1941 but for some gaps – very few when you include covert operations such as support for Saddam Hussein during the decade-long Iran-Iraq War of the 1980s and the not-so-covert downing of an Iranian civil airliner with 290 people on board by the USS Vincennes in 1988; for more in the Reagan era, add the Iran-Contra scandal where the US backed counter-revolutionaries attempting to overthrow the democratically elected government in Nicaragua (carried out in flagrant violation of US law but, have no fear, all convicted perpetrators were pardoned by G.H.W. Bush). However, the armed forces are no longer a citizens’ army but rather a professional force in the service of the US President, Congress having given up its sole right to declare war (same for tariffs). This arrangement distances the wealthy and members of the government from war itself and insulates them and most of the population from war’s human consequences.

In addition to serving Orwell’s purposes, US wars have consistently been designed to further the interests of corporations – Dick Cheney, Halliburton and Iraq; multiple incursions in Central America and the Caribbean to benefit United Fruit; the annexation of Hawaii to benefit the Dole Food Company; shipping and a convenient revolution in Colombia to create Panama in order to build the canal. As another such example, the source of the US-Cuba conflict stems from the Cuban nationalizations of United Fruit plantations and the Cuban expropriations of hotels and gambling operations belonging to Meyer Lansky and the Mafia, back in 1959. From there things escalated to the Cuban Missile Crisis of 1962 and on to the current bizarre situation.

As the forever wars endure, on the home front the military are obsequiously accorded veneration once reserved for priests and ministers. Armistice Day, which celebrated the end of a war, now is Veterans Day making for two holidays honoring the armed forces, one in Spring and one in Fall. Sports events routinely are opened by Marine Color Guards accompanied by Navy jet flyovers. In fact, the military actually pays the National Football League for this sort of pageantry designed to identify patriotism with militarism.

The courage and skill of US armed forces members in the field are exemplary. But what makes all this veneration for the military suspect is how little success these martial efforts have been having. Let’s not talk about Vietnam. The first Iraq War (Iraq I) did drive the Iraqis out of Kuwait but all that was made necessary by the U.S. ambassador’s giving Saddam Hussein an opening to invade in the first place – Iraq I also failed to remove the “brutal dictator” Saddam, a misstep which later became a reason for Iraq II. The war in Afghanistan began with the failure to capture Osama Bin Laden at Tora Bora and today the Taliban are as powerful as ever and the poppy trade continues unabated. Iraq II led to pro-Iranian Shiite control of the government, Sunni disaffection and ISIS. Any progress in Syria or Iraq against ISIS has been spearheaded by the Kurds, allies whom the US has thrown under the bus lest the US incur the wrath of the Turks. The Libya that NATO forces bombed during 7 months in 2011 is now a failed state.

There are 195 countries in the world today. The US military is deployed in 150 of them. The outsized 2018 US military budget of some $650 billion was larger than the sum of the next seven such budgets around the world. Clausewitz famously wrote that war is the continuation of politics by other means. But military action has all but replaced diplomacy for the US – “if you have a hammer, everything looks like a nail.” For budget details, click HERE .

Recently, the US was busy bombing Libya and pacifying Iraq. Right now the US is involved in hostilities in Afghanistan, Syria and Yemen. Provocations involving oil tankers in the Gulf of Oman appear to be leading to armed conflict with Iran; this is all so worrisomely reminiscent of WMDs and so sadly similar to the fraudulent claim of attack in the Gulf of Tonkin that led to the escalation of the Vietnam War (as revealed by the Pentagon Papers, McNamara’s memoirs and NSA documents made public in 2005).

London bookmakers are notorious for taking bets on American politics. Perhaps, they also could take bets on where the US is likely to invade next. Boots on the ground in Yemen? Instead would the smart money be on oil-rich Libya with its warlords and the Benghazi incident? But Libya was already bombed exhaustively – interestingly right after Gaddafi tried to establish a gold-based pan-African currency, the dinar, for oil and gas transactions. Others would bet on Iran since the drums of war have already started beating and the unilateral withdrawal of the US from the Iran nuclear deal does untie the President’s hands; moreover and ominously for them, Iran just dropped the dollar as its exchange currency. Also invading the Shiite stronghold would ingratiate the US with Sunni ally Saudi Arabia (thus putting the US right in the middle of a war of religion); then too it could please Bibi Netanyahu who believes attacking Persia would parallel the story line of the Book of Esther. What about an invasion of Iran simply to carve out an independent Kurdistan straddling Iran and Iraq – to make it up to the Kurds? And weren’t American soldiers recently killed in action in Niger? And then there’s Somalia. Venezuela next? Etc.

As Pete Seeger sang, “When will they ever learn”?

The Third Person V: The Redacted Goddess

From the Hebrew Bible itself, it is clear that Canaanite polytheism persisted among the Israelites throughout nearly all the Biblical period; this is attested to by the golden calf, by the constant re-appearances of Baal, by the lamentations and exhortations of the prophets, etc. What recent scholarship has brought to the fore, however, is that the female Canaanite goddess Asherah was also an important part of the polytheism of the Israelites and their world.
In addition to more rigorous readings of the Hebrew Bible by historians and translators, modern archaeological scholarship has done much to fill in the picture of Asherah’s importance in Biblical Palestine. Her connection with the Shekinah and the Shekinah’s persistence over a long time in Judaism have also been developed by historians like Raphael Patai, notably with his seminal study The Hebrew Goddess (1967).
In the Hebrew Bible, there are multiple references to Baal and to other pagan gods – is Asherah among them? Mystère.
On her own, Asherah is actually referenced some 40 times in the Hebrew Bible in the pluralized form Asherim, but her presence has been covered over by editorial sleights of the pen.
There are references to Asherah in the First Book of Kings, Chapter 11, where Solomon indulges in idolatry and builds worship sites for her – in this text she is invoked as the goddess Ashtoreth of the Zidonians (a rival Canaanite group). Indeed he built altars for multiple gods and goddesses; Solomon, it appears, was working overtime to indulge his many foreign wives and concubines.
But the term Asherim could refer both to Asherah herself (as with Elohim and El) and to eponymous objects associated with her cult, in particular to shrines under trees known as Asherah Trees and wooden figures known as Asherah Poles. The trick of the interpreters and translators of the Hebrew Bible has been to systematically render Asherim as “wooden poles” or “wooden groves” or simply as the anonymous “groves.”
The Septuagint translation of the Hebrew Bible (3rd and 2nd centuries BC) into Greek religiously follows this practice and this trick was perpetuated both by St. Jerome in his Latin translation and by the authors of the King James Bible.
For example, in the Hebrew Bible, in Judges 3:7 we have a reference to Baal and Asherah; in the New King James Bible (1982), the Hebrew text is translated as
    So the children of Israel did evil in the sight of the LORD. They forgot the LORD their God, and served the Baals and Asherahs.
While in the classic King James version, we have
    And the children of Israel did evil in the sight of the LORD, and forgat the LORD their God, and served Baalim and the groves.
The Catholic Douay-Rheims Bible follows the Latin Vulgate of St. Jerome and it too renders “Asherim” as “groves” in this verse and elsewhere.
In 1 Kings, worship of Asherah was encouraged in the court of King Ahab by his queen Jezebel which led the prophet Elijah to rail against the presence of prophets of Baal and Asherah at the court: the New King James translation of 1 Kings 18:19 reads
    Now therefore, send and gather all Israel to me [Elijah ] on Mount Carmel, the four hundred and fifty prophets of Baal, and the four hundred prophets of Asherah, who eat at Jezebel’s table.
Again, note that this is a new translation; the original King James reads
    Now therefore send, and gather to me [Elijah] all Israel unto mount Carmel, and the prophets of Baal four hundred and fifty, and the prophets of the groves four hundred, which eat at Jezebel’s table.
For a picture of Jezebel, Ahab and Elijah getting together, click HERE
Asherah was even present in the Temple in Jerusalem – statues of her were erected there during the time of King Manasseh (2 Kings 21:7) but then smashed during the reign of his grandson, the reformer King Josiah (2 Kings 23:14). This outbreak of iconoclasm is given in the New King James Bible as
    He [Josiah] also tore down the quarters of the male shrine prostitutes that were in the temple of the LORD, the quarters where women did weaving for Asherah.
Which can be contrasted with the King James text
    And he brake down the houses of the sodomites, that were by the house of the LORD, where the women wove hangings for the grove.
Here too it is only modern translations of the Hebrew text that bring Asherah out into the open.
Josiah’s reform did not long survive his reign, as the following four kings “did what was evil in the eyes of Yahweh” (2 Kings 23:32, 37; 24:9, 19). It is not clear just what those evils were – but they must have been pretty bad to earn a reference in the Bible.
Monotheism was thus slow to become dominant among the population. Right up to the destruction of Solomon’s Temple in Jerusalem by the Babylonians and the Babylonian Captivity, we have Biblical references to idolatry among the Israelites and, in particular, to the worship of Asherah. Indeed, one of Asherah’s titles was Queen of Heaven and this is how she is referred to in Jeremiah 7:18 when the prophet is lamenting the Israelites’ continuing idolatry in the period just before the destruction of the Temple:
    The children gather wood, and the fathers kindle the fire, and the women knead their dough, to make cakes to the queen of heaven, and to pour out drink offerings unto other gods, that they may provoke me to anger.
After the exile in Babylon was brought to an end by the pro-Israelite Persian King Cyrus the Great, the construction of the 2nd Temple in Jerusalem began and was completed in 515 BC; it is at this time that monotheism built around Yahweh finally becomes firmly established as the official version of Judaism. So by the time of the Septuagint two centuries later, “Asherim” is systematically translated into Greek as “groves.” For official Judaism and its male-based monotheism, it was important to redact references to El/Yahweh’s consort Asherah, the Queen of Heaven.
But while the female principle of Asherah might have been expunged from the Judaism of the Hellenized diaspora and from “high temple” Judaism in Jerusalem itself, popular religion in the countryside of Biblical Palestine was another thing entirely. As heiress to Asherah, the Shekinah emerged in the religious practices of the Aramaic speaking Jews of the region – the world of John the Baptist and Jesus of Nazareth. The Shekinah first figures in the Targums, Aramaic writings from Biblical Palestine, and she appears in the Talmud as well as in the Kabbalah. As Yahweh became more distant from the material world, more transcendent, it is this Shekinah who assured the link between Yahweh and that same material world and thereby provided the link between the Jewish world in the Palestine of the time of Christ and the Christian Holy Spirit.
Then too, following the suppression of Asherah in official Judaism, from Hellenic and Mediterranean sources there came the feminine principle Wisdom/Sophia which infiltrated Proverbs and Isaiah and which is the centerpiece of the Wisdom of Solomon (aka the Book of Wisdom), written in Greek in the 1st century BC. Early Christians confirmed the link between Wisdom/Sophia and the Holy Spirit in writings referring to the Father and Son and Holy Spirit, in identifying the Seven Pillars of Wisdom with the Seven Gifts of the Holy Spirit, etc.
A third influence on the Christian Holy Spirit would most likely have come from the Essenes who insisted on the role of the Holy Spirit as dweller in the hearts of men and women; this aspect of the Holy Spirit is shared by the Shekinah whose name literally means “indweller”.
However, while the term Shekinah and the term Holy Spirit became interchangeable in the Aramaic and the Hebrew Talmudic writings, the Shekinah as such did not make the transition from the folk Judaism of the Holy Land to the Greek speaking Gentile world; only the form Holy Spirit did.
The Gospels and the Epistles were all written in Greek. After the Crucifixion and the Resurrection, the Christians were a small group in Jerusalem clustered around St James – the apostle referred to as “the brother of Jesus” by Protestant scholars and as “James the Lesser” by Catholic scholars. Things moved quickly. Their mission facilitated by the Pax Romana, the Greek speaking teams under the Roman citizen Paul of Tarsus “hijacked” Christianity and delivered it to activist converts of the Greek speaking world at the end of the Hellenistic Era. In the period from the Crucifixion to the completion of the Epistles and Gospels, Christianity was taken from the Aramaic speaking Jewish population of Judea where it had begun and turned over to the Greek speaking population of the Mediterranean.
With the First Jewish War (66-73 AD) and the destruction of the Temple (70 AD), the early Gentile Christians of the Roman Empire would have had every political reason to separate their young movement from its Jewish roots. There was also the theological motivation of securing control over the interpretation of the Hebrew scriptures and prophecies.
In fact, in making the break from the Jewish world, the first thing the new Christians did was to eliminate the Semitic practices of dietary laws and male circumcision. In contrast, when the 3rd Abrahamic religion, Islam, rose among a Semitic people some 500 years later, both practices continued to be enforced.
As related in the post “Joshua and Jesus” on this blog site, the Aramaic/Hebrew name of Jesus is Yeshua, which is transliterated directly into English as Joshua but which becomes Jesus after being passed through the Greek language filter. Indeed, had the Gospels been written in Aramaic, the language of the Targums and the language of Yeshua and his disciples, we would have a much better sense today of what the Christian Savior actually said, taught and did – and why. In particular, the conflict between the Jews of the countryside and the Pharisees of the Temple in Jerusalem would have been better substantiated. At the nativity, it would have been “Messiah Adonai” in the language the shepherds spoke rather than the “Christ the Lord” of the Greek version, the Aramaic being much better at capturing the spirit of the Hebrew scriptures. As it is, one of the rare times Jesus speaks Aramaic in the Gospels comes as He is dying on the cross and cries out (Matthew 27:46):
    Eli, Eli, lama sabachthani? that is to say, My God, my God, why hast thou forsaken me?
In an Aramaic New Testament, the feminine noun “Shekinah” would likely have been used instead of “Holy Spirit”; in contrast, the noun “spirit” is neuter in Greek and masculine in Latin. Severed, however, from its roots, the theology of the Holy Spirit took on a life of its own in the hands of rationalist intellectuals of the Graeco-Roman world. Affaire à suivre.

The Third Person IV: El and Yahweh

Quietly, in post-Biblical Judaism, there arises an actor, God’s Shekinah, who substitutes for God in interactions with the material world. The term Shekinah does not occur in the Hebrew Bible. It did not originate among the Hellenized Jews of Alexandria or Antioch. It did not originate among the Pharisees at the Temple in Jerusalem. It did not originate in Talmudic writings. It did not come from the scrolls of the Essenes. Rather, it came from the Aramaic speaking land of Biblical Palestine in the years leading up to the time of Christ; it first appears in the Targums, an Aramaic oral and written literature consisting of commentary on passages from the Bible and of homilies to be delivered in the synagogues in the towns and villages of Biblical Palestine; this is the world of John the Baptist and of Jesus of Nazareth, the two cousins who both met a politically charged death. This Shekinah of the folk Judaism of Palestine is a precursor of the Holy Spirit, the manifestation of God in the physical world as a presence or actor. The God of Jewish monotheism is a male figure; however, the Shekinah is female (and in the Kabbalah actually becomes a full-fledged goddess in her own right). So how did this female principle enter Judaism in the post-Biblical period, in the years before the advent of Christianity at a time when a monotheism built around an aloof male deity was at last firmly established in Judaism? Mystère.

To get to the bottom of this, we have to put the history of Judaism in a larger context. First, in the Middle East and the Mediterranean area, the local deities were generally headed up by a husband-wife pair: Zeus and Hera among the Greeks, Jupiter and Juno among the Romans, Anu and Ki among the Sumerians, Osiris and Isis among the Egyptians, El and Asherah among the Canaanites.

The Hebrew Bible assigns a Sumerian origin to Abraham who hails from the legendary city Ur of the Chaldees. Then there is the enslavement of the Israelites in Egypt and their escape from the Pharaoh followed by the (bloody) conquest of the Promised Land of Canaan. However, from the historical and archeological evidence, it is best to consider the Israelites simply as part of the larger Canaanite world. For a lecture by Professor William Dever on supporting archaeological research, click HERE; for historical scholarship by Professor Richard Elliott Friedman, see The Exodus (Harper Collins, 2017). Admittedly, such revisionism provides a less exciting story than the Biblical tale with its plagues and wars, but it does place the Israelites among the creators of the alphabet, one of the most powerful intellectual achievements of civilization.

As the Israelites differentiated themselves from other dwellers in the Land of Canaan, they replaced polytheism with monotheism. But ridding themselves of the principal god El would prove complicated. First, this name for God is used some 2500 times in the Hebrew Bible in its grammatically plural form Elohim; when Elohim is the subject of a sentence, the verb, however, is third-person singular – this is still true in Hebrew in Israel today. Similarly, the god Baal is often referred to in the plural form Baalim in the Hebrew Bible.

It is also noteworthy that the Arabic name Allah is derived from the same Semitic root as “El” and means “The [single] God.” In fact, the name Israel itself is usually construed to mean “may El rule”; however, others claim, with scholarship at hand, that the origin is the triad of gods Is Ra El, the first two being Egyptian deities. For its part, Beth-El means “House of God”; some scholars trace Babel back to the Akkadian for “Tower of God.”

“El” also survives in the names of angels such as Michael, Raphael and Gabriel: respectively, “who is like God,” “healer from God,” and “God is my strength.” Interestingly, despite their frequent appearances there, angels are not given names in the Hebrew Bible until books written in the 2nd century BC: in the Book of Daniel, Michael and Gabriel appear; Raphael appears in the Book of Tobit – despite its charm, this work is deemed apocryphal by Jewish and Protestant scholars. In the Quran, Gabriel and Michael are both mentioned by name. In the Gospels, the angel Gabriel plays an important role while Michael is featured in the Book of Revelation in the story of the fallen angels and in the Epistle of St. Jude where he is promoted to Archangel.

Racing forward to modern times, the angelic el-based naming pattern continues with the names of Superman’s father Jor-El and Superman’s own name Kal-El. That the authors of Superman, Joe Shuster and Jerry Siegel, were both children of Jewish immigrants may well have had something to do with this.

But in addition to Elohim, there is another denotation in the Hebrew Bible for the God of the Israelites – the four Hebrew consonants יהוה (YHWH in the Latin alphabet); in the Christian world this name is rendered simply as God or as Yahweh or less frequently as Jehovah. This designation for God is used over 6500 times in the Hebrew Bible!

One reason for the multiple ways of designating God is that the Hebrew Bible itself is not the work of a single author. In fact, scholars discern four main sources for the Torah, the first five books, known as the Elohist, the Yahwist, the Deuteronomist and the Priestly Source. The texts of these different authors (or groups of authors) were culled and merged over time by the compilers of the texts we have today. As the nomenclature suggests, the Elohist source generally uses Elohim to refer to God while the Yahwist source generally uses YHWH.

This kind of textual analysis serves to explain some discrepancies and redundancies in the Hebrew Bible. For example, there are two versions of the creation of humankind in Genesis. The first one appears at the beginning (Genesis 1:26-28) and concludes the sixth day of Creation; it is attributed to the Elohist author – there is no reference to Adam or Eve, and women and men are created together. The New King James Version has

    Then God said, “Let Us make man in Our image, according to Our likeness; let them have dominion over the fish of the sea, over the birds of the air, and over the cattle, over all the earth and over every creeping thing that creeps on the earth.” So God created man in His own image; in the image of God He created him; male and female He created them. Then God blessed them, and God said to them, “Be fruitful and multiply; fill the earth and subdue it; have dominion over the fish of the sea, over the birds of the air, and over every living thing that moves on the earth.”

While these poetic verses are well known, there is that more dramatic version: Adam and Eve, the tree of knowledge of good and evil, the serpent, the apple, original sin, naked bodies, fig leaves, etc. This vivid account begins in the following chapter, Genesis 2:4-25, and continues through Chapter 3; this version of events is attributed to the Yahwist author.

For Michelangelo’s treatment of the creation, click HERE.

Another example of a twice told tale in Genesis is the account of the deluge and Noah’s Ark – but in this case doing things in pairs is most appropriate!

There are various theories as to the origin of the figure of Yahweh. One school of thought holds that a god with the name Yahweh was a Canaanite deity of lower rank than El who became the special god of the Israelites and displaced El.

However, in the Bible itself, in Exodus, the Yahwist writer traces the name to the time that God in the form of a burning bush is speaking to Moses; here is the King James text of Exodus 3:14

    And God said unto Moses, I AM THAT I AM: and he said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you.

The thinking is that the consonants YHWH form a code – a kind of acronym – for the Yahwist’s “I AM”. With this interpretation, these four letters become the tetragrammaton – the “ineffable name of God,” the name that cannot be said. Indeed, when reading the Hebrew text in a religious setting, there where the tetragrammaton YHWH is written, one verbalizes it as “Adonai” (“Lord”), thus never pronouncing the name of God itself. With this, the God of the Israelites emerges as a transcendent being completely different from the earthy pagan deities – a momentous step theologically.

But in the original Canaanite Pantheon, El was accompanied by his consort Asherah, the powerful Mother Goddess and Queen of Heaven. For images of Asherah, click HERE.

Moreover, Asherah is also described as the consort of Yahweh on pottery found in Biblical Palestine. Indeed, inscriptions from several places including Kuntillet ‘Ajrud in the northeast Sinai have the phrase “YHWH and his Asherah.” For an excellent presentation of the archeological record of Asherah worship by professor and author William Dever, click HERE.

It would seem that Asherah is a natural candidate to provide the missing link to the Shekinah which would solve our current mystery. But Judaism is presented to us as a totally androcentric monotheism with no place for Asherah or any other female component. Indeed, Asherah will not be found in the Septuagint, the masterful Greek translation of the Hebrew Bible done in Alexandria in the 3rd and 2nd centuries BC; Asherah will not be found in the Latin Vulgate, the magisterial translation of the Bible into Latin done in the 4th Century by St. Jerome; and Asherah will not be found in the scholarly King James Bible. Something’s afoot! A case of cherchez la femme! Have references to her simply been redacted from the Hebrew Bible and its translations? Mystère.

The Third Person III: The Shekinah

To help track the Jewish origins of the Christian Holy Spirit, there is a rich rabbinical literature to consider, a literature which emerged as the period of Biblical writing came to an end in the centuries just before the advent of Christianity. Already in the pre-Christian era, the rabbis approached the Tanakh (the Hebrew Bible) with a method called Midrash for developing interpretations, commentaries and homilies. Midrashic practice and writings had a real influence on the New Testament. St. Paul himself studied with the famous rabbi Gamaliel (Acts 22: 3) who in turn was the grandson of the great Talmudic scholar Hillel the Elder, one of the greatest figures in Jewish history; his simple and elegant formulation of the Golden Rule is often cited today:

    “What is hateful to you, do not do to your fellow: this is the whole Torah; the rest is the explanation; go and learn”

Hillel the Elder is also famous for his laconic leading question

    “If not now, when?”

For an image of Hillel, click HERE.

In the writings of St Paul, this passage from the Epistle to the Galatians (5:22-23) is considered an example of Midrash:

    But the fruit of the Spirit is love, joy, peace, longsuffering, gentleness, goodness, faith,

    Meekness, temperance: against such there is no law.

The virtues on this list are called the Fruits of the Holy Spirit and the early Christians understood this Midrash to be a reference to the Holy Spirit. Just as the Seven Pillars of Wisdom in the Book of Isaiah provide the answer to Question 177 in the Baltimore Catechism, so these verses of Paul provide the answer to Question 719:

Q. Which are the twelve fruits of the Holy Ghost?

A. The twelve fruits of the Holy Ghost are Charity, Joy, Peace, Patience, Benignity, Goodness, Long-suffering, Mildness, Faith, Modesty, Continency, and Chastity.

The numerically alert will have noted that St Paul only lists nine such virtues. But it seems that St. Jerome added three extra fruits to the list when translating Paul’s epistle from Greek into Latin and so the list is longer in the Catholic Bible’s wording of the Epistle!

It is also possible that Jerome was working with a text that already had the interpolations. The number 12 does occur in key places in the scriptures – the 12 sons of Jacob whence the 12 tribes of Israel, the 12 apostles, and so on. Excessive numerological zeal on the part of early Christians plus a certain prudishness could well have led to the insertion of modesty, continence and chastity into the list. In the Protestant tradition and in the Greek Orthodox Church, the number still stands at 9, but that is still numerically pious since there are 9 orders of Angels as well.

For some centuries before the time of Christ, the Jewish population of Biblical Palestine was not Hebrew speaking. Instead, Aramaic, another Semitic language spoken all over the Middle East, had become the native language of the people; Hebrew was preserved, of course, by the Scribes, Pharisees, Essenes, Sadducees, rabbis and others directly involved with the Hebrew texts and with the oral tradition of Judaism.

The Targums were paraphrases of passages from the Tanakh, together with comments or homilies, that were recited in synagogues in Aramaic beginning some time before the Christian era. The leader, the meturgeman, would recite the paraphrase and add commentary, all in Aramaic, making the text more comprehensible and more relevant to those assembled in the synagogue.

Originally, the Targums were strictly oral and writing them down was prohibited. However, Targumatic texts appear well before the time that Paul was sending letters to converts around the Mediterranean. In fact, a Targumatic text from the first century BC, known as the Targum of Job, was discovered at Qumran, the site of the Dead Sea Scrolls.

By the time of the Targums, Judaism had gone through centuries of development and change – and it was still evolving. In fact, the orthodoxy of the post-biblical period demanded fresh readings even of the scriptures themselves. Indeed, the text of the Tanakh still raises theological problems – in particular in those places where the text anthropomorphizes God (Yahweh); for example, there is God’s promise to Moses in Exodus 33:14

    And he [The Lord] said, My presence shall go with thee, and I will give thee rest.

In the Targums and in the Talmudic literature, the writers are careful to avoid language which plunks Yahweh down into the physical world. The Targums tackle this problem head on. They introduce a new force in Jewish religious writing, the Shekinah. This noun is derived from the Hebrew verb shakan which means “to dwell” and the noun form Shekinah is translated as “The one who dwells” or more insistently as “The one who indwells”; it can refer both to the way God’s spirit can inhabit a believer and to the way God can occupy a physical location.

The verb “indwell” is also often used in English language discussions and writing about the Judaic Shekinah and about the Christian Holy Spirit. This word goes back to Middle English and it was rescued from obsolescence by John Wycliffe, the 14th century reformer who was the first to translate the Bible into English from the Latin Vulgate of St. Jerome. It is popular today in prayers and in Calls to Worship in Protestant churches in the US; e.g.

    “May your Holy Spirit surround and indwell this congregation now and forevermore.”

In the Targums, the term Shekinah is systematically applied as a substitute for names of God to indicate that the reference is not to God himself; rather the reference is shifted to this agency, aspect, emanation, viz. the Shekinah. Put simply, when the original Hebrew text says “God did this”, the Targum will say something like “The Shekinah did this” or “The Lord’s Shekinah did this.”

In his study Targum and Testament, in analyzing the example of Exodus 33:14 above, Martin McNamara translates the Neofiti Targum’s version of the text this way:

    “The glory of my Shekinah will accompany you and will prepare a resting place for you.”

Here is an example given in the Jewish Encyclopedia (click HERE) involving Noah’s son Japheth. In Genesis 9:27, we have

    May God extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth.

In the Onkelos Targum, the Hebrew term for God, “Elohim,” is replaced by “the Lord’s Shekinah” and the paraphrase of the meturgeman becomes (roughly)

    “May the Lord’s Shekinah extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth.”

For a 16th century French woodcut that depicts this son of Noah, click HERE.

Another function of the Shekinah is to represent the presence of God in holy places. In Jewish tradition, the Spirit of God occupied a special location in the First Temple. This traces back to Exodus 25:8 where it is written

    And let them make Me a sanctuary; that I may dwell among them.

Again following the Jewish Encyclopedia’s analysis, the Onkelos Targum paraphrases this declaration of Yahweh’s as

    “And they shall make before Me a sanctuary and I shall cause My Shekinah to dwell among them.”

This sanctuary will become the Temple built by Solomon in Jerusalem. Indeed, in many instances, the Temple is called the “House of the Shekinah” in the Targums.

Jesus was often addressed as “Rabbi” in the New Testament; in John 3:2, the Pharisee Nicodemus says “Rabbi, we know that you are a teacher who has come from God”. Today, the influence of Midrash and the Targums on Jesus’ teachings and on the language of the four Gospels is a rich area of research. Given that the historical Jesus was an Aramaic speaker, the Targums clearly would be a natural source for homilies and a natural methodology for Jesus to employ.

Let us recapitulate and try to connect some threads in the Jewish literature that lead up to the Christian Holy Spirit: The Essene view of the Holy Spirit as in-dwelling is very consistent with the Shekinah and with the Christian Holy Spirit. The Essene personification of Wisdom as a precursor to the Holy Spirit is as well. The Targumatic and Talmudic view of the indwelling Shekinah as the manifestation of God in the physical world has much in common with the Christian view of the Holy Spirit; in point of fact, this is the primary role of the Holy Spirit as taught in Sunday Schools and in Parochial Schools.

One more thing: the term “Holy Spirit” itself only appears three times in the Tanakh and there it is used much in the way Shekinah is employed in the rabbinical sources. The term is used often, however, by the Essenes in the Dead Sea Scrolls. Then it appears in the Talmudic literature where it is associated with prophecy as in Christianity: in Peter’s 2nd Epistle (1:21) we have

    For the prophecy came not in old time by the will of man: but holy men of God spake as they were moved by the Holy Ghost.

Moreover, in Talmudic writings, the terms Holy Spirit and Shekinah eventually became interchangeable – for scholarship, consult Raphael Patai, The Hebrew Goddess.

What is also interesting is that in Hebrew the words Wisdom (chokmâh), Spirit (ruach) and Shekinah are all feminine. However, grammatical gender is not the same as biological gender – a flagrant example is that in German “young girl” is neuter (das Mädchen). So, we could not infer from grammatical gender alone that the Holy Spirit was a female force.

However, in the Wisdom Literature, Wisdom/Sophia is a female figure. Following Patai again, one can add to that an assertion by Philo of Alexandria, the Hellenized Jewish philosopher who lived in the first half of the 1st century: in his work On the Cherubim, Philo flatly states that God is the husband of Wisdom. Moreover, in the Talmud and the Kabbalah, the Shekinah has a female identity which is developed to the point that in the late medieval Kabbalah, the Shekinah becomes a full-fledged female deity.

So there are two female threads, Wisdom/Sophia and the Shekinah, coming from the pre-Christian Jewish literature that are both strongly identified with the Holy Spirit of nascent Christianity, identifications which persisted as Christianity spread throughout the Roman Empire. Wisdom/Sophia of the Wisdom Literature looks to be an importation from the Greek culture which dominated the Eastern Mediterranean Sea in the Hellenistic Age, an importation beginning at the end of the biblical era in Judaism. The Shekinah, though, only appears in the Talmudic and Targumatic literatures. So the next question is what is the origin of the concept of the Shekinah. And then, are the Shekinah and Wisdom/Sophia interconnected? Mystères. More to come.

Esther, Trump and Blasphemy

Israeli Prime Minister Benjamin Netanyahu recently (March 21, 2019) asserted that Donald Trump’s support for Israeli annexation of the Golan Heights has anti-Iranian biblical antecedents. The annexation would strengthen Israel’s position vis-à-vis pro-Iranian forces in Syria. Calling it a “Purim Miracle,” Netanyahu cited the Book of Esther where purportedly Jews killed Persians in what is today Iran rather than the other way around as the Persian vizier Haman planned. This came to pass thanks to Esther’s finding favor with the Persian King named Ahasuerus, the ruler considered today by scholars to be Xerxes, the grandson of King Cyrus the Great; in need of a new queen, Xerxes selected the Jewish orphan Esther as the most beautiful of all the young women in his empire. High-ranking government officials like Mike Pence and Mike Pompeo rally to the idea that Trump was created by God to save the Jewish people. Indeed, BaptistNews.com reports that, when asked about it in an interview with the Christian Broadcasting Network, Secretary of State Pompeo said it is possible that God raised up President Trump, just as He had Esther, to help save the Jewish people from the menace of Iran, as Persia is known today. Pompeo added “I am confident that the Lord is at work here.”

However, the account in the Book of Esther is contested by scholars since there are no historical records to back up the biblical story; and the Cinderella elements in the narrative should require outside verification. Moreover, the Book of Esther itself is not considered canonical by many (Martin Luther among them) and parts of it are excluded from the Protestant Bible. These are details though – the key point is that the Feast of Purim is an important event, celebrating as it does the special relationship between God and His Chosen People; bringing Donald Trump into this is simply blasphemy.

For its part, the reign of Xerxes is well documented – he was the Persian invader of Greece whose forces prevailed at Thermopylae but were then defeated at the Battle of Salamis. According to Herodotus, Xerxes watched battles perched on a great throne and, at Thermopylae he “thrice leaped from the throne on which he sat in terror for his army.”

Moreover, there is well-documented history where the Persians came to the aid of the Jews. Some background: the First Temple in Jerusalem was built during the reign of King Solomon (970-931 BC); the Temple was destroyed by the Babylonians in 586 BC and a large portion of the population was exiled to Babylon. With the conquest of Babylon in 539 BC by the Persian King Cyrus the Great, the Babylonian Captivity came to an end, Jews returned to Jerusalem and began the construction of the Second Temple, which was completed in 515 BC. It was also Cyrus himself who urged the rebuilding of the Temple and, for his efforts on behalf of the Jews, Cyrus is the only non-Jew considered a Messiah in the Hebrew Bible (Isaiah 45:1). So here we have a reason for Israelis and Iranians to celebrate history they share.

The Third Person II: The Wisdom Literature

The rabbinical term for the Hebrew Bible is the Tanakh; the term was introduced in the Middle Ages and is an acronym drawn from the Hebrew names for the three sections of the canonical Jewish scriptures: Torah (Teachings), Neviim (Prophets) and Ketuvim (Writings). The standard Hebrew text of the Tanakh was compiled in the Middle East in the Middle Ages and is known as the Masoretic Text (from the Hebrew word for tradition).

Since the Holy Spirit does not appear in the Tanakh as a standalone actor and is only alluded to there three times, the question arises whether the Holy Spirit plays a role in other pre-Christian Jewish sources. Mystère.

In 1947, Bedouin lads discovered the first of the Dead Sea Scrolls in a cave (which one of them had fallen into) at the site of Qumran on the West Bank of the River Jordan near the Dead Sea. These texts were compiled in the centuries just before the Christian era by a monastic Jewish group called the Essenes and they include copies of parts of the Tanakh. On the other hand, the texts of the Dead Sea Scrolls also include non-biblical documents detailing the way of life of the Essenes and their special beliefs. In the scrolls, there are multiple mentions of the Holy Spirit and the role of the Holy Spirit there has much in common with the later Christian concept: the Essenes believed themselves to be holy because the Holy Spirit dwelt within each of them; indeed, from a scroll The Community Rule, we learn that each member of the group had first to be made pure by the Holy Spirit. Another interesting intersection with early Christianity is that the Essenes celebrated the annual renewal of their covenant with God at the Jewish harvest feast of Shavuot which also commemorates the day when God gave the Torah to the Israelites establishing the Mosaic covenant. The holiday takes place fifty days after Passover; in the Greek of the New Testament, this feast is called Pentecost and it is at that celebration that the Holy Spirit establishes a covenant with the Apostles.

Wisdom, aka Holy Wisdom, emerges as a concept and guiding principle in the late Biblical period. Wisdom is identified with the Christian Holy Spirit, for example, through the Seven Pillars of Wisdom; thus, after relocating to North America and leaping ahead many centuries to 1885 and the Baltimore Catechism, it is Isaiah 11:2 that provides the answer to Question 177:

Q. Which are the gifts of the Holy Ghost?
A. The gifts of the Holy Ghost are Wisdom, Understanding, Counsel, Fortitude, Knowledge, Piety and Fear of the Lord.

Isaiah 11:3 and Proverbs 9:10 make it clear that, among these Seven Pillars of Wisdom, Fear of the Lord is the most fundamental – for once, we can’t blame this sort of thing on Catholics and Calvinists!

Wisdom as a personification plays an important role in the Wisdom Literature, a role that also links pre-Christian Jewish writings to the Christian Holy Spirit. The Book of Proverbs, itself in the Tanakh, is part of this literature. But there is something surprising going on: in Hebrew grammar, the gender of the word for Wisdom, Chokmâh, is feminine; in the Wisdom Literature, Wisdom is feminine not only grammatically but as a personified female figure as well. Indeed, in Chapter 8 of Proverbs, Wisdom puts “forth her voice”

1 Doth not wisdom cry? And understanding put forth her voice?
2 She standeth in the top of high places, by the way in the places of the paths.
3 She crieth at the gates, at the entry of the city, at the coming in at the doors.
4 Unto you, O men, I call; and my voice is to the sons of man.

and declaims that she was there before the Creation

22 The LORD possessed me in the beginning of his way, before his works of old.
23 I was set up from everlasting, from the beginning, or ever the earth was.
24 When there were no depths, I was brought forth; when there were no fountains abounding with water.
25 Before the mountains were settled, before the hills was I brought forth.

So the author of this part of the Book of Proverbs (4th century BC) clearly sees Wisdom as a kind of goddess.

The Tanakh, which corresponds basically to the Protestant Old Testament, excludes Wisdom books that are included in the Catholic Bible such as the Book of Sirach (aka Book of Ecclesiasticus) and the Wisdom of Solomon (aka Book of Wisdom). These texts, however, develop this “goddess” theme further.

The Book of Sirach, which dates from the late 2nd century BC, has these verses in the very first chapter where this preternatural female note is struck cleanly:

5 To whom has wisdom’s root been revealed? Who knows her subtleties?
6 There is but one, wise and truly awe-inspiring, seated upon his throne:
7 It is the LORD; he created her, has seen her and taken note of her.

The meme of Wisdom as a feminine, goddess-like being also occurs in Psalm 155, one of the Five Apocryphal Psalms of David, texts which date from the pre-Christian era:

5 For it is to make known the glory of Yahweh that wisdom has been given;
6 and it is for recounting his many deeds, that she has been revealed to humans:


12 From the gates of the righteous her voice is heard, and her song from the assembly of the pious.
13 When they eat until they are full, she is mentioned, and when they drink in community

What is more, this theme also appears in the Essene Wisdom texts of the Dead Sea Scrolls, e.g. the Great Psalms Scroll and Scroll 4Q425. In fact, the latter begins with a poem to Wisdom in the idiom of the Beatitudes

    “Blessed are those who hold to Wisdom’s precepts
and do not hold to the ways of iniquity….
Blessed are those who rejoice in her…
Blessed are those who seek her …. “

Then too, the word for “Wisdom” in the Greek of the Wisdom of Solomon is “Sophia”, the name for a mythological female figure and a central female concept in Stoicism and in Greek philosophy more generally. Indeed, “philosophy” itself means “love of Sophia.” For a 2nd century statue of Sophia, click HERE. For a painting by Veronese, click HERE.

Most dramatically, Chapter 8 of the Wisdom of Solomon begins

1 Wisdom reacheth from one end to another mightily: and sweetly doth she order all things.
2 I loved her, and sought her out from my youth, I desired to make her my spouse, and I was a lover of her beauty.
3 In that she is conversant with God, she magnifieth her nobility: yea, the Lord of all things himself loved her.
4 For she is privy to the mysteries of the knowledge of God, and a lover of his works.

So these sources all imply that the Holy Spirit derives from a female precedent.

What is more, in Christian Gnosticism, Sophia becomes both the Bride of Christ and the Holy Spirit of the Holy Trinity. This movement denied the virgin birth on the one hand and taught that the Holy Spirit was female on the other. The Gnostic Gospel of St. Philip is one of the texts found in 1945 at Nag Hammadi in Egypt; the manuscript itself dates from the 2nd century. In this Gospel, the statement of the Angel of the Lord to Joseph in Matthew 1:20

    “the child conceived in her is from the Holy Spirit”

is turned on its head by the argument that this is impossible because the Holy Spirit is female:

    “Some said Mary became pregnant by the holy spirit. They are wrong and do not know what they are saying. When did a woman ever get pregnant by a woman?”

Not surprisingly, Gnosticism was branded as heretical by stalwart defenders of orthodoxy such as Tertullian and Irenaeus. Tertullian, “the Father of Western Theology,” was the first Christian author known to use the Latin term “Trinitas” for the Triune Christian God – naturally, the thought of a female Third Person was anathema to him, leading as it would to a blasphemous ménage à trois.

In the Hebrew language literature, Wisdom/Sophia as a personification enters the Tanakh relatively late in the game in the Book of Proverbs and then somewhat later in the Apocrypha and in the Dead Sea Scrolls. She appears as Sophia in the Greek language Wisdom of Solomon of the late 1st century BC; so Wisdom/Sophia appears to be an influence from the Hellenistic world and its Greek language, religion and philosophy. But why does this begin to happen at the end of the Biblical period? Is Wisdom/Sophia filling a vacuum that was somehow created in Jewish religious life? But the literature search does not end here. In the centuries around the birth of Christianity there also emerged rabbinical practices such as Midrash and writings such as the Targums and the Jerusalem Talmud. Are further threads leading to the Holy Spirit of Christianity to be found there? Further examples of links to a female deity? Links to Wisdom/Sophia herself? Mystères. More to come.

The Third Person I : The Holy Spirit

To Christians, the Holy Spirit (once known as the Holy Ghost in the English-speaking world) is the Third Person of the Holy Trinity, along with God the Father and God the Son.
Indeed for Protestants and Catholics, the Nicene Creed reads
    “We believe in the Holy Spirit, the Lord, the giver of life,
who proceeds from the Father and the Son,
who with the Father and the Son is worshiped and glorified,
who has spoken through the prophets.”
Or more simply in the Apostles’ Creed
    ” I believe in the Holy Spirit.”
The phrase “and the Son” does not appear in all versions of the Nicene Creed, and it was a key factor in the Great Schism of 1054 AD that separated the Greek Orthodox Church from the Roman Catholic Church. A mere twenty years later, in another break with the Orthodox Church, Pope Gregory VII instituted the requirement of celibacy for Catholic priests. It makes one think that, had the schism not taken place, the Catholic Church would not have made that move from an all-male priesthood to the celibate all-male priesthood which is plaguing it today.
The earliest Christian texts are the Epistles of St. Paul: his first Epistle dates from AD 50, while the earliest gospel (that of St. Mark) dates from AD 66-70. However, since the epistles were written after the events described in the gospels, they come later in editions of the New Testament.
Paul wrote that first epistle, known as 1 Thessalonians, in Greek to converted Jews of the diaspora and the other new Christians in the Macedonian city of Thessaloniki (Salonica) on the Aegean Sea. Boldly, at the very beginning of the letter, Paul lays out the doctrine of the Holy Trinity: in the New Revised Standard Version, we have
      1 To the church of the Thessalonians in God the Father and the Lord Jesus Christ: Grace to you and peace.
     2 We always give thanks to God for all of you and mention you in our prayers, constantly
    3 remembering before our God and Father your work of faith and labor of love and steadfastness of hope in our Lord Jesus Christ.
    4 For we know, brothers and sisters beloved by God, that he has chosen you,
    5 because our message of the gospel came to you not in word only, but also in power and in the Holy Spirit and with full conviction; just as you know what kind of persons we proved to be among you for your sake.
    6 And you became imitators of us and of the Lord, for in spite of persecution you received the word with joy inspired by the Holy Spirit.
Judaism is famously monotheistic and we understand that when Paul refers to “God” in his epistle, he is referring to Yahweh, the God of Judaism – “God the Father” to Christians. Likewise, the reference to “Jesus Christ” is clear – Jesus was a historical figure. But Paul also expected people in these congregations to understand his reference to the Holy Spirit and to the power associated with the Holy Spirit.
So who was in these congregations Paul was writing to, and how would they understand what he was trying to say? Mystère.
By the time of Christ, Jews had long established enclaves in many cities around the Mediterranean including, famously, Alexandria, Corinth, Athens, Tarsus, Antioch and Rome itself. Under Julius Caesar, Judaism was declared to be a recognized religion, religio licita, which formalized its status in the Empire. Many Jews like Paul himself were Roman citizens. In Augustus’ time, the Jews of Rome even made it into the writings of Horace, one of the leading lights of the Golden Age of Latin Literature: for one thing, he chides the Jews of Rome for being insistent in their attempts at converting pagans – something that sounds unusual today, but the case has been made that proselytism is a natural characteristic of monotheism, which makes sense when you think about it.
Estimates for the Jewish share of the population of the Roman Empire at the time of Christ range from 5% to 10% – which is most impressive. (For a cliometric analysis of this diaspora and of early Christianity, see Cities of God by Rodney Stark.) These Hellenized, Greek-speaking Jews used Hebrew for religious services and readings. Their presence across the Roman Empire was to prove critical to the spread of Christianity during the Pax Romana, a spread so rapid that already in 64 A.D. Nero blamed the Christians for the fire that destroyed much of Rome – a fire that, by some accounts, he himself had ordered.
During the Hellenistic Period, the three centuries preceding the Christian era, Alexandria, in particular, became a great center of Jewish culture and learning – there the first five books of the Hebrew Bible were translated into Greek (independently and identically by 70 different scholars according to the Babylonian Talmud) yielding the Septuagint and creating en passant the Greek neologism diaspora as the term for the dispersion of the Jewish people. Throughout the Mediterranean world, the Jewish people’s place of worship became the synagogue (a Greek word meaning assembly).
In fact, Greek became the lingua franca of the Roman Empire itself. St. Paul even wrote his Epistle to the Romans in Greek. The emperor Marcus Aurelius wrote the twelve books of his Meditations in Greek; Julius Caesar and Mark Antony wooed Cleopatra in Greek. In Shakespeare’s Julius Caesar, the senator and conspirator Casca reports that Cicero addressed the crowd in Greek, adding that he himself did not understand the great orator because “it was Greek to me” – here Shakespeare is putting us on because Plutarch reports that Casca did indeed speak Greek. As for Cicero, for once neither defending a political thug (e.g. Milo) nor attacking one (e.g. Catiline), he delivered his great oration on behalf of a liberal education, the Pro Archia, to gain Roman citizenship for his personal tutor, Archias, a Greek from Antioch.
The spread of Christianity in the Greek-speaking world was spearheaded by St. Paul, as attested by his Epistles and by the Acts of the Apostles. Indeed, Paul’s strategy in a new city was first to preach in synagogues. Although St. Paul referred to himself as the Apostle to the Gentiles, he might better be called the Apostle to the Urban Hellenized Jews, Jews like himself. Where he did prove himself an apostle to the Gentiles was when Paul, in opposition to some of the original apostles, declared that Christians did not have to follow Jewish dietary laws and that Christians need not observe the Semitic tribal practice of male circumcision; Islam, which also originated in the Semitic world, enforces both dietary laws and male circumcision.
The theology of the Holy Spirit is called pneumatology, from pneuma, the Greek word for spirit (or breath or wind); pneuma is the Septuagint’s translation of the Hebrew ruach. Scholars consider pneumatology central to Paul’s thinking.
In fact, Paul refers to the Holy Spirit time and again, and in his Epistles the Holy Spirit is an independent force; the same applies to the Gospels: the Holy Spirit is a participant at the Annunciation, at the baptism of Christ, at the Temptation of Christ. In the Acts of the Apostles, it is the Holy Spirit who descends on the Apostles in the form of tongues of fire when they are gathered in Jerusalem for the Jewish harvest feast of Shavuot, which takes place fifty days after Passover; in the Greek of the New Testament, this feast is called Pentecost (meaning “fifty”) and it is at this celebration that the Holy Spirit gives the Apostles the Gift of Tongues (meaning “languages”) and launches them on their careers as fishers of men.
For a Renaissance painting of the baptism of Jesus with the Holy Spirit present in the form of a dove, a work of Andrea del Verrocchio and his student Leonardo da Vinci, click HERE. According to the father of art history, Giorgio Vasari, after this composition Verrocchio resolved never to paint again for his pupil had far surpassed him! While we’re dropping names, click HERE for Caravaggio’s depiction of Paul fallen from his horse after Jesus revealed Himself to him on the Road to Damascus.
Although important in the New Testament, reference to the “Holy Spirit” only occurs three times in the Old Testament and it is never used as a standalone noun phrase as it is in the New Testament; instead, it is used with possessive pronouns that refer to Yahweh such as “His Holy Spirit” (Isaiah 63:10,11) and “Thy Holy Spirit” (Psalms 51:11); for example, in the King James Bible, this last verse reads
    Cast me not away from thy presence; and take not thy holy spirit from me.
This is key – from the outset, in Christianity, the Holy Spirit is autonomous, part of the Godhead, not just a messenger of God such as an angel would be. And Paul and the evangelists assume that their readers know what they are writing about; they don’t go into long explanations to explain who the Holy Spirit is or where the Holy Spirit is coming from.
The monotheism of Judaism has a place only for Yahweh, the God of the chosen people. But Christianity and its theology started in the Jewish world of the first century A.D.; so the concept of the Holy Spirit must have its roots in that world even though it is not there in Biblical Judaism. Jewish religious culture was as dynamic as ever in the post-Biblical period leading up to the birth of Christianity and beyond. New texts were written in Aramaic as well as in Greek and in Hebrew, creating a significant body of work.
So the place to start to look for the origin of the Holy Spirit is in the post-Biblical literature of Judaism. More to come.

Liberal Semantics

The word “liberal” originated in Latin, then made its way into French and from there into English. The Oxford English Dictionary gives this as its primary definition
“Willing to respect or accept behaviour or opinions different from one’s own; open to new ideas.”
However, it also has a political usage as in “the liberal senator from Massachusetts.” This meaning and usage must be relatively new: for one thing, we know that “liberal” was not given a political connotation by Dr. Samuel Johnson in his celebrated dictionary of 1755:
    Liberal, adj. [liberalis, Latin, libėral, French]
1. Not mean; not low in birth; not low in mind.
2. Becoming a gentleman.
3. Munificent; generous; bountiful; not parcimonious.
So when did the good word take on that political connotation? Mystère.
We owe the attribution of a political meaning to the word to the Scottish Enlightenment and two of its leading lights, the historian William Robertson and the political economist Adam Smith. Robertson and Smith were friends and correspondents; they used “liberal” to refer to a society with safeguards for private property and an economy based on market capitalism and free-trade. Robertson is given priority today for using it this way in his 1769 book The History of the Reign of the Emperor Charles V. On the other hand, many in the US follow the lead of conservative icon Friedrich Hayek, who credited Smith based on the fact that the term appears in The Wealth of Nations (1776); Hayek wrote The Road to Serfdom (1944), a seminal work arguing that economic freedom is a prerequisite for individual liberty.
Today, the related term “classical liberalism” is applied to the philosophy of John Locke (1632-1704) and he is often referred to as the “father of liberalism.” His defense of individual liberty, his opposition to absolute monarchy, his insistence on separation of church and state, and his analysis of the role of “the social contract” provided the U.S. founding fathers with philosophical tools crucial for the Declaration of Independence, the Articles of Confederation and ultimately the Constitution. It is this classical liberalism that also inspired Simon Bolivar, Bernardo O’Higgins and other liberators of Latin America.
In the early 19th century, the Whig and Tory parties were dominant in the English parliament. Something revolutionary happened when the Whigs engineered the passage of the Reform Act of 1832, an important step toward making the U.K. a democracy in the modern sense of the term. According to historians, this began the peaceful transfer of power from the landed aristocracy to the emergent bourgeois class of merchants and industrialists. It also coincided with the end of the Romantic Movement, the era of the magical poetry of Keats and Shelley, and led into the Victorian Period and the well-intentioned poetry of Arnold and Tennyson.
Since no good deed goes unpunished (especially in politics), passage of the Reform Act of 1832 also led to the demise of the Whig Party: the admission of the propertied middle class into the electorate and into the House of Commons itself split the Whigs and the new Liberal Party emerged. The Liberal Party was a powerful force in English political life into the 20th century. Throughout, the party’s hallmark was its stance on individual liberties, free-markets and free-trade.
Accordingly, in the latter part of the 19th century in Europe and the US, the term “liberalism” came to mean commitment to individual freedoms (in the spirit of Locke) together with support of free-market capitalism mixed in with social Darwinism. Small government became a goal: “That government is best that governs least” to steal a line from Henry David Thoreau.
Resistance to laissez-faire capitalism developed and led to movements like socialism and labor unions. In the US, social inequality also fueled populist movements such as that led by William Jennings Bryan, the champion of Free Silver and other causes. Bryan, a brilliant orator, was celebrated for his “Cross of Gold” speech, an attack on the gold standard, in which he intoned
    “you shall not crucify mankind upon a cross of gold.”
He was a national figure for many years and ran for President on the Democratic ticket three times; he earned multiple nicknames such as The Fundamentalist Pope, the Boy Orator of the Platte, The Silver Knight of the West and the Great Commoner.
At the turn of the century in the US, public intellectuals like John Dewey began to criticize the basis of laissez-faire liberalism as too individualistic and too threatening to an egalitarian society. President Theodore Roosevelt joined the fray, led the “progressive” movement, initiated “trust-busting” and began regulatory constraints to rein in big business. The Sixteenth Amendment, which authorized a progressive income tax, made it through Congress and the state legislatures during this presidency.
At this time, the meaning of the word “liberal” took on its modern political meaning: “liberal” and “liberalism” came to refer to the non-socialist, non-communist political left – a position that both defends market capitalism and supports infrastructure investment and social programs that benefit large swaths of the population; in Europe the corresponding phenomenon is Social Democracy, though the Social Democrats tend to be more to the left and stronger supporters of the social safety net, not far from the people who call themselves “democratic socialists” in the US today.
On the other hand, the 19th century meaning of “liberalism” has been taken on by the term “neo-liberalism” which is used to designate aggressive free-market capitalism in the age of globalization.
In the first term of Woodrow Wilson’s presidency, Congress passed the Clayton Anti-Trust Act as well as legislation establishing the Federal Reserve System and the progressive income tax. Wilson is thus credited with being the founder of the modern Democratic Party’s liberalism – this despite his anti-immigrant stance, his anti-Catholic stance and his notoriously racist anti-African-American stance.
The great political achievement of the era was the 19th Amendment which established the right of women to vote. The movement had to overcome entrenched resistance, finally securing the support of Woodrow Wilson and getting the necessary votes in Congress in 1919. Perhaps, it is this that has earned Wilson his standing in the ranks of Democratic Party liberals.
Bryan, for his part a strong supporter of Wilson and his liberal agenda in the 1912 election, then served as Wilson’s first Secretary of State, resigning over the handling of the Lusitania sinking. His reputation has suffered over the years because of his humiliating battle with Clarence Darrow in the Scopes “Monkey” Trial of 1925 (Fredric March and Spencer Tracy resp. in “Inherit the Wind”); at the trial, religious fundamentalist Bryan argued against teaching human evolution in public schools. It is likely this has kept him off the list of heroes of liberal politics in the US, especially given that this motion picture, a Stanley Kramer “message film,” was an allegory about the McCarthy era witch-hunts. Speaking of allegories, a good case can be made that the Wizard of Oz is an allegory about the populist movement and the Cowardly Lion represents Bryan himself – note, for one thing, that in L. Frank Baum’s book Dorothy wears Silver Shoes and not Ruby Slippers!
The truly great American liberal was FDR whose mission it was to save capitalism from itself by enacting social programs called for by socialist and labor groups and by setting up regulations and guard rails for business and markets. The New Deal programs provided jobs and funded projects that seeded future economic growth; the regulations forced capitalism to deal with its problem of cyclical crises, panics and depressions. He called for a “bank holiday,” kept the country more or less on the gold standard by issuing an executive order to buy up nearly all the privately held gold in the country (hard to believe today), began Social Security and unemployment insurance, instituted centralized controls for industry, launched major public works projects (from the Lincoln Tunnel to the Grand Coulee Dam), brought electricity to farms, archived the nation’s folk music and folklore, sponsored projects which brought live theater to millions (launching the careers of Arthur Miller, Orson Welles, Elia Kazan and many others) and more. This was certainly not a time of government shutdowns.
In the post WWII period and into the 1960s, there were even “liberal Republicans” such as Jacob Javits and Nelson Rockefeller; today “liberal Republican” is an oxymoron. The most daring of the liberal Republicans was Earl Warren, the one-time Governor of California who in 1953 became Chief Justice of the Supreme Court. In that role, Warren created the modern activist court, stepping in to achieve justice for minorities, an imperative which the President and the Congress were too cowardly to take on. But his legacy of judicial activism has led to a politicized Supreme Court with liberals on the losing side in today’s run of 5-4 decisions.
Modern day liberalism in the U.S. is also exemplified by LBJ’s Great Society which instituted Medicare and Medicaid and which turned goals of the Civil Rights Movement into law with the Civil Rights Act of 1964 and the Voting Rights Act of 1965.
JFK and LBJ were slow to rally to the cause of the Civil Rights Movement (Eleanor Roosevelt was the great liberal champion of civil rights) but in the end they did. Richard Nixon and the Republicans then exploited anti-African-American resentment in the once Democratic “solid South” and implemented their “Southern strategy” which, as LBJ feared, has turned those states solidly Republican ever since. The liberals’ political clout was also gravely wounded by the ebbing of the power of once mighty labor unions across the heartland of the country. Further, the conservative movement was energized by the involvement of ideologues with deep pockets like the Koch brothers and by the emergence of charismatic candidates like Ronald Reagan. The end result has been that only the West Coast and the Northeast can be counted on to elect liberal candidates consistently, places like San Francisco and Brooklyn.
What is more, liberal politicians have lost their sense of mission and have failed America in many ways since that time as they have moved further and further to the right in the wake of electoral defeats, cozying up to Wall Street along the way. For example, it was Bill Clinton who signed the bill repealing the Glass-Steagall Act undoing one of the cornerstones of the New Deal; he signed the bill annulling the Aid to Families With Dependent Children Act which also went back to the New Deal; he signed the bill that has made the US the incarceration capital of the world, the Violent Crime Control and Law Enforcement Act.
Over the years, the venerable term “liberal” itself has been subjected to constant abuse from detractors. The list of mocking gibes includes tax-and-spend liberal, bleeding heart liberal, hopey-changey liberal, limousine liberal, Chardonnay sipping liberal, Massachusetts liberal, Hollywood liberal, … . There is even a book of such insults and a web site for coming up with new ones. And there was the humiliating defeat of the liberal standard bearer Hillary Clinton in 2016.
So battered is it today that “liberal” is giving way to “progressive,” the label of choice for so many of the men and women of the class of 2018 of the House of Representatives. Perhaps, one hundred years is the limit to the shelf life of a major American political label which would mean “liberal” has reached the end of the line – time to give it a rest and go back to Samuel Johnson’s definition?

Conservative Semantics

Conservatism as a political philosophy traces its roots to the late 18th century: its intellectual leaders were the Anglo-Irish Member of Parliament Edmund Burke and the Scottish economist and philosopher Adam Smith.

In his speeches and writings, Burke extolled tradition, the “natural law” and “natural rights”; he championed social hierarchy, an established church, gradual social change and free markets; he excoriated the French Revolution in his influential pamphlet Reflections on the Revolution in France, a defense of monarchy and the institutions that protect good social order.

Burke is also well known in the U.S. for his support for the colonists in the period before the American Revolution notably in his Speech on Conciliation with the Colonies (1775) where he alerts Parliament to the “fierce spirit of liberty” that characterizes Americans.

Adam Smith, a giant figure of the Scottish Enlightenment, was the first great intellectual champion of laissez-faire capitalism and author of the classic The Wealth of Nations (1776).

Burke and Smith formed a mutual admiration society. According to a biographer of Burke, Smith thought that “on subjects of political economy, [Burke] was the only man who, without communication, thought on these subjects exactly as he did”; Burke, for his part, called Smith’s opus “perhaps the most important book ever written.” Their view of things became the standard one for conservatives throughout the 19th century and well into the 20th.

However, there is an internal inconsistency in traditional conservatism. The problem is that, in the end, laissez-faire capitalism upends the very social structures that traditional conservatism seeks to maintain. The catch-phrase of the day among pundits has become “creative destruction”; this formula, coined by the Austrian-American economist Joseph Schumpeter, captures the churning of capitalism which systematically creates new industries and new social institutions that replace the old – e.g. Sears by Amazon, an America of farmers by an America of city dwellers. Marx argued that capitalism’s failures would lead to its demise; Schumpeter argued that capitalism has more to fear from its triumphs: ineluctably the colossal success of capitalism hollows out the social institutions and mores which nurture capitalism such as church-going and the Protestant Ethic itself. Look at Western Europe today where capitalism is triumphant but where church attendance is reduced to three events: “hatch, match and dispatch,” to put it the playful way Anglicans do.

The Midas touch is still very much with us: U.S. capitalism tends to transform every activity it comes upon into a money-making version of itself. Thus something once innocent and playful like college athletics has been turned into a lucrative monopoly: the NCAA rules over a network of plantations staffed by indentured workers and signs billion dollar television contracts. Health care, too, has been transformed into a money-making machine with lamentable results: Americans pay twice as much for doctors’ care and prescription drugs as those in other advanced industrialized countries and the outcomes are grim in comparison – infant mortality and death in childbirth are off the charts in the U.S. and life expectancy is low compared to those other countries.

On the other hand, a modern capitalist economy can work well for its citizens. We have the examples of Scandinavia and of countries like Japan and Germany. Economists like Thomas Piketty write about the “thirty glorious” years after 1945 when post WWII capitalism built up a solid, prosperous middle class in Western Europe. Add to this what is known as the “French paradox” – the French drink more than Americans, smoke more, have sex more and still live some years longer. To make things worse, their cuisine is better, their work week is shorter and they take much longer vacations – one more example of how a nation can make capitalism work in the interest of its citizenry.

In American political life, in the 1930s, the label “conservative” was grabbed by forces opposed to FDR and the New Deal. Led by Senator Josiah W. Bailey of North Carolina, Democrats with some Republican support published “The Conservative Manifesto,” a document which extolled the virtues of free enterprise, limited government and the balance of power among the branches of government.

In the post-war period the standard bearer of conservatism in the U.S. was Republican Senator Robert Taft of Ohio who was anti-New-Deal, anti-union, pro-business and who, as a “fiscal conservative,” stood for reduced government spending and low taxes; he also stood for a non-interventionist foreign policy. His conservatism harked back to Burke’s ideals of community: he supported Social Security, a minimum wage, public housing and federal aid to public education.

However, the philosophy of the current “conservative” political leadership in the U.S. supports all the destructive social Darwinism of laissez-faire capitalism, reflecting the 17th century English philosopher Thomas Hobbes and his dystopian vision much more than either Burke or Smith. Contemporary “conservatism” in the U.S. is hardly traditional conservatism. What happened? Mystère.

A more formal manifesto of Burkean conservatism, The Conservative Mind, was published in 1953 by Russell Kirk, then a professor at Michigan State. But conservative thought was soon co-opted and transformed by a wealthy young Texan whose family money came from oil prospecting – in Mexico and Venezuela! William F. Buckley, like Kirk a Roman Catholic, was founder and long time editor-in-chief of the seminal conservative weekly The National Review. Buckley is credited with (or accused of) transforming traditional Burkean conservatism into what goes by the name of “conservatism” in the U.S. today; he replaced the traditional emphasis on community with his libertarian viewpoint of “individualism” and replaced Taft’s non-interventionism with an aggressive Cold War political philosophy – the struggle against godless communism became the great moral cause of the “conservative movement.” For a portrait of the man, click HERE.

To his credit, Buckley kept his distance from fringe groups such as the John Birch Society; Buckley also eschewed Ayn Rand and her hyper-individualistic, atheistic philosophy of Objectivism; a man of letters himself, Buckley was likely appalled by her wooden prose – admittedly, Russian and not English was her first language, but still she was no Vladimir Nabokov. On the other hand, Buckley had a long friendship with Norman Mailer, the literary icon from Brooklyn, the opposite of Buckley in almost every way.

Buckley as a Cold War warrior was very different from libertarians Ron Paul and Rand Paul, who both have an isolationist philosophy that opposes military intervention. On race, though, there are affinities: Buckley defended a position of white racial supremacy, and the Pauls have expressed eccentric views on race, presumably justified by their shared libertarian concept of the right of individuals to do whatever they choose to do even if it includes discrimination against others. For example, Rand Paul has stated that he would have voted against the Civil Rights Act of 1964, which outlawed the Jim Crow Laws of the segregationist states, “because of the property rights element … .”

In the 1960s, 70s and 80s, Buckley’s influence spread. The future president Ronald Reagan was weaned off New Deal Liberalism through reading The National Review; in turn Buckley became a supporter of Reagan and they appeared together on Buckley’s TV program Firing Line. The “conservative movement” was also propelled by ideologues with deep pockets and long-term vision like the Koch brothers – for an interesting history of all this, see Nancy MacLean’s Democracy in Chains.

To the Buckley conservatives today, destruction of social institutions, “creative” or otherwise, is somehow not a problem and militarism is somehow virtuous.

As for destruction, among the social structures that have fallen victim recently to creative destruction is the American middle class itself, as income inequality has grown apace. This process began at the tail end of the 1960s and has been accelerating since Ronald Reagan’s presidency as Keynesian economics has given way to “supply side” economics; moreover, the guardrails for capitalism imposed by the New Deal have come undone: the Glass-Steagall Act has been repealed, the labor movement has been marginalized, and high taxes on the wealthy have become a thing of the past – contrast this with the fact that Colonel Tom Parker, the manager of Elvis Presley, considered it his patriotic duty to keep The King in the 90% tax bracket back in the day.

As for militarism, despite VE Day and VJ Day, since the 1950s, the U.S. has been engaged in an endless sequence of wars – big (Korea) and small (Grenada), long (Vietnam) and short (the First Gulf War), visible (Afghanistan) and invisible (Niger), loud (Iraq) and quiet (Somalia), … . All of which has created a situation much like the permanent state of war of Orwell’s 1984.

Moreover, since Buckley’s time, American “conservatives” have moved even further right: reading Ayn Rand (firmly atheist and pro-choice though she was) in high school or college has become a rite of passage – ex-Speaker Paul Ryan is a prominent example. An interventionist, even war-mongering, wing of the “conservative movement” has emerged: the “neo-conservatives” or “neo-cons.” Led by Dick Cheney, they were the champions of George W. Bush’s invasion of Iraq, and they applaud all troop “surges” and new military interventions.

As David Brooks recently pointed out in his New York Times column (Nov. 16, 2018), the end of the Cold War deprived the “conservative movement” of its great moral cause, the struggle against godless communist collectivism. And what was a cause has morphed into expensive military adventurism. Indeed, the end of the Cold War failed to yield a “peace dividend” and the military budget today threatens the economic survival of the nation – the histories of France, Spain and many other countries bear witness to how this works itself out, alas! In days of yore, it would have been the fiscal restraint of people known as conservatives that kept government spending in check; today “conservative” members of Congress continue to sound like Robert Taft on the subject of government spending when attacking programs sponsored by their opponents, but they do not hesitate to drive the national debt higher by over-funding the military and pursuing tax cuts for corporations and the wealthy. Supply-side economics cleaves an ever-widening income gap, the least conservative social policy imaginable. Then too, these champions of the free market and opponents of government intervention rushed to bail out the big banks (but not the citizens whose homes were foreclosed on) during the Great Recession of 2008. All this leads one to think that this class of politicians is serving its donor class and not the working class, the middle class, the upper middle class or even much of the upper class.

Perhaps semantic rock bottom is reached when “conservative” members of Congress vote vociferously against any measure for environmental conservation. But this is predictable given the lobbying power of the fossil fuel industry, a power so impressive that even the current head of the Environmental Protection Agency is a veteran lobbyist for Big Coal. Actually, for these conservatives, climate change denial is consistent with their core beliefs: effectively fighting the effects of global warming will require large-scale government intervention, significantly increased regulation of industry and agriculture as well as binding international agreements – all of which are anathema to conservatives in the U.S. today.

Still, it is in judicial matters that the word “conservative” is most misapplied. One speaks today of “conservative majorities” on the Supreme Court, but these majorities have proved themselves all too ready to rewrite laws and overturn precedent in 5-4 decisions in an aggressive phase of judicial activism.

So for those who fear that corruption of the language is dangerous for the U.S. body politic, this is the worst of times: “liberal,” which once designated a proponent of Gilded Age laissez-faire capitalism, is now claimed by the heirs of the New Deal and the Great Society; “conservative,” which once designated a traditionalist, is now the label for radical activists both political and judicial. “Liberal” is yielding to “progressive” now. However, the word “conservative” has a certain gravitas to it and “conservatism” has taken on the trappings of a religious movement complete with patron saints like Ronald Reagan and Margaret Thatcher; “conservative” is likely to endure, self-contradictory though it has become.

The Roberts Court


In 2005, upon the death of Chief Justice Rehnquist, John Roberts was named to the position of Chief Justice by Republican president George W. Bush. Another change in Court personnel came in 2006 when Sandra Day O’Connor retired and was replaced by Justice Samuel Alito. With Roberts and Alito, the Court had an even more solid “conservative” majority than before – the result being that, more than ever, in a 5-4 decision a justice’s vote could be predicted from the party of the president who appointed him or her.

It was Ronald Reagan who named the first woman justice to the Supreme Court with the appointment of Sandra Day O’Connor in 1981. It was also Ronald Reagan who began the practice of Republican presidents’ naming ideological, conservative Roman Catholics to the Supreme Court with the appointment of Antonin Scalia in 1986. This practice on the part of Republican presidents has indeed been followed faithfully as we have to include Neil Gorsuch in this group of seven – though an Episcopalian today, Gorsuch was raised Catholic, went to parochial school and even attended the now-notorious Georgetown Prep. Just think: with Thomas and Gorsuch already seated, the Brett Kavanaugh appointment brings the number of Jesuit-trained justices on the Court up to three; this numerologically magic number of men trained by an organization famous for having its own adjective, plus the absence of true WASPs from the Supreme Court since 2010, plus the fact that all five of the current “conservative” justices have strong ties to the cabalistic Federalist Society could all make for an interesting conspiracy theory – or at least the elements of a Dan Brown novel.

It is said that Chief Justice Roberts is concerned about his legacy and does not want his Court to go down in history as ideological and “right wing.” However, this “conservative” majority has proven radical in its 5-4 decisions – decisions for which it then bears full responsibility.

They have put gun manufacturers before people, replacing the standard interpretation of the 2nd Amendment that went back to Madison’s time with a dangerous new one, cynically appealing to “originalism” while claiming the authority to speak for Madison and his contemporaries (District of Columbia v. Heller 2008).

Indeed, with Heller there was no compelling legal reason to play games with the meaning of the 2nd Amendment – if the over 200 years of interpretation of the wording of the amendment isn’t enough, if the term “militia” isn’t enough, if the term “bear arms” isn’t enough to link the amendment to matters military in the minds of the framers, one can consult James Madison’s original text:

    “The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.” [Italics added].

The italicized clause was written to reassure Quakers and other pacifist religious groups that the amendment was not forcing them to serve in the military, but it was ultimately excluded from the final version for reasons of separation of church and state. This clause certainly indicates that the entirety of the amendment, in Madison’s view, was for the purpose of maintaining militias: Quakers are not vegetarians and do use firearms for hunting. Note too that Madison implies in this text and in the shorter final text as well that “the right to bear arms” is a collective “right of the people” rather than an individual right to own firearms.

The radical ruling in Heller by the five “conservative” justices has stopped all attempts at gun control, enriched gun manufacturers, elevated the National Rifle Association to the status of a cult and made the Court complicit in the wanton killings of so many.

The “conservative” majority of justices has overturned campaign finance laws passed by Congress and signed by the President by summoning up an astonishing, ontologically challenged version of the legal fiction that corporations are “persons” and imbuing them with new First Amendment rights (Citizens United v. FEC 2010).

Corporations are treated as legal “persons” in some court matters, basically so that they can pay taxes and so that the officers of the corporation are not personally liable for a corporation’s debts. But there was no compelling legal reason to play Frankenstein in Citizens United and create a new race of corporate “persons” by endowing corporations with a human-like right to free speech that allows them to spend their unlimited money on U.S. political campaigns; this decision is the first of the Roberts Court’s rulings to make this list of all-time worst Supreme Court decisions, a list (https://blogs.findlaw.com/supreme_court/2015/10/13-worst-supreme-court-decisions-of-all-time.html ) compiled for legal professionals. It has also made TIME magazine’s list of the two worst decisions of the last 60 years and likely many other such rankings. The immediate impact of this decision has been a further widening of the gap between representatives and the people they are supposed to represent; the political class was once at least somewhat responsive to the voters, but now it is responsive only to the donor class. This likely works well for the libertarians and conservatives who boast “this is a Republic, not a Democracy.”

These same five justices have continued their work by

  • usurping Congress’ authority and undoing hard-won minority protections from the Voting Rights Act by adventuring into areas of history and politics that they clearly do not grasp and basing the decision on a disingenuous view of contemporary American race relations (Shelby County v. Holder 2013),

  • doubling down on quashing the Voting Rights Act five years later in a decision that overturned a lower court ruling that Texas’ gerrymandered redistricting map undercut the voting power of black and Hispanic voters (Texas Redistricting Case 2018),

  • breaching the separation of Church and State by ascribing “religious interests” to companies in a libertarian judgment that can justify discrimination in the name of a person’s individual freedoms – the “person” in this case being a corporation, no less (Burwell v. Hobby Lobby Stores 2014),

  • gravely wounding the labor movement by overturning the Court’s own ruling in a 1977 case, Abood v. Detroit Board of Education, thus undoing years of established Labor Law practice (Janus v. AFSCME 2018) – a move counter to the common law principle of following precedent.

These six decisions are examples of “self-inflicted wounds” on the part of the Roberts Court and can be added to a list begun by Chief Justice Charles Evans Hughes, a list that begins with Dred Scott. The recent accessions of Neil Gorsuch and Brett Kavanaugh to seats on the Court may well make for even more decisions of this kind.

This judicial activism is indeed as far away from the dictionary meaning of “conservatism” as one can get. Calling these activist judges “conservative” makes American English a form of the “Newspeak” of Orwell’s 1984. The Court can seem to revel in its arrogance and its usurpation of power: Justice Scalia would dismiss questions about Bush v. Gore with “Get over it” – a rejoinder some liken to “Let them eat cake” – and he refused to recuse himself in cases involving Dick Cheney, his longtime friend (Bush v. Gore, Cheney v. U.S. District Court).

The simple fact is that the courts have become too politicized. The recent fracas between the President and the Chief Justice where the President claimed that justices’ opinions depended on who appointed them just makes this all the more apparent.

Pundits today talk endlessly about how “we are headed for a constitutional crisis” in connection with potential proceedings to impeach the President. But we are in a permanent constitutional crisis in any case. For example, a clear majority in the country wants to undo the results of Citizens United and of Heller in particular – both decisions struck down laws enacted by elected representatives. Congressional term limits are another example: in 1995, with U.S. Term Limits Inc. v. Thornton, the Court nullified 23 state laws instituting term limits for members of Congress, thereby declaring that the Constitution would have to be amended for such laws to pass judicial review.

In the U.S., the Congress is helpless when confronted with this kind of dilemma; passing laws cannot help since the Court has already had the final say, a true Catch-22. This is an American constitutional problem, American Exceptionalism gone awry. In England and Holland, for example, the courts cannot apply judicial review to nullify a law; in France, the Conseil Constitutionnel has very limited power to declare a law unconstitutional; this was deliberately engineered by Charles de Gaulle to avoid an American-style situation because, per de Gaulle, “la [seule] cour suprême, c’est le peuple” (“The only supreme court is the people”).

So what can the citizen majority do? The only conceivable recourse is to amend the Constitution; but the Constitution itself makes that prospect dim since an amendment would require approval by a two-thirds supermajority in both houses of Congress followed by ratification by three-fourths of the states; moreover, states with barely 4% of the population account for more than one-fourth of the states, making it easy and relatively inexpensive for opposition to take hold – e.g. the Equal Rights Amendment. Also, those small, rural states’ interests can be very different from the interests of the large states – one reason reform of the Electoral College system is an impossibility with the Constitution as it is today. Only purely bureaucratic measures can survive the amendment ratification process. Technically, there is a second procedure in Article V of the Constitution whereby a Constitutional Convention can be called at the initiative of two-thirds of the states, but approval of any resulting amendment by three-fourths of the states is still required; this procedure has never been used. Doggedly, the feisty group U.S. Term Limits (USTL), the losing side in that 1995 decision, is trying to do just that! For their website, click https://www.termlimits.com/ .
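The arithmetic behind Article V’s ratification hurdle is simple enough to spell out. A minimal sketch (using today’s 50-state count, of course, not that of 1789):

```python
import math

states = 50
ratify_fraction = 3 / 4                       # Article V: three-fourths of the states
needed = math.ceil(states * ratify_fraction)  # 38 states must ratify
blockers = states - needed + 1                # so a bloc of just 13 states can block
print(needed, blockers)                       # → 38 13
```

Thirteen states can thus veto any amendment, and the thirteen least populous states hold only a few percent of the national population.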

What has happened is that the Constitution has been gamed by the executive and judicial branches of government and the end result is that the legislative branch is mostly reduced to theatrics. Thus, for example, while the Congress is supposed to have the power to make war and the power to levy tariffs, these powers have been delegated to the office of the President. Even the power to make law has, for so many purposes, been passed to the courts where every law is put through a legal maze and the courts are free to nullify the law or change its meaning invoking the interpretation du jour of the Constitution and/or overturning legal precedents, all on an as-needed basis.

This surge in the power of the judiciary was declared to be impossible by Alexander Hamilton in his Federalist 78, where he argues approvingly that the judiciary will necessarily be the weakest branch of the government under the Constitution. But oddly no one is paying attention to the rock-star founding father this time. This Federalist Paper of Hamilton’s is sacred scripture for the assertive Federalist Society, for example, yet they seem silent on this issue – not surprising, perhaps, given that they have become the gatekeepers for Republican presidents’ nominees to the Supreme Court.

Americans are simply blinded to problems with the Constitution by the endless hymns in its praise in the name of American Exceptionalism. Many in Europe also argue that the way the Constitution contributes to inaction is a principal reason that voter participation in the U.S. is far lower than that in Europe and elsewhere. Add to that the American citizen’s well-founded impression that it is the money of corporations, billionaires and super-PACs in cahoots with the lobbyists of the Military Industrial Complex, Big Agriculture, Big Pharma, Big Oil and Big Banks that runs the show and you have a surefire formula to induce voter indifference. Even the improved turnout in the 2018 midterm elections was unimpressive by international standards.

This is not a good situation; history has examples of what happens when political institutions are no longer capable of running a complex nation – the French Revolution, the fall of the Roman Republic, the rise of Fascism in post WWI Europe … .

Bush v. Gore

In 1986, when Warren Burger retired, Ronald Reagan promoted Associate Justice William Rehnquist to the position of Chief Justice and nominated Antonin Scalia to fill Rehnquist’s seat. With the later arrivals of Anthony Kennedy (1988) and Clarence Thomas (1991), the Court had a solid conservative kernel consisting of the five justices Rehnquist, Scalia, Thomas, O’Connor and Kennedy; there was also Justice John Paul Stevens (appointed by Gerald Ford), who was considered a “moderate conservative.” On occasion O’Connor or Kennedy could become a swing vote and turn things in another direction, and Stevens too voted against the conservative majority on some important decisions.
While more conservative than the Burger Court, the Rehnquist Court did not overthrow the legacy of the Warren Court; on the other hand, it promoted a policy of “New Federalism” which favored empowering the states rather than the federal government.
This philosophy was applied in two cases that weakened Roe v. Wade, the defining ruling of the Burger Court.
Thus in Webster v. Reproductive Health Services (1989), the Court upheld a Missouri law that restricted the way state funds could be used in connection with counseling and other aspects of abortion services; this ruling allowed states to legislate in ways thought to have been ruled out by Roe.
As a second example, we have their ruling in Planned Parenthood v. Casey (1992) which also weakened Roe by giving much more power to the states to control access to abortion. Thus today in states like Mississippi, there is virtually no such access. All this works against the poor and the less affluent as women need to travel far, even out of state, to get the medical attention they seek.
Then the Rehnquist Court delivered one of the most controversial, politicized decisions imaginable with its ruling in Bush v. Gore (2000). With this decision, the Court came between a state supreme court and the state’s election system and hand-delivered the presidency to Republican George W. Bush.
After this case, the Court made other decisions that generated some controversy, but in these it came down, relatively speaking, on the liberal side in ruling on anti-sodomy laws, on affirmative action and on election finance. However, Bush v. Gore is considered one of the worst Supreme Court decisions of all time. For a list that includes this decision, Dred Scott, Plessy v. Ferguson and ten others, click HERE ; for a TIME magazine piece that singles it out as one of the two worst decisions since 1960 (along with Citizens United v. FEC), click HERE .
Naturally, the 5-4 decision in Bush v. Gore by the Court’s conservative kernel is controversial because of the dramatic end it put to the 2000 presidential election. There are also legal and procedural aspects of the case that get people’s dander up.
To start there is the fact that in this decision the Court overruled a state supreme court on the matter of elections, something that the Constitution itself says should be left to the states.
For elections, Section 4 of Article 1 of the U.S. Constitution leaves the implementation to the states to carry out, in the manner they deem fit – subject to Congressional oversight but not to court oversight:
    “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing (sic) Senators.”
N.B. In the Constitution, the “Senators” are an exception because at that time the senators were chosen by the state legislatures and direct election of senators by popular vote did not come about until 1913 and the 17th Amendment.
From the time of the Constitution, voting practices have varied from state to state. In fact, at the outset free African-Americans with property could vote in Maryland and that lasted until 1810; women of property could vote in New Jersey until 1807; in both cases, the state legislatures eventually stepped in and “restored order.”
In the Constitution, it is set out that the electors of the Electoral College could be named by each state legislature without any popular vote at all – Hamilton and Madison were most fearful of “mob rule.” The only founding father who expressed some admiration for the mass of U.S. citizenry was, not surprisingly, Jefferson, who famously asserted in a letter to Lafayette that “The yeomanry of the United States are not the canaille [rabble] of Paris.”
Choosing electors by popular vote was established nationwide, however, by the 1820s; under Article II, Section 1, it is each state’s responsibility to decide how its electors are chosen; there is no requirement for uniformity. In fact, today Nebraska and Maine use congressional district voting to divide up their electors among the candidates, while all the other states use plurality voting, in which the presidential candidate with the most votes is awarded all of that state’s electoral votes. In yet another break with the plurality voting system that the U.S. inherited from England, the state of Maine now employs ranked-choice voting to elect its Congressional representatives – in fact, in 2018 a candidate in a Maine House race with fewer first-place votes but a larger total of first- and second-place votes emerged the victor in the second round of the instant runoff.
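The instant-runoff mechanics behind ranked-choice voting can be sketched in a few lines of Python. This is an illustrative sketch with invented ballots (the candidate names and vote counts below are made up, not the actual 2018 Maine returns): each round, the last-place candidate is eliminated and those ballots transfer to each voter’s next surviving choice, until someone holds a majority.

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff count: repeatedly eliminate the candidate with the
    fewest first-choice votes and transfer those ballots to each voter's
    next surviving choice, until some candidate holds a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate;
        # ballots with no surviving choices are exhausted and dropped.
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):   # strict majority of live ballots
            return leader
        candidates.remove(min(tally, key=tally.get))

# Hypothetical ballots: B trails A on first choices (7 to 8) but wins
# once C is eliminated and C's five ballots transfer to B.
ballots = [("A",)] * 8 + [("B", "A")] * 7 + [("C", "B")] * 5
print(instant_runoff(ballots))   # → B
```

The example reproduces the pattern of the 2018 Maine race described above: the first-round plurality leader loses the instant runoff once second choices are counted.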
So from a Constitutional point of view, the Supreme Court really did not have the authority to take the case Bush v. Gore on. In his dissent, Justice John Paul Stevens decried this usurpation of state court power:
    [The court displayed] “an unstated lack of confidence in the impartiality and capacity of the state judges who would make the critical decisions if the vote count were to proceed”.
Moreover, Sandra Day O’Connor said as much when some years later in 2013 she expressed regret over her role in Bush v. Gore telling the Chicago Tribune editorial board: “Maybe the court should have said, ‘We’re not going to take it, goodbye.’ ”
Taking this case on added to the Court’s history of “self-inflicted wounds,” to use the phrase Chief Justice Charles Evans Hughes applied to bad decisions that the Court had no compelling legal reason to make the way it did.
The concurring justices admitted that their decision was not truly a legal ruling but rather an ad hoc way of making a problem go away when they said that the ruling in Bush v. Gore should not be considered a precedent for future cases:
    “Our consideration is limited to the present circumstances, for the problem of equal protection in election processes generally presents many complexities.”
Another odd thing was that the ruling did not follow the usual practice of having one justice write the deciding opinion with concurring and dissenting opinions from the other justices. Instead, the majority issued a per curiam decision, a form usually reserved for a 4-4 hung court when no actual decision is being made. It is a technique for dodging responsibility for a decision and for not assigning credit for it to any particular justice – the Florida Supreme Court, for example, often issues per curiam decisions in death penalty cases.
Borrowing a trope from the Roman orator Cicero, we pass over in silence the revelation that three of the majority judges in this case had reason to recuse themselves by not mentioning the fact that Justice Thomas’ wife was very active in the Bush transition team even as the case was before the Court, by leaving out the fact that Justice Scalia’s son was employed by the very law firm that argued Bush’s case before the Court, by omitting the fact that Justice Scalia and vice-presidential candidate Dick Cheney were longtime personal friends and by skipping over the fact that according to The Wall Street Journal and Newsweek, Justice O’Connor had previously said that a Gore victory would be a disaster for her because she would not want to retire under a Democratic president!
For an image of Cicero practicing his craft before an enthralled Roman Senate, click HERE .
So we limit ourselves to quoting Harvard Professor Alan Dershowitz who summed things up this way:
    “The decision in the Florida election case may be ranked as the single most corrupt decision in Supreme Court history, because it is the only one that I know of where the majority justices decided as they did because of the personal identity and political affiliation of the litigants. This was cheating, and a violation of the judicial oath.”
Another villain in the piece is Governor Jeb Bush of Florida whose voter suppression tactics implemented by Secretary of State Katherine Harris disenfranchised a significant number of voters. In the run up to this election, according to the Brennan Center for Justice at NYU, some 4,800 eligible African-American Florida voters were wrongly identified as convicted felons and purged from the voting rolls. Given that 86% of African-American voters went for Gore over Bush in 2000, one can do the math and see that Gore would likely have won if but 20% of these African-American voters had been able to cast ballots.
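The “math” alluded to here can be made explicit. A back-of-the-envelope sketch, using the figures in the text plus one outside fact – Bush’s certified statewide margin in Florida was 537 votes – and the simplifying assumption that the non-Gore share of these hypothetical ballots would all have gone to Bush:

```python
# All inputs are from the text except the 537-vote certified margin;
# simplifying assumption: every non-Gore ballot goes to Bush.
purged = 4800        # eligible African-American voters wrongly purged
turnout = 0.20       # suppose only 20% of them had actually voted
gore_share = 0.86    # share of African-American voters who chose Gore

ballots = purged * turnout                      # 960 ballots cast
net_gore_gain = ballots * (2 * gore_share - 1)  # Gore votes minus Bush votes
print(round(net_gore_gain))                     # → 691, exceeding the 537-vote margin
```

Even under these deliberately modest assumptions, the net gain for Gore exceeds the official margin of victory.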
Yet another villain in the piece – and in Florida’s recurring election problems – is the plurality voting system the state uses to assign all of its electoral votes to the candidate who wins the most votes (but not necessarily a majority of votes). This system works poorly when elections are as tight as, again and again, they prove to be in Florida. In 2000, had Florida been using ranked-choice voting (to account for votes for Nader and Buchanan) or congressional district voting (as in Maine and Nebraska), there would have been no recount crisis at all – and either way Gore would in all probability have won enough electoral votes to secure the presidency, and the matter never would have reached the Supreme Court.
Sadly, the issues of the presidential election in Florida in 2000 are still very much with us – the clumsiness of plurality voting when elections are close, the impact of voter suppression, antiquated equipment, the role of the Secretary of State and the Governor in supervising elections, … . The plot only thickens.

The Warren Court, Part B

In the period from 1953 to 1969, Earl Warren became the most powerful Chief Justice since John Marshall as he led the Court through a dazzling series of rulings that established the judiciary as a more than equal partner in government – an outcome deemed impossible by Alexander Hamilton in his influential paper Federalist 78 and an outcome deemed undesirable for the separation of powers in government by Montesquieu in his seminal The Spirit of the Laws. The impact of this Court was so dramatic that it provoked a nationwide call among conservatives for Warren’s impeachment. (Click HERE.)
As the Cold War intensified, the historical American separation of Church and State was compromised. In the name of combating godless communism, the national motto was changed! From the earliest days of the Republic, the motto had been “E Pluribus Unum,” which is the Latin for “Out of Many, One”; this motto was adopted by an Act of Congress under the Articles of Confederation in 1782. In 1956, the official motto became “In God We Trust” and that text now appears on all U.S. paper currency. Shouldn’t they at least have asked what deist leaning Washington, Jefferson, Franklin and Hamilton would have thought before doing this – after all their pictures are on the bills?
Spurred on by the fact that the phrase “under God” appears in most versions of the Gettysburg Address,
          that this nation, under God, shall have a new birth of freedom
groups affiliated with organized religion like the Knights of Columbus successfully campaigned for Congress to insert this phrase into the Pledge of Allegiance; this was done in 1954. Interestingly, “under God” does not appear in Lincoln’s written text for the cemetery dedication speech but was recorded by listeners who were taking notes. Again, this insertion in the Pledge was justified at the time by need to rally the troops in the struggle against atheistic communism. In particular, in the Catholic Church a link was established between the apparitions of The Virgin at Fatima in the period from May to October of 1917 and the October Revolution in Russia in 1917 – though the Revolution actually took place on November 6th and 7th in the Gregorian Calendar; with the Cold War raging, the message of Fatima became to say the rosary for the conversion of Russia, a directive that was followed fervently by the laity, especially school children during the 1950s and 1960s. In addition, the “Second Secret” of Our Lady of Fatima was revealed to contain the line “If my requests are heeded, Russia will be converted, and there will be peace.” When the Soviet Union fell at the end of 1991, credit was ascribed to Ronald Reagan and other political figures; American Catholics of a certain age felt slighted indeed when their contributing effort went unrecognized by the general public!!
The course was reversed somewhat with the Warren Court’s verdict in Engel v. Vitale (1962) when the Court declared that organized school prayer violated the separation of Church and State. A second (and better known) decision followed in Abington School District v. Schempp (1963) where the Court ruled that official school Bible reading also violated the separation of Church and State. This latter case is better known in part because it involved the controversial atheist Madalyn Murray O’Hair who went on to make an unsuccessful court challenge to remove “In God We Trust” from U.S. paper currency. Ironically, the federal courts that thwarted this effort cited Abington School District where Justice Brennan’s concurring opinion explicitly stated that “the motto” was simply too woven into the fabric of American life to “present that type of involvement which the First Amendment prohibits.” In the U.S., “God” written with a capital “G” refers specifically to the Christian deity; so a critic deconstructing Brennan’s logic might argue that Brennan concedes that worship of this deity is already an established religion here.
The Warren Court also had a significant impact on other areas of rights and liberties.
With Baker v. Carr (1962) and Reynolds v. Sims (1964), the Court codified the principle of “one man, one vote.” In the Baker case, the key issue was whether state legislative redistricting was a matter for state and federal legislatures or whether it came under the authority of the courts. Here, the Court overturned its own decision in Colegrove v. Green (1946) where it ruled that such redistricting was a matter for the legislatures themselves with Justice Frankfurter declaring “Courts ought not to enter this political thicket.” The majority opinion in the Baker ruling was written by Justice Brennan; Frankfurter naturally dissented. In any case, this was a bold usurpation of authority on the part of the Supreme Court, something hard to undo even should Congress wish to do so. Again we are very far from Marbury v. Madison; were that case to come up today, one would be very surprised if the Supreme Court didn’t instruct Secretary of State Madison to install Marbury as Justice of the Peace in Washington D.C.
With Gideon v. Wainwright (1963), the Court established the accused’s right to a lawyer in state legal proceedings. This right is established for defendants vis-à-vis the federal government by the Sixth Amendment in the Bill of Rights; this case extended that protection to defendants in dealings with the individual states.
With Miranda v. Arizona (1966), it mandated protection against self-incrimination – the “Miranda rights” that a criminal suspect must be informed of. A Virginia state law banning interracial marriage was struck down as unconstitutional in Loving v. Virginia (1967), a major civil rights case on its own.
The Gideon and Miranda rulings were controversial, especially Miranda, but they do serve to protect the individual citizen from the awesome power of the State, very much in the spirit of the Bill of Rights and of the Magna Carta; behind the Loving case is an inspiring love story and, indeed, it is the subject of a recent motion picture.
Warren’s legacy is complex. On the one hand, his Court courageously addressed pressing issues of civil rights and civil liberties, issues that the legislative and executive branches would not deal with. But by going where Congress feared to tread, the delicate balance of the separation of powers among the three branches of government has been altered, irreparably it appears.
The Warren Court (1953-1969) was followed by the Burger Court (1969-1986).
Without Earl Warren, the Court quickly reverted to making decisions that went against minorities. In San Antonio Independent School District v. Rodriguez (1973), the Court held in a 5-4 decision that inequities in school funding did not violate the Constitution; the ruling implied that discrimination against the poor is perfectly compatible with the U.S. Constitution and that the right to an education is not a fundamental right. This decision was based on the fact that the right to an education does not appear in the Constitution, echoing the logic of Marbury. Later, some plaintiffs managed to side-step this ruling by appealing directly to state constitutions. We might add that all 5 justices in the majority of this 5-4 ruling were appointed by Republican presidents – a pattern that is all too common today; judicial activism fueled by political ideology is a dangerous force.
The following year in another 5-4 decision, Milliken v. Bradley (1974), the Court further weakened Brown by overturning a circuit court’s ruling. With this ruling, the Court scrapped a plan for school desegregation in the Detroit metropolitan area that involved separate school districts, thus preventing the integration of students from Detroit itself with those of adjacent suburbs like Grosse Pointe. The progressive stalwarts Marshall, Douglas and Brennan were joined by Byron White in dissent; the 5 justices in the majority were all appointed by Republican presidents. The decision cemented into place the pattern of city schools with black students and surrounding suburban schools with white students.
The most controversial decision made by the Burger Court was Roe v. Wade (1973). This ruling invoked the Due Process Clause of the 14th Amendment and established a woman’s right to privacy as a fundamental right and declared that abortion could not be subject to state regulation until the third trimester of pregnancy. Critics, including Ruth Bader Ginsburg, have found fault with the substance of the decision and its being “about a doctor’s freedom to practice his profession as he thinks best…. It wasn’t woman-centered. It was physician-centered.” A fresh attempt to overturn Roe and subsequent refinements such as Planned Parenthood v. Casey (1992) is expected, given the current ideological makeup of the conservative majority on the Court and the current Court’s propensity to overturn even recent rulings.
Today, a political dimension has been added to the overweening power of the Court: in 5-4 decisions there continues to be, with rare exceptions, a direct correlation between a justice’s vote and the party of the president who appointed that justice. To that, add the blatantly partisan political shenanigans we have seen on the part of the Senate Majority Leader in dealing with Supreme Court nominations, and add the litmus test provided by the conservative/libertarian Federalist Society. The plot thickens. Affaire à suivre.

The Warren Court, Part A

Turning to the courts when the other branches of government would not act was the technique James Otis and the colonists resorted to in the period before the American Revolution, the period when the Parliament and the Crown would not address “taxation without representation.” Like the colonists, African-Americans had to deal with a government that did not represent them. Turning to the courts to achieve racial justice and to bring about social change was then the strategy developed by the NAACP. However, for a long time, even victories in the federal courts were stymied by state level opposition. For example, Guinn v. United States (1915) put an end to one “literacy test” technique for voter suppression but substitute methods were quickly developed.
In the 1950s, the Supreme Court finally undid the post-Civil War cases in which the Court had authorized state-level suppression of the civil rights of African-Americans – e.g. the Slaughterhouse Cases (1873), the Civil Rights Cases (1883), Plessy v. Ferguson (1896); these were the Court decisions that callously rolled back the 13th, 14th and 15th Amendments to the Constitution and locked African-Americans into an appalling system.
The first chink in Plessy was made in a case brilliantly argued before the Supreme Court by Thurgood Marshall, Sweatt v. Painter (1950). The educational institution in this case was the University of Texas Law School at Austin, which at that time actually had a purportedly equal but certainly separate school for African-American law students. The Court was led by Kentuckian Fred M. Vinson, the last Chief Justice to be appointed by a Democratic president – in this case Harry Truman! Marshall exposed the law school charade for the scam that it was. Similarly and almost simultaneously, in McLaurin v. Oklahoma State Regents for Higher Education, the Court ruled that the University of Oklahoma could not enforce segregation in classrooms for PhD students. In these cases, the decisions invoked the Equal Protection Clause of the 14th Amendment; both verdicts were unanimous.
These two important victories for civil rights clearly meant that by 1950 things were starting to change, however slowly – was it World War II and the subsequent integration of the military? Was it Jackie Robinson, the Brooklyn Dodgers and the integration of baseball? Was it the persistence of African-Americans as they fought for what was right? Was it the Cold War fear that U.S. racial segregation was a propaganda win for the international Communist movement? Was it the fear that the American Communist party had gained too much influence in the African-American community? Indeed, Langston Hughes, Paul Robeson and other leaders had visited the Soviet Union, and the leading U.S. scholar of African-American history was Herbert Aptheker, a card-carrying member of the Communist Party. Or was it an enhanced sense of simple justice on the part of nine “old white men”?
The former Republican Governor of California, Earl Warren, was named to succeed Vinson in 1953 by President Dwight D. Eisenhower. The Warren Court would overturn Plessy and other post Civil War decisions that violated the civil rights of African-Americans and go on to use the power of the Court in other areas of political and civil liberties. This was a period of true judicial activism. Experienced in government, Warren saw that the Court would have to step in to achieve important democratic goals that the Congress was unwilling to act on. Several strong, eminent jurists were part of this Court. There were the heralded liberals William O. Douglas and Hugo Black. There was Viennese-born Felix Frankfurter, a former Harvard Law Professor and a co-founder of the ACLU; Frankfurter was also a proponent of judicial restraint which strained his relationship with Warren over time as bold judgments were laid down. For legal intricacies, Warren relied on William J. Brennan, another Eisenhower appointee but a friend of Labor and a political progressive. Associate Justice John Marshall Harlan, the grandson and namesake of the sole dissenter in Plessy, was the leader of the conservative wing.
Perhaps the most well-known of the Warren era cases is Brown v. Board of Education (1954), which grouped several civil rights suits that were being pursued by the NAACP and others; the ruling in this case, which like Sweatt was based on the Equal Protection Clause, finally undid Plessy. This case too was argued before the Court by Thurgood Marshall.
Brown was followed by several other civil rights cases which ended legal segregation in other aspects of American life. Moreover, when school integration stalled around the country, the Court, with the case Brown v. Board of Education II (1955), ordered schools to desegregate “with all deliberate speed”; this elusive phrase proved troublesome. It was introduced in the 1911 decision in Virginia v. West Virginia by Court wordsmith Oliver Wendell Holmes Jr. and it was used in Brown II at the behest of Felix Frankfurter, that champion of judicial restraint. The decision was 9-0, as it was in all the Warren Court’s desegregation cases, something that Warren considered most important politically.
With Brown II and other cases, the Court ordered states and towns to carry out its orders. This kind of activism is patently inconsistent with the logic behind Marbury v. Madison where John Marshall declared that the Court could not order anything that was not a power it was explicitly given in the Constitution, not even something spelt out in an act of Congress. No “foolish consistency” to worry about here.
However, school desegregation hit many obstacles. The resistance was so furious that Prince Edward County in Virginia actually closed its schools down for 5 years to counter the Court’s order; in Northern cities like Boston, enforced busing led to rioting; Chris Rock “jokingly” recounts that in Brooklyn NY he was bused to a neighborhood poorer than the one he lived in – and he was beaten up every day to boot.
The most notorious attempt to forestall the desegregation ruling took place in Little Rock, AR in September, 1957. Nine (outstanding) African-American students had been chosen to enroll in previously all white Central High School. The governor, Orval Faubus, actually deployed National Guard troops to assist segregationists in their effort to prevent these students from attending school. President Dwight D. Eisenhower reacted firmly; the Arkansas National Guard was federalized and taken out of the governor’s control and the elite 101st Airborne Division of the U.S. Army (the “Screaming Eagles”) was sent to escort the nine students to class, all covered on national television.
Segregationist resistance did not stop there: among other things, the Little Rock schools were closed for the 1958-59 school year in a failed attempt to turn city schools into private schools and this “Lost Year” was blamed on African-American students. It was ugly.
The stirring Civil Rights movement of the 1950s and 1960s fought for racial equality on many fronts. It spawned organizations and leaders like SNCC (Stokely Carmichael), CORE (Roy Innis) and SCLC (Martin Luther King Jr.) and it spawned activists like Rosa Parks, John Lewis, Michael Schwerner, James Chaney and Andrew Goodman. The price was steep; people were beaten and people were murdered.
The President and Congress were forced to react and enacted the Civil Rights Act of 1964 and the Voting Rights Act of 1965. The latter, in particular, had enforcement provisions which Supreme Court decisions like Guinn had lacked. This legislation reportedly led Lyndon Johnson to predict that the once Solid South would be lost to the Democratic Party. Indeed today, the New South is composed of “deep red” states. Ironically, it was the Civil Rights movement that made the prosperous New South possible – with segregation, companies (both domestic and international) wouldn’t relocate there or expand operations there; with segregation, the impressive Metro system MARTA of Atlanta could never have been possible; with segregation, a modern consumer economy cannot function; with segregation, Alabama wouldn’t be the reigning national football champion – and college football is a big, big business.
Predictably, there was a severe backlash against the new legislation and already in 1964 two expedited challenges reached the Warren Court, Heart of Atlanta Motel v. United States and Katzenbach v. McClung. Both rulings upheld the Civil Rights Act in 9-0 decisions. Interestingly, in both cases, the Court invoked the Commerce Clause of the Constitution rather than the 13th and 14th Amendments, basing the decision on the authority of the federal government to regulate interstate commerce rather than on civil liberties; experts warn that this could make these decisions vulnerable in the future.
The period of slavery followed by the period of segregation and Jim Crow laws lasted 346 years from 1619 to 1965. Until 1776, this repression was enforced by the English Crown and Parliament, then until the Civil War by the Articles of Confederation and the U.S. Constitution; and then until 1965 by state governments and the Supreme Court. During this time, there was massive wealth accumulation by white America, drawn in no small measure from the profits of slave labor and later the Jim Crow economy. Great universities such as the University of Virginia, Duke and Clemson owe their existence to fortunes gained through this exploitation. Recently, it was revealed that Georgetown University profited from significant slave sales in Maryland to finance its operations. In the North too, the profits from selling factory product to the slave states, to say nothing of the slave trade itself, contributed to the endowments of the great universities of the northeast. Indeed, Columbia, Brown and Harvard have publicly recognized their ties to slavery and the slave trade.  On the other hand, Europeans who arrived in the U.S. in the waves of immigration following the Civil War and their descendants were able, in large numbers, to accumulate capital and accede to home ownership and eventually to higher education. Black America was simply denied this opportunity for those 346 years and today the level of black family wealth is still appallingly low – to cite a Washington Post article of Sept. 28, 2017: “The median net worth of whites remains nearly 10 times the size of blacks’. Nearly 1 in 5 black families have zero or negative net worth — twice the rate of white families.”
It is hard to imagine how this historical injustice can ever be righted. The Supreme Court has played a nefarious role in all this from the Marshall Court’s assiduous defense of the property rights of slave owners (Scott v. London (1806), etc.) to Dred Scott to Plessy, weakening the 14th and 15th amendments en passant, enabling Jim Crow and creating the world of “Separate But Equal.” Earl Warren’s leadership was needed in the period following the Civil War but alas that is not what happened.
In addition to these celebrated civil rights cases, the Warren Court also had to take on suits involving separation of Church and State and involving protection of the individual citizen from the awesome power of the State, the very thing that made the Bill of Rights necessary. More to come. Affaire à suivre.

Business and Baseball

The twentieth century began in 1901. Teddy Roosevelt became President after William McKinley’s assassination by an anarchist at the Pan American Exposition in Buffalo NY. This would prove a challenging time for the Supreme Court and judicial review. By the end of the century the power and influence of the Court over life in America would far exceed the limits stipulated by the Baron de Montesquieu in The Spirit of the Laws or those predicted by the analysis of Alexander Hamilton in Federalist 78.
Normally, the most visible of the justices on the Court is the Chief Justice but in the period from 1902 till 1932, the one most quotable was Associate Justice Oliver Wendell Holmes Jr. Holmes Sr. was the famous physician, writer and poet, author of Old Ironsides and other entries in the K-12 canon. For his part, Holmes Jr. wrote Supreme Court decisions and dissents that have become part of the lore of the Court.
In 1905, the 5-4 Court ruled against the state of New York in one of its more controversial decisions, Lochner v. New York. Appealing to laissez-faire economics, the majority ruled that the state did not have the authority to limit bakery workers’ hours to 10 hours a day, 60 hours a week, even if the goal was to protect the workers’ health and that of the public. The judges perverted the Due Process Clause of the 14th Amendment, which reads:
    [Nor] shall any State deprive any person of life, liberty, or property, without due process of law
They invoked this clause of a civil rights Amendment to rule that the New York law interfered with an individual baker’s right to enter into a private contract. In his dissent, Holmes attacked the decision for applying the social Darwinism of Herbert Spencer (coiner of the phrase “survival of the fittest”) to the Constitution; rather pointedly, Holmes wrote
    The Fourteenth Amendment does not enact Mr. Herbert Spencer’s Social Statics.
Over time, the anti-labor aspects of this decision were undone by legislation but its influence on the discussion of “due process” continues. It has given rise to the verb “lochnerize” which is defined thusly by Wiktionary:
    To read one’s policy preferences into the Constitution, as was (allegedly) done by the U.S. Supreme Court in the 1905 case Lochner v. New York.
The parenthetical term “allegedly” presumably refers to Holmes’ critique. Two other contributions of Lochner to the English language are the noun “Lochnerism” and the phrase “The Lochner Era.”
In 1917, Congress passed the Espionage Act which penalized protests and actions that contested American participation in WWI. This law and its amendments in the Sedition Act (1918) were powerful tools for suppressing dissent, something pursued quite vigorously by the Wilson administration. A challenge to the act followed quickly with Schenck v. United States (1919). The Court ruled in favor of the Espionage Act unanimously; Holmes wrote the opinion and created some oft-cited turns of phrase:
    The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.
    The question … is whether the words used … create a clear and present danger that … will bring about the substantive evils that Congress has a right to prevent.
Holmes’ opinion notwithstanding, the constitutionality of the Espionage Act is still debated because of its infringement on free speech.
Schenck was then followed by another case involving the Espionage Act, Debs v. United States (1919). Eugene Debs was the union activist and socialist leader whom the Court had already ruled against in the Pullman case known as In re Debs (1895). Writing again for a unanimous court, Holmes invoked Schenck and ruled that the right of free speech did not protect protest against the military draft. Debs was sentenced to ten years in prison and disenfranchised; that did not prevent him from running for President in 1920 – he received over 900,000 votes, more than 3% of the total.
Debs was soon pardoned by President Warren G. Harding in 1921 and even invited to the White House! The passionate Harding apparently admired Debs and did not approve of the way he had been treated by Wilson, the Espionage Act and the Court; Harding famously held that “men in Congress say things worse than the utterances” for which Debs was convicted. In 1923, having just announced a campaign to eliminate the rampant corruption in Washington, Harding died most mysteriously in the Palace Hotel in San Francisco: vampire marks on his neck, no autopsy, hasty burial – suspects ranged from a Norwegian seaman to Al Capone hit men to Harding’s long-suffering wife. Harding was succeeded by Calvin Coolidge who is best remembered for his insight into the soul of the nation: “After all, the chief business of the American people is business.”
Although the Sedition Act amendments were repealed in 1921, the Espionage Act itself lumbers on. It has been used in more recent times against Daniel Ellsberg and Edward Snowden.
Today, Holmes is also remembered for his opinion in an anti-trust suit pitting a “third major league” against the established “big leagues.” The National League had been a profitable enterprise since 1876, with franchises stretching from Boston to St. Louis. At the turn of the century, in 1901, the then rival American League was formed but the two leagues joined together in time for the first World Series in 1903. The upstart Federal League managed to field eight teams for the 1914 and 1915 seasons, but interference from the other leagues forced it to end operations. A suit charging the National and American leagues with violating the Sherman Anti-Trust Act was filed in 1915 and it was heard before Judge Kenesaw Mountain Landis – who, interestingly, was to become the righteous Commissioner of Baseball following the Black Sox Scandal of 1919. In Federal Court, Landis dramatically slow-walked the Federal League’s case, and the result was that the various owners made deals, some buying into National or American League teams and/or folding their teams into established teams. The exception was the owner of the Baltimore Terrapins franchise – the terrapin is a small turtle from Maryland, but the classic name for a Baltimore team is the Orioles; perhaps the name still belonged to the New York Yankees organization, since the old Baltimore Orioles, when dropped from the National League, joined the new American League in 1901, moved north in 1903 to become the New York Highlanders, and were renamed the New York Yankees in 1913. Be that as it may, the league-less Terrapins continued to sue the major leagues for violating anti-trust law; this suit made its way to the Supreme Court as Federal Baseball Club v. National League.
In 1922, writing for a unanimous court in the Federal case, Holmes basically decreed that Major League Baseball was not a business enterprise engaged in interstate commerce; with Olympian authority, he wrote:
    The business [of the National League] is giving exhibitions of baseball. … the exhibition, although made for money, would not be called trade or commerce in the commonly accepted use of those words.
So, this opinion simply bypasses the Commerce Clause of the Constitution, which states that the Congress shall have power
    To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.
With this verdict, Major League Baseball, being a sport and not a business, was exempt from anti-trust regulation. This enabled “the lords of baseball” to continue to keep a player bound to the team that first signed that player; the mechanism for this was yet another clause, the “reserve clause,” which was attached to all the players’ contracts. The “reserve clause” also allowed a team (but not a player) to dissolve the player’s contract on 10 days’ notice. Obviously this had the effect of depressing player salaries. It also led to outrages such as Sal “The Barber” Maglie’s being blackballed for three seasons for having played in the Mexican League and such as the Los Angeles Dodgers’ treatment of Brooklyn great, Carl “The Reading Rifle” Furillo. Interestingly, although both were truly star players, neither is “enshrined” in the Baseball Hall of Fame; Furillo, though, is featured in Roger Kahn’s classic The Boys of Summer and both Furillo and Maglie take the field in Doris Kearns Goodwin’s charming memoir Wait Till Next Year.
The Supreme Court judgment in Federal was mitigated by subsequent developments that were set in motion by All-Star outfielder Curt Flood’s courageous challenge to the “reserve clause” in the suit Flood v. Kuhn (1972); this case was decided against Flood by the Supreme Court in a 5-3 decision that was based on the precedent of the Federal ruling. In this case, Justice Lewis Powell recused himself because he owned stock in Anheuser-Busch, the company that owned the St. Louis franchise – an honorable thing to do but something which exposes the potential class bias of the Court that might lie behind decisions favoring corporations and the powerful. Though Flood lost, there were vigorous dissents by Justices Marshall, Douglas and Brennan and his case rattled the system; the players union was then able to negotiate for free agency in 1975. However, because of the anti-trust exemption, Major League Baseball still has much more control over its domain than do other major sports leagues, even though the NFL and the NCAA benefit from legislation exempting them too from some anti-trust regulations.
In 1932 when Franklin D. Roosevelt became President, the U.S. was nearly three years into the Great Depression. With the New Deal, Congress moved quickly to enact legislation that would serve both to stimulate the economy and to improve working conditions. In the First Hundred Days of the Roosevelt presidency, the National Industrial Recovery Act (NIRA) and the Agricultural Adjustment Act (AAA) were passed. Both were then declared unconstitutional in whole or in part by the Court under Chief Justice Charles Evans Hughes: the case Schechter Poultry Corp. v. United States (1935) was brought by a Kosher poultry business in Brooklyn NY (for one thing, the NIRA regulations interfered with its traditional slaughter practices); with United States v. Butler (January 6, 1936) the government filed a case against a processor of cotton in Massachusetts who contested paying “processing and floor-stock taxes” to support subsidies for the planters of cotton. The first decision invoked the Commerce Clause of the Constitution; the second invoked the Taxing and Spending Clause which empowers the Federal Government to impose taxes.
In reaction, Roosevelt and his congressional allies put together a plan in 1937 “to pack the court” by adding six additional justices to its roster. The maneuver failed, treated with opprobrium by many. However, with the appointment of new justices to replace retiring justices, Roosevelt soon had a Court more to his liking and also by then the New Deal people had learned from experience not to push programs that were too clumsy to pass legal muster.
The core programs launched by the AAA were continued thanks to subsequent legislation that was upheld in later cases before the Court. The pro-labor part of the NIRA was rescued by the National Labor Relations Act (aka the Wagner Act) of 1935. The act which protected labor unions was sponsored by Prussian-born Senator Robert F. Wagner Sr.; his feckless son Robert Jr. was the Mayor of New York who enabled its two National League baseball teams to depart for the West Coast in 1957 – two teams that between them had won the National League pennant every fall for the previous six seasons, two teams with stadium-filling heroes like Willie Mays and Sandy Koufax. Moreover, the Dodgers would never have been able to treat fan-favorite Carl Furillo so shabbily had the team still been in Brooklyn: he would not have been dropped mid-season, which made him ineligible for the pension of a 15-year veteran, would not have had to go to court to obtain money still due him and would not have been blackballed from organized baseball.

The Dred Scott Decision

Early in its history, the U.S. Supreme Court applied judicial review to acts of Congress. First there was Hylton v. United States (1796) and then there was Marbury v. Madison (1803); with these cases the Court’s power to decide the constitutionality of a law was established – constitutional in the first case, unconstitutional in the second. But it would take over 50 years for the Court again to declare a law passed by Congress and signed by the President to be unconstitutional. Moreover, this fateful decision would push the North and South apart to the point of no return. From the time of the Declaration of Independence, the leadership of the country had navigated carefully to maintain a union of free and slave states; steering this course was delicate and full of cynical calculations. How did these orchestrated compromises keep the peace between North and South? Mystère.

During the Constitutional Convention (1787), to deal with states’ rights and with “the peculiar institution” of chattel slavery, two key arrangements were worked out. The Connecticut Compromise favored small states by according them the same number of Senators as the larger states; the Three-Fifths Compromise included 3/5ths of enslaved African Americans in a state’s population count for determining representation in the House of Representatives and, thus, in the Electoral College as well. The Electoral College itself was a compromise between those who wanted direct election of the President and those, like Madison and Hamilton, who wanted a buffer between the office and the people – it has worked well in that 5 times the system has put someone in the office of President who did not win the popular vote.

The compromise juggernaut began anew in 1820 with an act of Congress known as the Missouri Compromise which provided for Maine to enter the union as a free state and for Missouri to enter as a slave state. It also set the southern boundary of Missouri, 36° 30′, as the northern boundary for any further expansion of slavery; at this point in time, the only U.S. land west of the Mississippi River was the Louisiana Territory and so the act designated only areas of today’s Arkansas and Oklahoma as potential slave states. (This landscape would change dramatically with the annexation of Texas in 1845 and the Mexican War of 1848.)

Then there was the Compromise Tariff of 1833 that staved off a threat to the Union known as the Nullification Crisis, a drama staged by John C. Calhoun of South Carolina, Andrew Jackson’s Vice-President at the time. The South wanted lower tariffs on finished goods and the North wanted lower tariffs on raw materials. Calhoun was a formidable and radical political thinker and is known as the “Marx of the Master Class.” His considerable fortune went to his daughter and then to his son-in-law Thomas Green Clemson, who in turn in 1888 left most of his estate to found Clemson University – which makes one wonder why Clemson did not name the university for his illustrious father-in-law.

As a result of the Mexican War, in 1848, Alta California became a U.S. territory. The area was already well developed with roads (e.g. El Camino Real), with cities (e.g. El Pueblo de la Reina de Los Angeles), with Jesuit prep schools (e.g. Santa Clara) and with a long-pacified Native American population, herded together by the Spanish mission system. With the Gold Rush of 1849, the push for statehood became unstoppable. This led to the Compromise of 1850, which admitted California as a free state and instituted a strict fugitive slave law designed to thwart Abolitionists and the Underground Railroad. Henry Clay of Kentucky was instrumental in all three of these nineteenth-century compromises, which earned him the titles “the Great Compromiser” and “the Great Pacificator,” both of which school textbooks like to perpetuate. Clay, who ran for President three times, is also known for stating “I would rather be right than be President,” which sounds so quaint in today’s world where “truth is not truth” and where facts can yield to “alternative facts.”

In 1854, the Missouri Compromise was modified by the Kansas-Nebraska Act, championed by Stephen Douglas, later Lincoln’s opponent in the 1858 Senate race; this act applied “squatter sovereignty” to the territories of Kansas and Nebraska, which were north of 36° 30′ – meaning that the settlers there themselves would decide whether or not to outlaw slavery. Violence soon broke out pitting free-staters (John Brown and his sons among them) against pro-slavery militias from neighboring Missouri, all of which led to the atrocities of “Bleeding Kansas.”

But even the atrocities in Kansas did not overthrow the balance of power between North and South. So how did a Supreme Court decision in 1857 undo over 80 years of carefully orchestrated compromises between the North and South? Mystère.

In 1831, Dred Scott, a slave in Missouri, was sold to Dr. John Emerson, a surgeon in the U.S. army. Emerson took Scott with him as he spent several years in the free state of Illinois and in the Wisconsin Territory, where slavery was outlawed by the Northwest Ordinance of 1787 and by the Missouri Compromise itself. While in the Wisconsin Territory, Scott married Harriet Robinson, also a slave; the ceremony was performed by a Justice of the Peace. Logically, this meant that they were no longer considered slaves: in the U.S. at that time, slaves were prohibited from marrying because they could not enter into a legal contract – and legal marriage has been the basis of the transmission of property and the accumulation of wealth and capital since ancient Rome.

Some years later, back in Missouri, with help from an abolitionist pastor and others, Scott sued for his freedom on the grounds that his stay in free territory was tantamount to manumission; this long process began in 1846.

Previous cases of this kind had been decided in the petitioner’s favor; but, due to legal technicalities and such, this case reached the Missouri Supreme Court where the ruling went against Scott; from there it went to the U.S. Supreme Court.

John Marshall was the fourth Chief Justice and served in that capacity from 1801 to 1835. His successor, appointed by Andrew Jackson, was Roger Taney (pronounced “Tawny”) of Maryland. Taney was a Jackson loyalist and also a Roman Catholic, the first but far from the last Catholic to serve on the Court.

In 1857, the Court declared the Missouri Compromise to be flat-out unconstitutional in the most egregious ruling in its history, the Dred Scott Decision. This was the first time since Marbury that a federal law was declared unconstitutional: in the 7-2 decision penned by Taney himself, the Chief Justice asserted that the federal government had no authority to control slavery in territories acquired after the creation of the U.S. as a nation, meaning all the land west of the Mississippi. Though the question was not properly before the Court, Taney also ruled that even free African Americans could not be U.S. citizens, driving his point home with painful racist rhetoric that former slaves and their descendants “had no rights which the white man was bound to respect.” The Dred Scott Decision drove the country straight towards civil war.

Scott himself soon gained his freedom thanks to a member of a family who had supported his case. But sadly he died from tuberculosis in 1858 in St. Louis. Scott and his wife Harriet have been honored with a plaque on the St. Louis Walk of Fame along with Charles Lindbergh, Chuck Berry and Stan Musial; in Jefferson City, there is a bronze bust of Scott in the Hall of Famous Missourians, along with Scott Joplin, Walt Disney, Walter Cronkite, and Rush Limbaugh making for some strange bedfellows.

President James Buchanan approved of the Taney decision, thinking it put the slavery question to rest. This is certainly part of the reason Buchanan used to be rated as the worst president in U.S. history. Historians also believe that Buchanan improperly consulted with Taney before the decision came down, perhaps securing Buchanan’s place in the rankings for the near future despite potential new competition in this arena.

The Dred Scott Decision wrecked the reputation of the Court for years – how could justices be so blinded by legalisms as not to realize what their rulings actually said? Charles Evans Hughes, Chief Justice from 1930 to 1941 and foe of FDR and his New Deal, lamented that the Dred Scott Decision was the worst example of the Court’s “self-inflicted wounds.” The Court did recover in time, however, to return to the practice of debatable, controversial decisions.

To start, in 1873, it gutted the 14th Amendment’s protection of civil rights with its 5-4 decision in the Slaughterhouse Cases, a combined case from New Orleans where a monopoly over slaughterhouses had been set up by the state legislature. The decision seriously weakened the “privileges and immunities” clause of the Amendment:

    No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States.

There is a subtlety here: the U.S. Constitution’s Bill of Rights protects the citizen from abuse by the Federal Government; it is left to each state to have and to enforce its own protections of its citizens from abuse by the state itself. This clause was designed to protect the civil liberties of the nation’s new African-American citizens in the former slave states. The damage done by this decision would not be undone until the great civil rights cases of the next century.

In the Civil Rights Cases of 1883, the Supreme Court declared the Civil Rights Act of 1875 to be unconstitutional, thereby authorizing racial discrimination by businesses and setting the stage for Jim Crow legislation in the former Confederate states and in border states such as Maryland, Missouri and Kentucky – thus undoing the whole point of Reconstruction.

On the other hand, sandwiched around the Civil Rights Cases were some notable decisions that supported civil rights, such as Strauder v. West Virginia (1880), Boyd v. United States (1886) and Yick Wo v. Hopkins (1886). The first of these cases was brought by an African American and the third by a Chinese American.

Somewhat later, during the “Gay Nineties,” the Supreme Court laid down some fresh controversial decisions to usher the U.S. into the new century. In 1895 there was In re Debs, where the Court allowed the government to obtain an injunction and use federal troops to end a strike against the Pullman Company. As a practical matter, this unanimous decision curbed the growing power of labor unions, and for the next forty years “big business” would use court injunctions to suppress strikes. The “Debs” in this case was Eugene Debs, then the head of the American Railway Union; later the Court would uphold the Sedition Act of 1918 to rule against Debs in a case involving his speaking out against American entry into WWI.

It is interesting to note that this time, with In re Debs, the Court did not follow papal guidelines as it had with the Discovery Doctrine in Johnson v. McIntosh and other cases under John Marshall – this despite the fact that it had been handed an opportunity to do so. In his 1891 encyclical Rerum Novarum (literally “Of New Things,” the Latin idiom for revolution), Pope Leo XIII had come out in favor of labor unions; this was done at the urging of American cardinals and bishops who, at that time, were a progressive force in the Catholic Church. Without the urging of the U.S. hierarchy, the pope would likely have condemned unions as secret societies, lumped in with the Masons and Rosicrucians.

As the turn of the century approached, the Court’s ruling in Plessy v. Ferguson (1896) upheld racial segregation on railroads, in schools and in other public facilities with the tag line “separate but equal.” Considered one of the very worst of the Court’s decisions, it legalized racial segregation for another seventy years. In fact, it has never actually been overturned. The celebrated 1954 case Brown v. Board of Education only ruled against it in the case of schools and educational institutions – one of the clever legal arguments the NAACP made was that for law schools and medical schools, “separate but equal” was impossible to implement. Subsequent decisions weakened Plessy further, but technically it is still “on the books.” The case itself dealt with “separate but equal” cars for railway passengers; one of the contributions of the Civil Rights movement to the economy of the New South is that it obviated the need for “separate but equal” subway cars and made modern transportation systems possible in Atlanta and other cities.

The number and range of landmark Supreme Court decisions expanded greatly in the twentieth century and that momentum continues to this day. We have drifted further and further from the view of Montesquieu and Hamilton that the Judiciary should be a junior partner next to the legislative and executive branches of government. The feared gouvernement des juges is upon us.

The Discovery Doctrine

John Marshall, the Federalist from Virginia and legendary fourth Chief Justice of the Supreme Court, is celebrated today for his impact on the U.S. form of government. To start, there is the decision Marbury v. Madison in 1803. In this ruling, the Court set a far-reaching precedent by declaring a law passed by Congress and signed by the President to be inconsistent with the Constitution – which at that point in time was a document six and a half pages long, amendments included. However, his Court laid down no other rulings that a federal law was unconstitutional. So what other stratagems did John Marshall resort to in order to leave his mark? Mystère.
One way the Marshall Court displayed its power was by means of three important cases involving the status and rights of Native Americans. The logic behind the first of these, Johnson v. McIntosh, is astonishing, and the case is used in law schools today as a classic example of a bad decision. The basis of this unanimous decision, written by Marshall himself, is a doctrine so medieval, so racist, so Euro-centric, so intolerant, so violent as to beggar belief. Yet it is so buried in the record that few are even remotely aware of it today. It is called the Doctrine of Christian Discovery, or just the Discovery Doctrine.
Simply put, this doctrine states that a Christian nation has the right to take possession of any territory whose people are not Christians.
The term “Discovery” refers to the fact that the European voyages of discovery (initially out of Portugal and Spain) opened the coast of Africa and then the Americas to European takeovers.
All this marauding was justified (even ordered) by edicts issued by popes written for Christian monarchs.
In his bull (the term for one of these edicts) entitled Romanus Pontifex (1455), Pope Nicholas V, in a burst of Crusader spirit, authorized the Portuguese King Alfonso V to “capture, vanquish, and subdue the Saracens, pagans, and other enemies of Christ,” to “put them into perpetual slavery,” and “to take all their possessions and property.” Columbus himself sailed with instructions to take possession of lands not ruled by Christian leaders. Alexander VI was the quintessential Renaissance pope, famous among other things for making nepotism something of a science – he was the father of Lucrezia Borgia (the passionate femme fatale of paintings, books and films) and of Cesare Borgia (the model for Machiavelli’s prince). In his Bulls of Donation of 1493, Alexander extended to Spain the right and duty to take sovereignty over all non-Christian territories “discovered” by its explorers and conquistadors; and then, on behalf of Spain and Portugal, with the Line of Demarcation, Alexander divided the globe into two zones, one for each to subjugate.
Not to be left behind, a century or so later, when England and Holland undertook their own voyages of discovery and colonization, they adopted the Discovery Doctrine for themselves despite the Protestant Reformation; France did as well. What is more, after Independence, the Americans “inherited” this privilege; indeed, in 1792, U.S. Secretary of State Thomas Jefferson declared that the Discovery Doctrine would pass from Europe to the newly created U.S. government – interesting that Jefferson, deist that he was, would resort to Christian privilege to further U.S. interests! In American hands, the Discovery Doctrine also gave rise to doctrines like Manifest Destiny and American Exceptionalism.
The emphasis on enslavement in Romanus Pontifex is dramatic. The bull was followed by Portuguese incursion into Africa and Portuguese involvement in the African slave trade, till then a Muslim monopoly. In the 1500s, African slavery became the norm in New Spain and in Portuguese Brazil. In August 1619, when the Jamestown colony was only 12 years old, a ship that the Dutch had captured from Portuguese slavers reached the English settlement, and Africans were traded for provisions – one simple application of the Discovery Doctrine, one fateful day for the U.S.
Papal exhortations to war were not new in 1452. A bull of Innocent III in 1208 instigated a civil war in France, the horrific Albigensian Crusade. Earlier, in 1155 the English conquest of Ireland was launched by a bull of Pope Adrian IV (the only English pope no less); this conquest has proved long and bloody and has created issues still unresolved today. And even earlier there was the cry “God Wills It” (“Deus Vult”) of Pope Urban II and the First Crusade.
Hopping forward to the U.S. of 1823: in Johnson v. McIntosh, the plaintiff group, referred to as “Johnson,” claimed that a purchase of land from Native Americans in Indiana was valid, although the defendant McIntosh for his part had a claim to overlapping land from a federal land grant (federal would prove key). A lower court had dismissed the Johnson claim. Now (switching to the historical present) John Marshall, writing for a unanimous Court, affirms the lower court’s dismissal of the Johnson suit. But that isn’t enough. After a lengthy discussion of the history of the European voyages of discovery in the Americas, Marshall focuses on the manner in which each European power acquired land from the indigenous occupants. He outlines the Discovery Doctrine and how a European power gains sovereignty over land its explorers “discover”; he adds that the U.S. inherited this power from Great Britain and reaches the conclusion that only the Federal Government can obtain title to Native American land. Furthermore, he concludes that indigenous populations retain only a “right of occupancy” in their lands and that even this right can be dissolved by the Federal Government.
One of the immediate upshots of this decision was that only the Federal Government could purchase land from Native Americans. Going forward, this created a market with only one buyer; just as a monopoly is a market with only one seller, a market with only one buyer is called a monopsony – a situation that could work against Native American interests. To counter the efforts of the Apple Computer company to muddy the waters, there’s just “one more thing”: the national apple of Canada, the name of the defendant in this case and the name of the inventor of the stylish raincoat are all written “McIntosh” and not “Macintosh.” (“Mc” is the medieval scribes’ abbreviation of “Mac,” the Gaelic patronymic of Ireland and Scotland; other variants include “M’c”, “M'” and “Mc” with the “c” raised over two dots or a line.)
The decision in Johnson formalized the argument made by Jefferson that the Discovery Doctrine applied to relations between the U.S. government and Native Americans. This doctrine is still regularly cited in federal cases and only recently the Discovery Doctrine was invoked by none other than Justice Ruth Bader Ginsburg writing for the majority in City of Sherrill v. Oneida Indian Nation of New York (2005), a decision which ruled against Oneida claims to sovereignty over once tribal lands that the Oneida had managed to re-acquire!
What has happened here with Johnson is that John Marshall made the Discovery Doctrine part of the law of the land thanks to the common law reliance on precedent. A similar thing happens when a ruling draws on the natural law of Christian theology, a practice known as “natural law jurisprudence.” In effect, in both scenarios, the Court is making law in the sense of legislation as well as in the sense of a judicial ruling.
A few years after Johnson, in response to the state of Georgia’s efforts to badger the Cherokee Nation off their lands, the Cherokee asked the Supreme Court for an injunction to put a stop to the state’s practices. The case, Cherokee Nation v. Georgia (1831), was dismissed by the Court on a technicality drawn from its previous decision: the Cherokee, being not a foreign nation but rather a “ward to its guardian” the Federal Government, did not have standing to sue before the Court – thereby adding injury to the insult that was Johnson.
The next year Marshall actually made a ruling in favor of the Cherokee nation in Worcester v. Georgia (1832) which laid the foundation for tribal sovereignty over their lands. However, this was not enough to stop Andrew Jackson from carrying out the removal of the Cherokee from Georgia in the infamous Trail of Tears. In fact, confronted with Marshall’s decision, Jackson is reported to have said “Let him enforce it.”
The U.S. is not the only country to use the Discovery Doctrine. In the English-speaking world, it has been employed in Australia, New Zealand and elsewhere. In the Dutch-speaking world, it was used as recently as 1975, with the accession of Suriname to independence, where it is the basis for the rights (or lack of same) of indigenous peoples. Even more recently, in 2007, the Russian Federation invoked it when planting its flag on the floor of the Arctic Ocean to claim oil and gas reserves there. Interesting that Orthodox Christians would honor papal directives once it was in their economic interest – reminiscent of Jefferson.
In addition to Marbury and the cases dealing with Native Americans, there are several other Marshall Court decisions that are accorded “landmark” status today, such as McCulloch v. Maryland (1819), Cohens v. Virginia (1821) and Gibbons v. Ogden (1824) – all of which established the primacy of federal law and authority over the states. This consistent assertion of federal authority is the signature achievement of John Marshall.
Marshall’s term of 34 years is the longest of any Chief Justice. While his Court did declare state laws unconstitutional, it would take over half a century after Marbury for the Supreme Court to declare another federal law unconstitutional. That would be the case that plunged the country into civil war. Affaire à suivre. More to come.

Marbury v. Madison

The Baron de Montesquieu and James Madison believed in the importance of the separation of powers among the executive, legislative and judicial branches of government. However, their view was that the third branch would not have power equal to that of either of the first two but enough so that no one branch would overpower the other two. In America today, things have shifted since 1789 when the Constitution became the law of the land: the legislative branch stands humbled by the reach of executive power and thwarted by endless interference on the part of the judiciary.
The dramatically expanded role of the executive can be traced to the changes the country has gone through since 1789 and the quasi-imperial military and economic role it plays in the world today.
The dramatically increased power of the judiciary is largely due to judicial review:
(a) the practice whereby a court can interpret the text of the Constitution itself or of a law passed by the Congress and signed by the President and tell us what the law “really” means, and
(b) the practice whereby a court can declare a law voted on by the Congress and signed by the President to be unconstitutional.
In fact, the term “unconstitutional” now so alarms the soul that it is even the title of Colin Quinn’s latest one-man show.
Things are very different in other countries. In the U.K., Parliament is sovereign and its laws mean what Parliament says they mean. In France, under the Constitution of the Fifth Republic (1958), the reach of the Conseil Constitutionnel is very limited: Charles de Gaulle in particular was wary of the country’s falling into a gouvernement des juges – this last expression being a pejorative term for a situation like that in the U.S. today, where judges have power not seen since the time of Gideon and Samuel of the Hebrew Bible.
The U.S. legal system is based on the Norman French system (hence trial by a jury of one’s peers, “voir dire” and “oyez, oyez”) and its evolution into the British system of common law (“stare decisis” and the doctrine of precedent). So why is the U.S. so different from countries with which it has so much in common in terms of legal culture? How did this come about? In particular, where does the power to declare laws unconstitutional come from? Mystère.
A famous early example of judicial review occurred in Jacobean England, around the time of the Jamestown settlement and the completion of the King James Bible. In 1610, in a contorted dispute known as Dr. Bonham’s Case over the right to practice medicine, Justice Edward Coke opined in his decision that “in many cases, the common law will control Acts of Parliament.” This was not well received; Coke lost his job and Parliamentary Sovereignty became established in England. Picking himself up, Coke went on to write his Institutes of the Lawes of England, which became a foundational text for the American legal system and which is often cited in Supreme Court decisions – an example being no less a case than Roe v. Wade.
Another English jurist who had a great influence on the American colonists in the 18th century was Sir William Blackstone. His authoritative Commentaries on the Laws of England of 1765 became the standard reference on the Common Law, and in this opus, parliamentary sovereignty is unquestioned. The list of subscribers to the first edition of the Commentaries included future Chief Justices John Jay and John Marshall and even today the Commentaries are cited in Supreme Court decisions between 10 and 12 times a year. Blackstone had his detractors, however: Alexis de Tocqueville described him as “an inferior writer, without liberality of mind or depth of judgment.”
Blackstone notwithstanding, judicial review naturally appealed to the colonists: they were the target of laws enacted by a parliament where they had no representation; turning to the courts was the only recourse they had. Indeed, a famous and stirring call for the courts to overturn an act of the British Parliament was made by James Otis of Massachusetts in 1761. The Parliament had just renewed the hated writs of assistance and Otis argued (brilliantly it is said) that the writs violated the colonists’ natural rights and that any act of Parliament that took away those rights was invalid. Still, the court decided in favor of Parliament. Otis’ appeal to natural rights harkens back to Coke and Blackstone and to the natural law concept that was developed in the late Middle Ages by Thomas Aquinas and other scholastic philosophers. Appeal to natural law is “natural” when working in common law systems where there is no written text to fall back on; it is dangerous, however, in that it tugs at judges’ religious and emotional sensibilities.
Judicial review more generally emerged within the U.S. in the period under the Articles of Confederation where each state had its own constitution and legal system. By 1787, state courts in 7 of the 13 states had declared laws enacted by the state legislatures to be invalid.
A famous example of this took place in Massachusetts where slavery was still legal when the state constitution went into effect. Subsequently, in a series of cases known collectively as the Quock Walker Case, the state supreme court applied judicial review to overturn state law as unconstitutional and to abolish slavery in Massachusetts in 1783.
As another example at the state level before the Constitution, in New York the state constitution provided for a Council of Revision which applied judicial review to all bills before they could become law; however, a negative decision by the Council could be overturned by a 2/3 majority vote in both houses of the state legislature.
In 1784 in New York, in the Rutgers v. Waddington case, Alexander Hamilton, taking a star turn, argued that a New York State law known as the Trespass Act, which was aimed at punishing Tories who had stayed loyal to the Crown during the Revolutionary War, was invalid. Hamilton’s argument was that the act violated terms of the Treaty of Paris of 1783; this treaty put an end to the Revolutionary War and, in its Articles VI and VII, addressed the Tories’ right to their property. Clearly Hamilton wanted to establish that federal treaties overruled state law, but he may well also have wanted to keep Tories and their money in New York. Indeed, the British were setting up the English-speaking province of Ontario in Canada to receive such émigrés, including a settlement on Lake Ontario alluringly named York – which later took back its original Native Canadian name, Toronto.
The role of judicial review came up in various ways at the Constitutional Convention of 1787. For example, with the Virginia Plan, Madison wanted there to be a group of judges to assist the president in deciding whether or not to veto a bill, much like the New York State Council of Revision – and here too this could be overturned by a supermajority vote in Congress. The Virginia Plan was not adopted; many at the Convention saw no need for an explicit inclusion of judicial review in the final text, but they did expect the courts to be able to exercise constitutional review. For example, Elbridge Gerry of Massachusetts (later of gerrymander fame) said federal judges “would have a sufficient check against encroachments on their own department by their exposition of the laws, which involved a power of deciding on their constitutionality.” Luther Martin of Maryland (though born in New Jersey) added that as “to the constitutionality of laws, that point will come before the judges in their official character.” For his part, Martin found that the Constitution as drawn up made for too strong a central government and opposed its ratification.
The Federalist Papers were newspaper articles and essays written by the founding fathers John Jay, James Madison and Alexander Hamilton, all using the pseudonym “Publius” – a tip of the hat to Publius Valerius Publicola, the Roman statesman who helped found the Republic. A group formed around Hamilton and Jay, giving birth to the Federalist Party, the first national political party – it stood for a strong central government run by an economic elite; this quickly gave rise to an opposition group, the Democratic-Republicans (Jefferson, Burr, …), and the party system was born, somewhat to the surprise of those who had written the Constitution. Though in the end judicial review was left out of the Constitution, right after the Convention the need for it was brought up again in the Federalist Papers: in June 1788 Hamilton, already a star, published Federalist 78, in which he argued for the need for judicial review of the constitutionality of legislation as a check on abuse of power by the Congress. In this piece, he also invokes Montesquieu on the relatively smaller role the judiciary should have in government compared to the other two branches.
Fast forward two centuries: the Federalist Society is a political gate-keeper which was founded in 1982 to increase the number of right-leaning judges on the federal courts. Its founders included such high-profile legal thinkers as Robert Bork (whose own nomination to the Supreme Court was so dramatically scuttled by fierce opposition to him that it led to a coinage, the verb “to bork”). The Society regrouped and since then members Antonin Scalia, John G. Roberts, Clarence Thomas, Samuel Alito and Neil Gorsuch have acceded to the Supreme Court itself. (By the way, Federalist 78 is one of their guiding documents.)
Back to 1788: Here is what Section 1 of Article III of the Constitution states:

    The judicial Power of the United States shall be vested in one Supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish.

Section 2 of Article III spells out the courts’ purview:

    The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority;—to all Cases affecting Ambassadors, other public Ministers and Consuls;—to all Cases of admiralty and maritime Jurisdiction;—to Controversies to which the United States shall be a Party;—to Controversies between two or more States;—between a State and Citizens of another State;—between Citizens of different States;—between Citizens of the same State claiming Lands under Grants of different States, and between a State, or the Citizens thereof, and foreign States, Citizens or Subjects.
So while Article III lays out responsibilities for the court system, it does not say the courts have the power to review the work of the two other branches of government nor call any of it unconstitutional.
Clause 2 of Article VI of the Constitution is known as the Supremacy Clause; it states that federal law overrides state law. In particular, this implies that a federal court can nullify a law passed by a state. But, again, it does not allow for the courts to review federal law.
So there is no authorization of judicial review in the U.S. Constitution. However, given the precedents from the state courts and the positions of Madison, Gerry, Martin, Hamilton et al., it is as though lines from the Federalist 78 such as these were slipped into the Constitution while everyone was looking:
  The interpretation of the laws is the proper and peculiar province of the courts. A constitution is, in fact, and must be regarded by the judges as, a fundamental law. It therefore belongs to them to ascertain its meaning, as well as the meaning of any particular act proceeding from the legislative body. … If there should happen to be an irreconcilable variance between the [Constitution and an act of the legislature], the Constitution ought to be preferred to the statute.
The Supreme Court of the United States (SCOTUS) and the federal court system were created straightaway by the Judiciary Act passed by Congress and signed by President George Washington in 1789.
The first “big” case adjudicated by the Supreme Court was Chisholm v. Georgia (1793). Here the Court ruled in favor of the plaintiff Alexander Chisholm and against the State of Georgia, implicitly ruling that nothing in the Constitution prevented Chisholm from suing the state in federal court. This immediately led to an outcry amongst the states and to the 11th Amendment which precludes a state’s being sued in federal court without that state’s consent. So here the Constitution was itself amended to trump a Court decision.
The precedent for explicit judicial review was set three years later, in 1796, in the case Hylton v. United States: this was the first time that the Court ruled on the constitutionality of a law passed by Congress and signed by the President. It involved the Carriage Act of 1794, which placed a yearly tax of $16 on horse-drawn carriages owned by individuals or businesses. Hylton asserted that this kind of tax violated the rules for federal taxation laid out in the Constitution, while Alexander Hamilton, back in the spotlight, pled the government’s case that the tax was consistent with the Constitution. Chief Justice Oliver Ellsworth and his Court decided in favor of the government, thus affirming the constitutionality of a federal law for the first time; by making this ruling, the Court claimed for itself the authority to determine the constitutionality of a law, a power not provided for in the Constitution but one assumed to come with the territory. This verdict held sway for 99 years; it was overturned in 1895 (Pollock v. Farmers’ Loan & Trust Co.) and then reaffirmed after the passage of the 16th Amendment, which authorized federal taxation of incomes without apportionment among the states.
Section 13 of the Judiciary Act of 1789 authorized SCOTUS to order the government to do something specific for a plaintiff if the government is obliged by law to do so but has failed to act. In technical terms, the court would issue a writ of mandamus ordering the government to act – mandamus, meaning “we command,” is derived from the Latin verb mandare.
John Marshall, a Federalist from Virginia, was President John Adams’s Secretary of State. When Jefferson, and not Adams, won the election of 1800, Adams hurried to make federal appointments ahead of Jefferson’s inauguration that coming March; these were the notorious “midnight appointments.” Among them was the appointment of John Marshall himself to the post of Chief Justice of the Supreme Court. Another was the appointment of William Marbury to a judgeship in the District of Columbia. It was Marshall’s job while he was still Secretary of State to prepare and deliver the paperwork and official certifications for these appointments. He failed to accomplish this in time for Marbury and some others; when Jefferson took office, he instructed his Secretary of State, James Madison, not to complete the unfinished certifications.
In the “landmark” case Marbury v. Madison (1803), William Marbury petitioned the Court under Section 13 of the Judiciary Act to order the Secretary of State, James Madison, to issue the commission for Marbury to serve as Justice of the Peace in the District of Columbia, as the certification was still unfinished thanks to John Marshall, now the Chief Justice. In a legalistic tour de force, the Court affirmed that Marbury was right and that his commission should be issued but ruled against him. John Marshall and his judges declared Section 13 of the Judiciary Act unconstitutional because it would (according to the Court) enlarge the authority of the Court beyond that permitted by the Constitution.
Let’s try to analyze the logic of this decision: put paradoxically, the Court could exercise a power not given to it in the Constitution to rule that it could not exercise a power not given to it in the Constitution. Put ironically, it ascribed to itself the power to be powerless. Put dramatically, Marbury, himself not a lawyer, might well have cheered on Dick the Butcher who has the line “let’s kill all the lawyers” in Henry VI, Part 2 – but all this business is less like Shakespeare and more like Aristophanes.
Declaring federal laws unconstitutional did not turn into a habit in the 19th century. The Marshall court itself did not declare any other federal laws to be unconstitutional but it did find so in cases involving state laws. For example, Luther Martin was on the losing side in McCulloch v. Maryland (1819) when the Court declared a Maryland state law levying a tax on a federally authorized national bank to be unconstitutional.
The story doesn’t end there.
Although another federal law wouldn’t be ruled unconstitutional by the Supreme Court until 1857, the two-plus centuries since Marbury would see a dramatic surge in judicial review and outbreaks of judicial activism on the part of courts both left-wing and right-wing. There is a worrisome tilt toward increasing judicial power: “worrisome” because things might not stop there; the 2nd Book of Samuel (Samuel being the last Judge of the Israelites) is followed by the 1st Book of Kings, something to do with the need to improve national defense. Affaire à suivre. More to come.

Voting Arithmetic

Voting is simple when there are only two candidates and lots of voters: people vote and one of the candidates will simply get more votes than the other (except in the very unusual case of a tie when a coin is needed). In other words, the candidate who gets a majority of the votes wins; naturally, this is called majority voting. Things start to get complicated when there are more than two candidates and a majority of the votes is required to elect a candidate; in this case, multiple ballots and lots of horse trading can be required until a winner emerges.

The U.S. has continued many practices inherited from England such as the system of common law. One of the most important of these practices is the way elections are run. In the U.K. and the U.S., elections are decided (with some exceptions) by plurality: the candidate who polls the largest number of votes is the winner. Thus if there are 3 candidates and one gets 40% of the votes while the other two get 30%, the one with 40% wins – there is no runoff. This is called plurality voting in the U.S. and relative majority voting in the U.K.

The advantages of plurality voting are that it is easy to tabulate and that it avoids multiple ballots. The glaring disadvantage is that it doesn’t provide for small party representation and leads to two party dominance over the political system – a vote for a third party candidate is a vote thrown away and counts for nothing. As a random example, look at the presidential election in the Great State of Florida in 2000: everything would have been the same if those who voted for Ralph Nader had simply stayed home and not voted at all. As a result, third parties never get very far in the U.S. and if they do pick up some momentum, it is quickly dissipated. In England, it is similar, with Labour and the Conservatives being the dominant parties since the 1920s. Is this handled differently elsewhere? Mystère.

To address this problem of third party representation, countries like Italy and Holland use proportional voting for parliamentary elections. For example, suppose there are 100 seats in parliament, that the 3 parties A, B and C propose lists of candidates and that party A gets 40% of the votes cast and B and C each get 30%; then A is awarded 40 seats in parliament while B and C each get 30 seats. With this system, more than two parties will be represented in parliament; if no one party has a majority of the seats, then a coalition government will be formed.
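One standard way to turn vote shares into whole seats is the largest-remainder (Hamilton) method; the sketch below is illustrative only – the function name and data shapes are ours, and actual countries use various apportionment rules (largest remainder, D’Hondt, etc.). The text’s 40/30/30 example divides evenly, but the code also handles fractional cases:

```python
from fractions import Fraction

def largest_remainder(votes, seats):
    """Allocate seats proportionally via the largest-remainder (Hamilton) method.

    votes: dict mapping party name -> vote count
    seats: total number of seats to fill
    """
    total = sum(votes.values())
    # Each party's exact (fractional) entitlement of seats
    quotas = {p: Fraction(v * seats, total) for p, v in votes.items()}
    # Start by giving each party the whole-number part of its quota
    alloc = {p: int(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    # Hand remaining seats to the parties with the largest fractional remainders
    for p in sorted(quotas, key=lambda p: quotas[p] - alloc[p], reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# The example from the text: A polls 40%, B and C 30% each, for 100 seats
print(largest_remainder({"A": 40, "B": 30, "C": 30}, 100))  # {'A': 40, 'B': 30, 'C': 30}
```

With uneven shares – say A 45%, B 25%, C 30% for 7 seats – the quotas come out fractional (3.15, 1.75, 2.1) and the leftover seat goes to B, whose remainder is largest.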

Another technique used abroad is to have majority voting with a single runoff election. Suppose there are 4 parties (A,B,C,D) with candidates for president; a first election is held and the two top vote getters (say, A and B) face off in a runoff a week or two later. During this time, C and D can enter into a coalition with A or B and their voters will now vote for the coalition partner. So those minority votes in the first round live to fight another day. Also that president, once elected, now represents a coalition which mitigates the kind of extreme Manichaeism of today’s U.S. system. It can lead to some strange bedfellows, though. In the 2002 presidential election in France, to stop the far-right National Front candidate Jean-Marie LePen from winning the second round, the left wing parties had to back their long time right wing nemesis Jacques Chirac.
However one might criticize and find fault with these countries and their election systems, the fact is that voter participation is far higher than that in the U.S.

For U.S. presidential elections, what with the Electoral College and all that, “it’s complicated.” For Hamilton, Madison and others, the Electoral College would serve as an additional buffer between the masses and the government: one way this was to be achieved was by means of the “faithless elector,” one who does not vote for the candidate he pledged to – this stratagem would overturn a mass vote for a potential despot. This was considered a feature and not a bug; this feature is still in force and some pledged electors do employ it – in the 2016 election, seven electors voted against their pledged candidates, two against Trump and five against Clinton. But, except for faithless electors, how else could the Electoral College stymie the will of the people? Mystère.

That the Electoral College can indeed serve as a buffer between the presidency and the population has been proven by four elections (1876, 1888, 2000, 2016) where the Democratic candidate carried the popular vote but the Republican candidate obtained a majority in the Electoral College; most scandalously, in the 1876 election, in a backroom deal, 20 disputed electoral votes were awarded to the Republican candidate Rutherford B. Hayes to give him a majority of 1 vote in exchange for the end of Reconstruction in the South – “probably the low point in our republic’s history” to cite Gore Vidal.

That the Electoral College can indeed serve as a buffer between the presidency and the population has also been proven by the elections of 1800 and 1824 where no candidate had a majority of the electoral vote; in this case, the Constitution specifies that the election is to be decided in the House of Representatives with each state having one vote. In 1824, the populist candidate, Andrew Jackson, won a plurality both of the popular vote and the electoral vote, but on the first ballot a majority of the state delegations, cajoled by Henry Clay, voted for the establishment candidate John Quincy Adams. In the election of 1800, Jefferson and Burr were the top electoral vote getters with 73 votes each. Jefferson won a majority in the House on the 36th ballot, his victory engineered by Hamilton who disliked Jefferson but loathed Burr – we know how this story will end, unfortunately.

For conspiracy theorists, it is worth pointing out that not only were all four candidates who won the popular vote but lost the electoral vote Democrats but that three of the four were from New York State as was Aaron Burr.

The most obvious shortcoming of the Electoral College system is that it is a form of gerrymandering that gives too much power and representation to rural states at the expense of large urban states; in English terms, it creates “rotten boroughs.” For example, using 2018 figures, California has 55 electoral votes for 39,776,830 people and Wyoming has 3 votes for 573,720; so, if one does the math, 1 vote for president in Wyoming is worth 3.78 votes in California. Backing up, let us “show our work.” When we solved this kind of problem in elementary school, we used the rule, the “product of the means is equal to the product of the extremes”; thus, using the camel case dear to programmers, we start with the proportion

     votesWyHas : itsPopulation :: votesCaShouldHave : itsPopulation

where : is read “is to” and “::” is read “as.” Three of the four terms have known values and so the proportion becomes

     3 : 573,720 :: votesCaShouldHave : 39,776,830

The above rule says that the product of the inner two terms is equal to that of the outer two terms. The term votesCaShouldHave is the unknown, so let us call it x; let us apply the rule using * as the symbol for multiplication and solve the following equation:

     3 * 39,776,830 = 573,720 * x

which yields the number of electors California would have, were it to have as many electors per person as Wyoming does; this simplifies to

     x = (3 * 39,776,830)/ 573,720 = 207.99

So California would have to have 207.99 electors to make things fair; dividing this figure by 55, we find that 1 vote in Wyoming is worth 207.99/55 = 3.78 votes in California. This is a most undemocratic formula for electing the President. But things can get worse. If the race is thrown into the House, since California has 69.33 times as many people as Wyoming, the ratio jumps to 1.0 to 69.33, making this the most undemocratic way of selecting a President imaginable. For state by state population figures, click  HERE .
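For those who prefer silicon to elementary school proportions, the same arithmetic can be redone in a few lines of Python (using the 2018 figures quoted above):

```python
# The figures quoted in the text (2018 populations and electoral votes)
wy_votes, wy_pop = 3, 573_720
ca_votes, ca_pop = 55, 39_776_830

# votesWyHas : itsPopulation :: votesCaShouldHave : itsPopulation
# product of the means = product of the extremes  =>  wy_votes * ca_pop = wy_pop * x
x = wy_votes * ca_pop / wy_pop   # electors CA would need at Wyoming's rate
ratio = x / ca_votes             # worth of one Wyoming vote, in California votes

print(round(x, 2), round(ratio, 2))  # 207.99 3.78
```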

One simple way to mitigate the undemocratic nature of the Electoral College would be to eliminate the 2 electoral votes that correspond to each state’s 2 senators. This would change the Wyoming/California ratio and make 1 vote in the Equality State worth only 1.31 votes in the Golden State. With this counting technique, Trump would still have won the 2016 presidential election 236 to 195 (much less of a “massive landslide” than 304 to 227, the official tally) but Al Gore would have won the 2000 race, 228 to 209, even without Florida (as opposed to losing 266 to 271).

To tally the Electoral College vote, most states assign all their votes (via the “faithful” pledged electors) to the plurality winner for president in that state’s presidential tally. Nebraska and Maine, the two exceptions, use the congressional district method which assigns the two votes that correspond to Senate seats to the overall plurality winner and one electoral vote to the plurality winner in each congressional district in the state. By way of example, in an election with 3 candidates, suppose a state has 3 representatives (so 5 electoral votes) and that one candidate obtains 50% of the total vote and the other two 25% each; then if each candidate is the plurality winner in the vote from exactly one congressional district, the top vote-getter is assigned the 2 votes for the state’s senators plus 1 vote for the congressional district he or she has won and the other two candidates receive 1 electoral vote each. This system yields a more representative result but note that gerrymandering will still impact who the winner is in each congressional district. What is intrinsically dangerous about this practice, though, is that, if candidates for more than two parties are running, it can dramatically increase the chances that the presidential election will be thrown into the House of Representatives. In this situation, the Twelfth Amendment ordains that the 3 top electoral vote-getters must be considered for the presidency and so, if this method had been employed generally in the past, the elections of 1860 (Lincoln), 1912 (Wilson) and 1992 (Clinton) could well have given us presidents Stephen Douglas, Teddy Roosevelt and Ross Perot.
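The allocation rule just described can be sketched in a few lines of Python (a minimal illustration; the function name and inputs are ours, and real tallies must of course first determine the statewide and per-district plurality winners):

```python
def cd_method(statewide_winner, district_winners):
    """Congressional district method (Maine/Nebraska style), as a sketch.

    statewide_winner: candidate with the statewide plurality
                      (receives the 2 votes matching the Senate seats)
    district_winners: list of the plurality winner in each congressional district
    Returns a dict mapping candidate -> electoral votes.
    """
    votes = {statewide_winner: 2}  # the two "senatorial" electoral votes
    for w in district_winners:
        votes[w] = votes.get(w, 0) + 1  # one vote per district carried
    return votes

# The text's example: 3 districts, each carried by a different candidate,
# with A the statewide plurality winner at 50% of the total vote
print(cd_method("A", ["A", "B", "C"]))  # {'A': 3, 'B': 1, 'C': 1}
```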

The congressional district method is a form of proportional voting. While it could be disastrous in the framework of the Electoral College system for electing a U.S. president, proportional voting itself is successfully implemented in many countries to achieve more equitable outcomes than that furnished by plurality voting and the two party system.

A voting system which is used in many American cities such as Minneapolis and Oakland and in countries such as Australia and Ireland is known as ranked choice voting or instant-runoff voting. Voters in Maine recently voted for this system to be used in races for seats in the U.S. House of Representatives and in party primaries. Ranked choice voting emulates runoff elections but in a single round of balloting; it is a much more even-handed way to choose a winner than plurality voting. Suppose there are 3 candidates – A, B and C; then, on the ballot, each voter lists the 3 candidates in the order of that voter’s preference. For the first round, a count is made of the number of first place votes each candidate received; if for one candidate that number is a majority, that candidate wins outright. Otherwise, the candidate with the fewest first place votes, say A, is eliminated; now we go into the “second round” with only B and C as candidates and we add to B’s first place total the number of ballots for A that listed B as second choice and similarly for C. Now, except in the case of a tie, either B or C will have a clear majority and will be declared the winner. This will give the same result that staging a runoff between B and C would yield. With 3 candidates, at most 2 rounds are required; if there were 4 candidates, up to 3 rounds could be needed, etc.
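The elimination procedure just described can be sketched in Python; this is a minimal illustration, not production election code (it assumes every ballot ranks every candidate and ignores tie-breaking rules):

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff (ranked choice) winner, as a minimal sketch.

    ballots: list of rankings, each a list of candidates from most to
    least preferred. Repeatedly eliminate the candidate with the fewest
    first-place votes until someone holds a majority.
    """
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked surviving candidate
        tallies = Counter(next(c for c in b if c in candidates) for b in ballots)
        leader, top = tallies.most_common(1)[0]
        if top * 2 > sum(tallies.values()):   # strict majority
            return leader
        candidates.remove(min(tallies, key=tallies.get))  # eliminate last place

# A 15-voter example: first-round totals are A=4, B=5, C=6, so no majority;
# A is eliminated, A's ballots transfer to B, and B wins 9 to 6
ballots = (
    [["A", "B", "C"]] * 4 +   # 4 voters rank A first, B second
    [["B", "C", "A"]] * 5 +   # 5 voters rank B first
    [["C", "B", "A"]] * 6     # 6 voters rank C first
)
print(instant_runoff(ballots))  # B
```

Note how the plurality leader of the first round (C) loses once second choices are counted – exactly the Maine 2018 scenario described below.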

Most interestingly, in the Maine 2018 election, in one congressional district, no candidate for the House of Representatives gathered an absolute majority on the first round but a candidate who received fewer first place votes on that first round won on the second round when he caught up and surged ahead because of the number of voters who made him their second choice. (For details, click  HERE ).

Naturally, all this is being challenged in court by the losing side. However, for elections, Section 4 of Article 1 of the U.S. Constitution leaves implementation to the states for them to carry out in the manner they deem fit – subject to Congressional oversight but not to judicial oversight:

   “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing (sic) Senators.”

N.B. In this article of the Constitution, the senators are an exception because at that time the senators were chosen by the state legislatures and direct election of senators by popular vote had to wait for 1913 and the 17th Amendment.

At the first legal challenge to it, the new Maine system was upheld vigorously in the United States District Court based in large part on Section 4 of Article 1 above. For the ruling itself, click  HERE . But the story will not likely end so simply.

This kind of voting system is also used by the Academy of Motion Picture Arts and Sciences to select the nominees in each category, but they call it preferential voting. So to determine five directors, they apply the elimination process until only five candidates remain.

With ranked choice voting, in Florida, in that 2000 election, if the Nader voters listed Ralph Nader first, Al Gore (who was strong on the environment) second and George Bush third and if all Pat Buchanan voters listed Buchanan first, Bush second and Gore third, Gore would have carried the day by over 79,000 votes in the third and final round.

However one might criticize and find fault with countries like Australia and Ireland and their election systems, the fact is that voter participation is far higher than that in the U.S. For numbers, click  HERE .

Ranked voting systems are not new and have been a topic of interest to social scientists and mathematicians for a long time now. The French Enlightenment thinker, the Marquis de Condorcet, introduced the notion of the Condorcet Winner of an election – the candidate who would beat all the other candidates in head-to-head elections based on the ballot rankings; he is also the author of Condorcet’s Paradox – that a ranked choice setup might not produce a Condorcet Winner. To analyze this situation, the English mathematician Charles Lutwidge Dodgson introduced the Dodgson Method, an algorithm for measuring how far the result of a given ranked choice election is from producing a Condorcet Winner. More recently, the mathematician and economist Kenneth Arrow proved Arrow’s Paradox (his celebrated Impossibility Theorem), which shows that no ranked voting system can satisfy a short list of reasonable fairness criteria all at once; one consequence is that ranked voting can sometimes be gamed by using the idea behind Condorcet’s Paradox: it is possible that in certain situations, voters can assure the victory of their most preferred candidate by listing that candidate 2nd and not 1st – the trick is to knock out an opponent one’s favorite would lose to in a head-to-head election by favoring a weaker opponent who will eliminate the feared candidate and who will then be defeated in the final head-to-head election. For his efforts, Arrow was awarded a Nobel Prize; for his efforts, Condorcet had a street named for him in Paris (click  HERE  ); for his efforts, Charles Lutwidge Dodgson had to latinize his first and middle names, then reverse them to form the pen name Lewis Carroll, and then proceed to write Alice in Wonderland and Jabberwocky, all to rescue himself from the obscurity that usually awaits mathematicians. For a detailed but playful presentation on paradoxes and ranked choice voting, click  HERE .
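Condorcet’s definition translates directly into code. Here is a minimal Python sketch (again assuming complete ballots) that returns the Condorcet Winner of a set of ranked ballots, or None when the paradox strikes and the head-to-head results form a cycle:

```python
def condorcet_winner(ballots):
    """Return the Condorcet winner of the ranked ballots, or None if there isn't one.

    ballots: list of rankings, each a list of candidates from most to least
    preferred. A Condorcet winner beats every rival head to head.
    """
    candidates = {c for b in ballots for c in b}

    def beats(x, y):
        # x beats y if a strict majority of ballots rank x above y
        x_over_y = sum(b.index(x) < b.index(y) for b in ballots)
        return x_over_y * 2 > len(ballots)

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # Condorcet's Paradox: A beats B beats C beats A, say

# The classic cyclic electorate: head to head, A > B, B > C, C > A
print(condorcet_winner([["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]))  # None

# A well-behaved electorate: A beats B 2-1 and C 3-0
print(condorcet_winner([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]))  # A
```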

The Constitution – U.S. Scripture III

In 1787, the Confederation Congress called for a Constitutional Convention with the goal of replacing the Articles of Confederation with a form of government that had the central power necessary to lead the states and the territories. This had to be a document very different from the Iroquois Great Law of Peace, from the Union of Utrecht and from the Articles of Confederation themselves. It had to provide for a centralized structure that would exercise legislative and executive power on behalf of all the states and territories. Were there existing historical precedents for a written document to set up a social contract of government? Mystère.
In antiquity, there was “The Athenian Constitution”; but this text, credited to Aristotle and his students at the Lyceum, is not a founding document; rather it is an after-the-fact compilation of the workings of the Athenian political system. In the Middle Ages there was the Magna Carta of 1215 with its legal protections such as trial by jury and its limitations on the power of the king. Though the Magna Carta itself was quickly annulled by a bull of the crusade-loving Pope Innocent III as “illegal, unjust, harmful to royal rights and shameful to the English people,” it served as a template for insulating the citizen from the power of the state.
There is a book entitled “The English Constitution” but this was published by Walter Bagehot in the latter part of the 19th century and, like “The Athenian Constitution,” it is an account of existing practices and procedures rather than any kind of founding document. This is the book that the 13-year-old Elizabeth is studying when taking history lessons with the provost of Eton in the TV series “The Crown.”
For an actual example of a nation’s constitution that pre-dates 1789, one has to go back to 1600, the year that the Constitution of San Marino was adopted. However, there is no evidence that the founding fathers knew anything of this at all. Since that time, this document has been the law of this land-locked micro-state and it has weathered many storms; most recently, during the Cold War, it gave San Marino its 15 minutes of fame when the citizens elected a government of Communist Party members and then peacefully voted them out of office twelve years later. For an image of St. Marinus, stonemason and founding father of this, the world’s smallest republic, click  HERE .
The English Bill of Rights of 1689, an Act of Parliament, is a constitutional document in that it transformed an absolute monarchy into a constitutional monarchy. This is the key role of a constitution – it tempers or replaces traditional monarchy based on the Divine Right of Kings with an explicit social contract. This sharing of power between the monarch and the parliament made England the first Constitutional Monarchy – in simple terms, the division of roles made the parliament the legislature and made the king the executive. To get a sense of how radical this development was, it took place only a few years after Louis XIV of France reportedly exclaimed “L’Etat, c’est moi.”
With independence brewing, the Continental Congress in May 1776 directed the colonies to draw up constitutions for their own governance. The immediate precursors to the U.S. Constitution then were the state constitutions of 1776, 1777 and 1780, born of the break with Great Britain.
An important influence on this generation of constitutional documents was the work of the French Enlightenment philosopher Montesquieu. In his Spirit of the Laws (1748) Montesquieu analyzed forms of government and how different forms matched different kinds of nations – small nations being best suited to republics, medium-sized ones to constitutional monarchies and very large ones to empires. His analysis broke government down into executive, legislative and (to a lesser extent) judicial powers and he argued that, to avoid tyranny, these should be separate and independent of each other so that the power of any one of these would not exceed the combined power of the other two.
In 1776, the state of Connecticut did not adopt a new constitution but continued with its (sometimes updated) Royal Charter of 1662. In the matter of religious freedom, in Connecticut, the Congregational Church was effectively the established state religion until the Constitution of 1818. Elsewhere, the antidisestablishmentarians generally lost out. For example, in the case of New York, Georgia, Rhode Island, Pennsylvania and Massachusetts, the state constitution guaranteed freedom of religion; in New Hampshire, the Constitution of 1776 was silent on the subject, but “freedom of conscience” was guaranteed in the expanded version of 1784. In Delaware’s case, its constitution prohibited the installation of an established religion; in Virginia’s case, it took the Virginia Statute for Religious Freedom, written by Thomas Jefferson in 1786, to stave off the threat of an established church. On the other hand, the Maryland Constitution only guaranteed freedom of religion to “persons professing the Christian religion” (which was the same as in Maryland’s famous Toleration Act of 1649, the first step in the right direction in the colonies). In its 1776 document, Anglicanism was the established religion in South Carolina – this was undone in the 1778 revision. In the North Carolina Constitution, Article 32 affirms “That no person who shall deny the being of God, or the truth of the Protestant religion … shall be capable of holding any office … within this State”; New Jersey’s Constitution had a similar clause. It seems that from a legal point of view, a state still has the authority to have its own established or favored religion and attempts to move in this direction are still being made in North Carolina and elsewhere – the First Amendment explicitly only prohibits Congress from setting up an established religion for the country as a whole.
The challenges confronting the Constitutional Convention in Philadelphia in 1787 were many – to craft a system with a sufficiently strong central authority but not one that could morph into a dictatorship or mob rule, to preserve federalism and states’ rights (in particular, for the defense of the peculiar institution of slavery), to preserve popular sovereignty through a system of elections, etc. Who, then, rose to the occasion and provided the intellectual and political drive to get this done? Mystère.
Thomas Jefferson was the ambassador to France, John Adams was the ambassador to Great Britain and neither attended the Convention. Benjamin Franklin was one of the few who took a stand for the abolition of slavery but to no avail; Alexander Hamilton had but a bit part and his main (virtually monarchist) initiative was roundly defeated. George Washington took part only at James Madison’s urging (but did serve as president of the Convention). But it is Madison who is known as the Father of the Constitution.
Madison and the others were keen readers of the Roman historian Publius Cornelius Tacitus who pitilessly described the transformation of the Roman Senatorial class from lawmakers into sniveling courtiers with the transformation of the Roman Republic into the Roman Empire; Montesquieu also wrote about the end of the Roman Republic. On the other hand, the rumblings leading up to the French Revolution could be heard and the threat of mob rule was not unrealistic. So the fear of creating a tyrannical regime was very much with them.
Madison’s plan for a strong government that would not turn autocratic was, like some of the state constitutions, based on the application of ideas of Montesquieu. In fact, in Federalist No. 47, Madison (using the Federalist pseudonym Publius) developed Montesquieu’s analysis of the separation of powers further and enunciated the principle of “checks and balances.”
For his part, Hamilton pushed for a very strong central government modeled on the English system with his British Plan; however, this plan was not adopted, nor were the plans for structuring the government proposed by Virginia and by New Jersey. Instead, a balance between large and small states was achieved by means of the Connecticut Compromise: there would be a bicameral legislature with an upper house, the Senate, having two senators from each state; there would be a lower house, the House of Representatives, with each state having a number of representatives proportional to its population. While the senators would be appointed by the state legislatures, the representatives would be chosen by popular vote (restricted to men of property, of course).
This bicameral setup, with its upper house, was designed to reduce the threat of mob rule. However, it also brought up the problem of computing each state’s population for the purpose of determining representation in the House of Representatives. The resulting Three-Fifths Compromise stipulated that 3/5ths of the slave population in a state would count toward the state’s total population for this computation. This compromise created the need for an electoral college to elect the president, since enslaved African Americans would not each have three-fifths of a vote! So the system of electors was introduced and each state would have one elector for each member of Congress.
Far from abolishing slavery, Article 1, Section 9, Clause 1 of the Constitution prohibited Congress from making any law that would interfere with the international slave trade until 1808 at the earliest. However, Jefferson had stood for ending this traffic since the 1770’s and, in his second term in 1807, the Act Prohibiting the Importation of Slaves was passed and the ban was initiated the following year.  However, the ban was often violated and some importation of slaves continued into the Civil War. In 1807, the British set up a similar ban on the slave trade in the Empire and the British Navy actively enforced it against ships of all nations off the coast of West Africa and elsewhere, technically classifying slave traders as pirates; this was an important impediment to the importation of slaves into the United States.
For Hamilton, Madison and others, the Electoral College would serve as an additional buffer between the masses and the government: one way this was to be achieved was by means of the “faithless elector,” one who does not vote for the candidate he pledged to – this stratagem would overturn a mass vote for a potential despot. This was considered a feature and not a bug; this feature is still in force and some pledged electors do employ it – in the 2016 election, seven electors voted against their pledged candidates, two against Trump and five against Clinton.
The Constitution left it to the states to determine who is eligible to vote. With some exceptions here and there at different times, the result was that only white males who owned property were eligible to vote. This belief in the “divine right of the propertied” has its roots in the work of John Locke; it also can be traced back to a utopian composition published in 1656 by James Harrington; in The Commonwealth of Oceana, he describes an egalitarian society with an ideal constitution where there is a limit on how much property a family can own and rules for distributing property; there is a senate and there are elections and term limits. Harrington promulgated the idea of a written constitution, arguing that a well-designed, rational document would curtail dangerous conflicts of interest. This kind of interest in political systems was dangerous back then; Oliver Cromwell blocked publication of Harrington’s work until it was dedicated to the Lord Protector himself; with the Stuart Restoration, Harrington was jailed in the Tower of London and died soon after as a result of mistreatment. For a portrait, click HERE.
In any case, it wasn’t until 1856 that even universal suffrage for white males became established in the U.S. For the enfranchisement of the rest of the population, it took the Civil War and constant militancy up to and during WWI. A uniform election day was not fixed until 1845 and there are no real federal guidelines for election standards. This issue is still very much with us, as demonstrated by a wave of voter suppression laws in the states newly released from the strictures of the Voting Rights Act by the Roberts Court with the 2013 decision in Shelby County v. Holder.
Finally, a four-page document entitled Constitution of the United States of America was submitted to the states in September 1787 for ratification. Ratification required approval by nine of the thirteen states; the first to ratify was Delaware and the ninth was New Hampshire. There was no Bill of Rights and no provision for judicial review of legislation. Political parties were not expected to play a significant role and the provisions for the election of president and vice-president were so clumsy that they exacerbated the electoral crisis of 1800, which ultimately led to the duel between Aaron Burr and Alexander Hamilton.
The Confederation Congress declared the Constitution ratified in September 1788 and the first presidential election was held. Congress was seated and George Washington became President in the spring of 1789.
In American life, the Constitution has truly become unquestionable, sacred scripture and the word unconstitutional has the force of a curse. As a result, to a large extent, Americans are frozen in place and are not able to be forward looking in dealing with the myriad new kinds of problems, issues and opportunities that contemporary life creates.
For example, the Constitution provides for an Amendment process that requires ratification by 3/4ths of the states. When there were 13 states huddled together on the Eastern Seaboard, this worked fine and the first 10 amendments, The Bill of Rights, were passed quickly after the Constitution was adopted. However, today this process is most cumbersome. For example, any change in the Electoral College system would require an amendment to the Constitution; but any 13 states could block an attempt at change and the 13 smallest states, which have barely 4% of the population, would not find it in their interest to make any such change, alas. Another victim is term limits for members of Congress. It is in states’ interest to have senators and representatives with seniority so they can accede to powerful committee chairmanships etc.; this is the old Dixiecrat strategy that kept Strom Thurmond in the Senate until he was over 100 years old – but then the root of the word senator is the Latin senex which means “old man.” The Constitution does provide for a second way for it to be amended: 34 state legislatures would have to pass applications for a constitutional convention to deal with, say, term limits; this method has never been used successfully, but a group “U.S. Term Limits” is trying just that.
The idea of judicial review of laws passed by Congress did come up at the Convention. Madison first wanted there to be a set of judges to assist the president in deciding whether or not to veto a bill. In the end, nothing was set down clearly in the Constitution and the practice of having courts review constitutionality came about by a kind of judicial fiat when John Marshall’s Supreme Court ruled a section of an act of Congress to be unconstitutional. Today, any law passed by Congress and signed by the President has to go through an interminable process of review in the courts and, in the end, the law means only what the courts say it means. Contrast this with the U.K. where the meaning of a law is what Parliament says it is. As a result, with the Supreme Court politicized the way it is, law is actually made today by the Court and the Congress just licks its wounds. The most critical decisions are thus made by 5 unelected career lawyers. Already in 1921, seeing what was happening in the U.S., the French jurist Édouard Lambert coined the phrase “gouvernement des juges” for the way judges privileged their personal slant on cases before them to the detriment of a straightforward interpretation of the letter and spirit of the law.
The reduction of the role of Congress and the interference of the courts have also contributed to the emergence of an imperial presidency. The Constitution gives only Congress the right to levy tariffs or declare war; but now the president imposes tariffs, sends troops off to war, and governs mostly by executive order. Much of this is “justified” by a need for expediency and quick decision making in running such a complex country – but this is, as Montesquieu and others point out, the sort of thing that led to the end of the Roman Republic.

The Articles – U.S. Scripture II

The second text of U.S. scripture, the Articles of Confederation, gets much less attention than the Declaration of Independence or the Constitution. Still, it set up the political structure by which the new country was run for its first thirteen years; it provided the military structure to win the war for independence; and it furnished the diplomatic structure to secure French support during that war and then to reach an advantageous peace treaty with the British Empire.
The Iroquois Great Law of Peace and the Albany Plan of Benjamin Franklin that it inspired are certainly precursors of the Articles of Confederation. In fact, the Second Continental Congress set up a meeting in Albany, NY in August 1775 with the Iroquois. The colonists informed the Iroquois of the possibility of the colonies’ breaking with Britain, acknowledged the debt they owed them for their example and advice, and presumably tried to test whether the Iroquois would support the British in the case of an American move for independence. In the end, the Iroquois did stay loyal to the British (the Royal Proclamation of 1763 would have had something to do with that). The French Canadians also stayed loyal to the British; in this case too, it was likely a matter of preferring “the devil you know.”
Were there other forerunners to the Articles? Mystère.
Another precursor of the Articles was the Dutch Republic’s Union of Utrecht of 1579, which created a confederation of seven provinces in the north of the Netherlands. Like that of the Iroquois, the Dutch system left each component virtually independent except for issues like the common defense. This was not a democratic system in the modern sense in that each province was controlled by a ruling clique called the regents. For affairs of common interest, the Republic had a governing body called the Staaten (parliament) with one representative from each province. Henry Hudson’s voyage in 1609 was made on behalf of the Dutch East India Company, chartered by the Staaten, and he loyally named the westernmost outpost of New York City Staaten Eylandt. (This is not inconsistent with the old Vaudeville joke that the name originated with the near-sighted Henry asking “Is dat an Eyelandt?!!!” in his broken Dutch.) Hudson stopped short of naming the mighty river he navigated for himself; the Native Americans called it the Mahicanituck and the Dutch simply called it the North River.
Like the American colonists but two hundred years earlier, to achieve independence the citizens of the Dutch Republic had to rebel against the mightiest empire of the time, in this case that of Philip II of Spain. However, the Dutch Republic in its Golden Age was the most prosperous country in Europe and among the most powerful, proving its military mettle in the Anglo-Dutch Wars of the 17th century – all of which gave rise to the unflattering English language expressions Dutch Courage (bravery fueled by alcohol), Dutch Widow (a woman of ill repute), Dutch Uncle (someone not at all avuncular), Dutch Comfort (a comment like “things could be worse”) and, of course, Dutch Treat. The Dutch Republic was also remarkable for protecting civil liberties and religious freedom, keys to the domestic tranquility that did find their way into the U.S. Bill of Rights. For a painting by the Dutch Master Abraham Storck of a scene from the Four Days Battle during the Second Anglo-Dutch War, click HERE.
The Articles of Confederation were approved by the Second Continental Congress on Nov 15, 1777. Though technically not ratified by the states until 1781, the Articles steered the new country through the Revolutionary War and continued to be in force until 1789. The Articles embraced the federalism of the Iroquois Confederation and the Dutch Republic; they rejected the principle of the Divine Right of Kings in favor of republicanism and they embraced the idea of popular sovereignty, affirming that power resides with the people.
The Congress of the Confederation had a unicameral legislature (like the Staaten and Nebraska). It had a presiding officer referred to as the President of the United States who organized the deliberations of the Congress, but who did not have executive authority. In all, there were ten presidents, John Hancock and Richard Henry Lee among them. John Hanson, a wealthy landowner and slaveholder from Maryland, was the first president and so wags claim that “John Hanson” and not “George Washington” is the correct answer to the trivia question “who was the first U.S. president” – by the way, the answer to the question “which president first called for a Day of Thanksgiving on a Thursday in November” is also “John Hanson.” For a statue of the man, click HERE.
The arrangement was truly federal: each state had one vote and ordinary matters required a simple majority of the states. The Congress could not levy taxes itself but depended on the states for its revenue. On the other hand, Congress could coin money and conduct foreign policy but decisions on making war, entering into treaties, regulating coinage, and some other important issues required the vote of nine states in the Congress.
Not surprisingly, given the colonists’ opposition to the Royal Proclamation of 1763, during the Revolutionary War the Americans took action to wrest the coveted land west of the Appalachians away from the British. George Rogers Clark, a general in the Virginia Militia (and older brother of William of “Lewis and Clark” fame) is celebrated for the Illinois Campaign and the captures of Kaskaskia (Illinois) and Vincennes (Indiana). For the Porte de Vincennes metro stop in Paris, click HERE.
As the French, Dutch and Spanish squabbled with the English over the terms of a treaty to end the American War of Independence and dithered over issues of interest to these imperial powers ranging from Gibraltar to the Caribbean to the Spice Islands to Senegal, the Americans and the English put together their own deal (infuriating the others, especially the French). This arrangement ceded the land east of the Mississippi and south of the Great Lakes (except for Florida) to the newly born United States. The Florida territory was transferred back once again to Spain. The French had wanted all that land east of the Mississippi and west of the Appalachians to be ceded to their ally Spain, which also controlled the Louisiana Territory at this time. Given how Spain returned the Louisiana Territory to France by means of a secret treaty twenty years later, the bold American diplomatic dealings in the Treaty of Paris proved to be prescient; the Americans who signed the treaty with England were Benjamin Franklin, John Adams and John Jay.
The treaty with England was signed in 1783 and ratified by the Confederation Congress, then sitting at the Maryland State House in Annapolis, on January 14, 1784.
However, hostilities between American militias and British and Native American forces continued after Cornwallis’ defeat at Yorktown and even after the signing of the treaty that officially ended the war; in fact, the British did not relinquish Fort Detroit and surrounding settlements until Jay’s Treaty which took effect in 1796. Many thought this treaty made too many concessions to the British on commercial and maritime matters and, for his efforts, Jay was hanged and burned in effigy everywhere by anti-Federalists. Jay reportedly joked that he could find his way across the country by the light of his burning effigies. Click HERE for a political cartoon from the period.

A noted achievement of the Confederation Congress was the Ordinance of 1787 (aka the Northwest Ordinance), approved on July 13, when the Congress was seated at Federal Hall in New York City. The Northwest Territory comprised the future states of Illinois, Michigan, Wisconsin, Indiana and Ohio – the elementary school mnemonic was “I met Walter in Ohio.” Four of these names are Native American in origin; Indiana is named for the Indiana Land Company, a group of real estate investors. The Ordinance outlawed slavery in these areas (but it did include a fugitive slave clause), provided a protocol for territories’ becoming states, acknowledged the land rights of Native Americans, established freedom of navigation on lakes and rivers, established the principle of public education (including universities), … . In fact, little time was wasted: Ohio University, chartered in 1804, soon followed; it is located in Athens (naturally) and today has over 30,000 students. The Ordinance was re-affirmed when the Constitution replaced the Articles of Confederation.

With all these successes in war and in making peace, what drove the Americans to abandon the proven formula of a confederation of tribes or provinces and seek to replace it? Again, mystère.

While the Articles of Confederation were successful when it came to waging war and preparing for new states, it was economic policy and domestic strife that made the case for a stronger central government.

Under the Articles of Confederation, the power to tax stayed with the individual states; in 1781 and again in 1786, serious efforts were made to amend the Articles so that the Confederation Congress itself could levy taxes; both efforts failed leaving the Congress without control over its own finances. During and after the war, both the Congress and individual states printed money, money that soon was “not worth a continental.”

In 1785 and 1786, a rebellion broke out in Western Massachusetts, in the area around Springfield; the leader who emerged was Daniel Shays, a farm worker who had fought in the Revolution (Lexington and Concord, Saratoga, …) and who had been wounded – but Shays had never been paid for his service in the Continental Army and he was now being pursued by the courts for debts. He was not alone, and petitions by yeoman citizens for relief from debts and taxes were not being addressed by the State Legislature. The rebels shut down court houses and tried to seize the Federal Armory in Springfield; in this, they were thwarted only by an ad hoc militia raised with money from merchants in the east of the state. Afterwards, many rebels, including Shays, swiftly escaped to neighboring states such as New Hampshire and New York, out of reach of the Massachusetts militia.

Shays’ Rebellion shook the foundations of the new country and accelerated the process that led to the Constitutional Convention of 1787. It dramatically highlighted the shortcomings of such a decentralized system in matters of law and order and in matters economic; in contrast, with the new Constitution, Washington as President was able to lead troops to put down the Whiskey Rebellion and Hamilton as Secretary of the Treasury was able to re-organize the economy (national bank, assumption of states’ debts, protective tariffs, …). Click HERE for a picture of George back in the saddle; as for “Hamilton” tickets, just wait for the movie.

The Articles continued to be the law of the land into 1789: the third text of U.S. scripture, the U.S. Constitution, was ratified by the ninth state New Hampshire on June 21, 1788 and the Confederation Congress established March 4, 1789 as the date for the country to begin operating under the new Constitution.
How did that work out? More to come. Affaire à suivre.

The Declaration – U.S. Scripture I

There are three founding texts for Americans, texts treated like sacred scripture. The first is the Declaration of Independence, a stirring document both political and philosophical; in schools and elsewhere, it is read and recited with religious spirit. The second is the Articles of Confederation; a government based on this text was established by the Second Continental Congress; despite the new country’s success in waging the Revolutionary War and in reaching an advantageous peace treaty with the British Empire, this document is not venerated by Americans in quite the same way because this form of government was superseded after thirteen years by that of the third founding text, the Constitution. These three texts play the role of secular scripture in the United States; in particular, the Constitution, although only 4 pages long without amendments, is truly revered and quoted like “chapter and verse.”

Athena, the Greek goddess, is said to have sprung full grown from the head of Zeus (and in full armor, to boot); did these founding texts just emerge from the heads and pens of the founding fathers? In particular, there is the Declaration of Independence. Did it have a precursor? Was it part of the spirit of the times, of the Zeitgeist? Mystère.

Though still under British rule in 1775 when hostilities broke out at Lexington and Concord, the colonies had had 159 years of self-government at the local level: they elected law makers, named judges, ran courts and collected taxes. Forward looking government took root early in the colonies. Already in 1619, in Virginia, the House of Burgesses was set up, the first legislative assembly of elected representatives in North America; in 1620, the Pilgrims drew up the Mayflower Compact before even landing; in 1683, the colonial assembly in New York passed the Charter of Liberties. A peculiar matrix evolved where there was slavery and indentured servitude on the one hand and progress in civil liberties on the other (one example: the trial of John Peter Zenger in New York City in 1735 and the resulting establishment of freedom of the press).

In fact, the period from 1690 till 1763 is known as the “period of salutary neglect,” where the British pretty much left the colonies to fend for themselves – the phrase “salutary neglect” was coined by the British parliamentarian Edmund Burke, “the father of modern conservatism.” Salutary neglect was abandoned at the end of the French and Indian War (aka The Seven Years War) which was a “glorious victory” for the British but which left them with large war debts; their idea was to have the Americans “pay their fair share.”

During the run-up to the French and Indian War, at the Albany Convention in 1754, Benjamin Franklin proposed the Albany Plan, which was an early attempt to unify the colonies “under one government as far as might be necessary for defense and other general important purposes.” The main thrust was mutual defense and, of course, it would all be done under the authority of the British crown.

The Albany Plan was influenced by the Iroquois’ Great Law of Peace, a compact that long predated the arrival of Europeans in the Americas; this compact is also known as the Iroquois Constitution. This constitution provided the political basis for the Haudenosaunee, aka the Iroquois Confederation, a confederacy of six major tribes. The system was federal in nature and left each tribe largely responsible for its own affairs. Theirs was a very egalitarian society and for matters of group interest such as the common defense, a council of chiefs (who were designated by the senior women of their clans) had to reach a consensus. The Iroquois were the dominant Indian group in the northeast and stayed unified in their dealings with the French, British and Americans. In a letter to a colleague in 1751, Benjamin Franklin acknowledged his debt to the Iroquois with this amazing admixture of respect and condescension:

    “It would be a strange thing if six nations of ignorant savages should be capable of forming such a union, and yet it has subsisted for ages and appears indissolvable, and yet a like union should be impractical for 10 or a dozen English colonies.”

The Iroquois Constitution was also the subject of a groundbreaking ethnographic monograph. In 1724, the French Jesuit missionary Joseph-François Lafitau published a treatise on Iroquois society, Mœurs des sauvages amériquains comparées aux mœurs des premiers temps (Customs of the American Indians Compared with the Customs of Primitive Times), in which he describes the workings of the Iroquois system and compares it to the political systems of the ancient world in an attempt to establish a commonality shared by all human societies. Lafitau admired this egalitarian society where each Iroquois, he observed, views “others as masters of their own actions and of themselves” and each Iroquois lets others “conduct themselves as they wish and judges only himself.”

The pioneering American anthropologist Lewis Henry Morgan, who studied Iroquois society in depth, was also impressed with the democratic nature of their way of life, writing “Their whole civil policy was averse to the concentration of power in the hands of any single individual.” In turn, Morgan had a very strong influence on Friedrich Engels’ The Origin of the Family, Private Property, and the State: in the Light of the Researches of Lewis H. Morgan (1884). Apropos of the Iroquois Constitution, Engels (using “gentile” in its root Latin sense of “tribal”) waxed lyrical and exclaimed “This gentile constitution is wonderful.” Engels’ work, written after Karl Marx’s death, had for its starting point Marx’s notes on Morgan’s treatise Ancient Society (1877).

All of these European writers thought that Iroquois society was an intermediate stage in a progression subject to certain laws of sociology, a progression toward a society and way of life like their own. Of course, Marx and Engels did not think things would stop there.

At the end of the French and Indian War, the British prevented the colonists, who numbered around 2 million at this point, from pushing west over the Appalachians and Alleghenies with the Royal Proclamation of 1763; indeed, His Majesty George III proclaimed (using the royal “our”) that this interdiction was to apply “for the present, and until our further pleasure be known.” This proclamation was designed to pacify French settlers and traders in the area and to keep the peace with Native American tribes, in particular the Iroquois Confederation who, unlike nearly all other tribes, did side with the British in the French and Indian War. It particularly infuriated land investors such as Patrick Henry and George Washington – the latter, a surveyor by trade, founded the Mississippi Land Company in 1763 just before the Proclamation with the expectation of profits from investments in the Ohio River Valley, an expectation dashed by the Proclamation. Designs by Virginians on this region were not surprising given Virginia’s purported westward reach at that time (for a map, click HERE); even today, the Ohio River is the western boundary of West Virginia. Washington recovered financially and at the time of his death was a very wealthy man.

Though the Royal Proclamation was flouted by colonists who continued to migrate west, it was the first in the series of proclamations and acts that finally outraged the colonists to the point of armed rebellion. Interestingly, in Canada the Royal Proclamation still forms the legal basis for the land rights of indigenous peoples. Doubtless, this has worked out better for both parties than the Discovery Doctrine which has been in force in the U.S. since independence – for details, click HERE.

The Proclamation was soon followed by the hated Stamp Act, which was a tax directly levied by the British government on colonists, as opposed to a tax coming from the governing body of a colony. This led to the Stamp Act Congress which was held in New York City in October 1765. It was attended by representatives from 9 colonies and famously published its Declaration of Rights and Grievances which included the key point “There should be no taxation without representation.” This rallying cry of the Americans goes back to the English Bill of Rights of 1689 which asserts that taxes can only be enacted by elected representatives: “levying taxes without grant of Parliament is illegal.”

Rumblings of discontent continued as the British Parliament and King continued to alienate the colonists.

In the spring of 1774, Parliament abrogated the Massachusetts Charter of 1691, which gave people a considerable say in their government. In September, things boiled over not in Boston, but in the humble town of Worcester. Militiamen took over the courts and, in October at Town Meeting, independence from Britain was declared. The Committees of Correspondence assumed authority. (For a most useful guide to the pronunciation of “Worcester” and other Massachusetts place names, click HERE. By the way, the Worcester Art Museum is outstanding.)

From there, things moved quickly and not so quickly. While the push for independence was well advanced in Massachusetts, the delegates to the First Continental Congress in the fall of 1774 were not prepared to take that bold step: in a letter John Adams wrote “Absolute Independency … Startle[s] People here.”  Most delegates attending the Philadelphia gathering, he warned, were horrified by “The Proposal of Setting up a new Form of Government of our own.”

But acts of insurrection continued. For example, in December 1774 in New Hampshire, activists raided Fort William and Mary, seizing powder and weaponry. Things escalated, leading to outright battle at Lexington and Concord in April 1775.

Two years after the First Congress, the delegates to the Second Continental Congress, which had convened in Philadelphia on May 10, 1775, were ready to move in the direction of independence. In parallel, on June 12, 1776 at Williamsburg, the Virginia Constitutional Convention adopted the Virginia Declaration of Rights which called for a break from the Crown and which famously begins with

    Section 1. That all men are by nature equally free and independent and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.

This document was authored principally by George Mason, a planter and friend of George Washington. Meanwhile, back in Philadelphia, Thomas Jefferson (who would have been familiar with the Virginia Declaration) was charged by a committee with the task of putting together a statement presenting the views of the Second Continental Congress on the need for independence from the British. After some edits by Franklin and others, the committee brought forth the founding American document, The Declaration of Independence, whose second paragraph begins with that resounding universalist sentence:

    We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

And it continues

    Governments are instituted among Men, deriving their just powers from the consent of the governed

The ideas expressed in the Virginia Declaration of Rights and the Declaration of Independence are radical and they had to have legitimacy. So, we are back to the original mystère, where did that legitimacy come from?

Clearly, phrases like “all men are by nature equally free and independent” and “all men are created equal” reverberate with an Iroquois, New World sensibility. Scholars also see here the hand of John Locke, most literally in the use of Locke’s phrase “the pursuit of Happiness”; they also see the influence of Locke in the phrase “consent of the governed” – the idea of popular sovereignty being a concept closely associated with social contract philosophers of the European Enlightenment such as Locke and Rousseau.

Others also see the influence of the Scottish Enlightenment, that intellectual flowering with giants like David Hume and Adam Smith. The thinkers whose work most directly influenced the Americans include Thomas Reid (“self-evident” is drawn from the writings of this founder of the Common Sense School of Scottish Philosophy) and Francis Hutcheson (“unalienable rights” is drawn from his work on natural rights – Hutcheson is also known for his impact on his student Adam Smith, who later held Hutcheson’s Chair of Moral Philosophy at Glasgow). Still others hear echoes of Thomas Paine and the recently published Common Sense – that most vocal voice for independence.

Jefferson would have been familiar with the Scottish Enlightenment; he also would have read the work of Locke and, of course, Paine’s pamphlet. The same most probably applies to George Mason as well. In any case, the Declaration of Independence went through multiple drafts, was read and edited by others of the Second Continental Congress and eventually was approved by the Congress by vote; so it must also have reflected a generally shared sense of political justice.

On July 2, 1776, the Second Continental Congress did take that bold step and voted in favor of Richard Henry Lee’s motion for independence from Great Britain. On July 4th, they officially adopted the Declaration of Independence.

Having declared independence from the British Crown, the representatives at the Second Continental Congress now had to come up with a scheme to unify a collection of fiercely independent political entities. Where could they have turned for inspiration and example in a political landscape dominated by powerful monarchies? Mystère. More to come.


The Rule of St Benedict goes back to the beginning of the Dark Ages. Born in central Italy at the end of the 5th century, as the Roman Empire was collapsing in the West, Benedict is known as the Father of Western Monasticism; he laid down a social system for male religious communities that was the cornerstone of monastic life in Western Europe for a thousand years and one that endures to this day. It was a way of life built around prayer, manual work, reading and, of course, submission to authority. The Benedictine abbey was led by an abbot; under him were the priests, who copied manuscripts and sang Gregorian chant; then there were the brothers, who bore the brunt of much of the physical work, and lay people, who bore as much or more. The Rule of St. Benedict is followed not only by the Benedictines, the order he founded, but also by the Cistercians, the Trappists and others.

The monasteries are credited with preserving Western Civilization during the Dark Ages; copying manuscripts led to the marvelous decorative calligraphy of the Book of Kells and other masterpieces – the monks even introduced the symbol @ (the arobase, aka the “at sign”) to abbreviate the Latin preposition ad. They are also credited with sending out those fearless missionaries who brought literacy and Christianity to pagan Northern tribes, bringing new people into the orbit of Rome: among them were Winfrid (aka Boniface) from Wessex who chopped down sacred oak trees to proselytize German tribes and Willibrord from Northumbria who braved the North Sea to convert the fearsome Frisians, destroying pagan sanctuaries and temples in the process.

The monasteries also accumulated vast land holdings (sometimes as donations from aging aristocrats who were more concerned with making a deal with God than with the future of their children and heirs). With land and discipline came wealth, and monasteries became the target of choice for Viking marauders. At the time of the Protestant Reformation, the monasteries fell victim to iconoclasts and plundering potentates. Henry VIII dissolved the Cistercian abbey at Rievaulx in Yorkshire and other sites, procuring jewelry for Anne Boleyn in the process. This was the kind of thing that provoked the crypto-Catholic poet Shakespeare to lament about the

Bare ruin’d choirs, where late the sweet birds sang

Even today, though in ruins, Rievaulx is magnificent as a visit to Yorkshire or a click HERE will confirm. It can also be noted that the monks at Rievaulx abandoned the Rule of St. Benedict in the 15th century and became rather materialistic; this probably made them all the more tempting a target for Henry VIII.

We also owe the great beers to the monks. Today, two of the most sought-after Belgian beers in the world are Trappiste and Abbaye de Leffe. The city of Munich derives its name from the Benedictine monastery that stood in the center of town. Today’s Paulaner and Augustiner beers trace back to monasteries. But what was it that turned other-worldly monks into master brewers? Mystère.

The likely story is that it was the Lenten fast that drove the monks to secure a form of nourishment acceptable to the prying papal legates. Theirs was a liquid-only fast from Ash Wednesday through Holy Saturday, Sundays included. The beers they crafted passed muster as non-solid food and were rich in nutrients. There are also stories that Italian emissaries from Rome found the strong brews undrinkable and so authorized them to be drunk during Lent as an additional mortification of the flesh.

Indeed, the followers of the Rule of St. Benedict didn’t stop there. We also owe bubbly to the Benedictines. The first documented sparkling wine was made in 1531 at the Benedictine Abbey of St. Hilaire in the town of Limoux in the south of France – close to the Mediterranean, between Carcassonne and Perpignan – a mere 12 miles from Rennes-le-Château (of Da Vinci Code fame). Though wine-making went back centuries and occasionally wine would have a certain effervescence, these churchmen discovered something new. What was this secret? Mystère.

These Benedictine monks were the first to come upon the key idea for bubbly – a second fermentation in the bottle. Their approach is what is now called the “ancestral method” – first making a white wine in a vat or barrel (“cuve” in French), interrupting the vinification process, putting it in bottles and adding yeast, grape sugar or some alcohol, corking it and letting it go through a second fermentation process in the (hopefully strong and well-sealed) bottle. This is the technique used for the traditional Blanquette de Limoux; it is also used for the Clairette de Die. The classic Blanquette de Limoux was not strong in alcohol, around 6-8%, and it was rather doux (sweet) and not brut (dry). Today’s product comes in brut and doux versions and is 12.5% alcohol.

By the way, St. Hilaire himself was not a monk but rather a 4th century bishop and defender of the orthodoxy of the Nicene Creed (the one intoned in the Catholic and Anglican masses and other services); he is known as the “Athanasius of the West” which puts him in the company of a man with a creed all of his own – the Athanasian Creed forcefully affirms the doctrine of the Triune God and is read on Trinity Sunday.

The original ancestral method from Limoux made for a pleasant quaff, but not for the bubbly of today. Did the Benedictines come to the rescue once again? What devilish tricks did they come up with next? Or was it done by some other actors entirely? Mystère.

This time the breakthrough to modern bubbly took place in the Champagne region of France. This region is at the point where Northern and Southern Europe meet. In the late Middle Ages, it was one of the more prosperous places in Europe, a commercial crossroads with important fairs at cities like Troyes and Rheims. The cathedral at Rheims is one of the most stunning in Europe and the French kings were crowned there from Louis the Pious in 816 to Charles X in 1825. In fact, Rheims has been a city known to the English-speaking world for so long that its name in French (Reims) has diverged from its older spelling which we still use in English. It is also where English Catholics in the Elizabethan and Jacobean periods published the Douay-Rheims translation of the Latin Vulgate.

Enter Pierre Pérignon, son of a prosperous bourgeois family, who in 1668 joined the Benedictine monastery at St. Pierre de Hautvillers. The order by that time deigned to accept non-noble commoners as monks and would dub them dom, a title drawn from the Latin word for lord, dominus, to make up for their lack of a traditional aristocratic handle.

Dom Pérignon was put in charge of wine making and wine storage, both critical to the survival of the monastery, which had fallen on hard times with only a few monks left and things in a sorry state. The way the story is told in France, it was during a pilgrimage south and a stay with fellow Benedictines at the monastery of St. Hilaire in Limoux that he learned the ancestral method of making sparkling wine. However he learned of their techniques, he spent the rest of his life developing and perfecting the idea. By the time of his death in 1715, champagne had become the preferred wine at the court of Louis XIV and the wine of choice of the fashionable rich in London. Technically, Dom Pérignon was a master at choosing the right grapes to blend to make the initial wine. He abandoned the ancestral technique of interrupting the fermentation of the wine in the cuve or vat and let the wine complete its fermentation. Then, to deal with the problem of dregs caused by the second fermentation in the bottle, he developed the elaborate practice of rotating each bottle by a quarter turn, a step repeated every day for two or more months and known as the remuage (click HERE for an illustration); it is said that professionals can do 40,000 bottles a day.

To all that, one must add that he found a solution to the “exploding bottle problem”: as the pressure of the CO2 that creates the bubbles builds up during the fermentation in the bottle, bottles can explode spontaneously and even set off a chain reaction. To deal with this, Dom Pérignon turned to bottle makers in London who could make bottles that could withstand the build-up of all that pressure. Also, that indentation in the bottom of the bottle (the punt or kick-up in English, cul in French) was modified; better corks from Portugal also played a role.

Putting all this together yielded the champagne method. Naturally, wine makers in other parts of France have applied this process to their own wines, making for some excellent bubbly; these wines used to carry the label “Méthode Champenoise” or “Champagne Method.” Protection of the term Champagne itself is even written into the Treaty of Versailles, and more recently the restriction was extended so that only wines from the Champagne region could even be labeled “Champagne Method.” So the other wines produced this way are now called crémants (a generic term for sparkling wine). Thus we have the Crémant d’Alsace, the Crémant de Bourgogne, the Crémant de Touraine and even the Crémant de Limoux. All in all, these crémants constitute the best value in French wines on the market. Other countries have followed the French lead and do not use the label “Champagne” or “Champagne Method” for sparkling wines; even the USA now follows the international protocol (although wines labeled Champagne prior to 2006 are exempt).

Admittedly, champagne is a marvelous beverage. It has great range and goes well with steak and with oysters. The sound of the pop of a champagne cork means that the party is about to begin. Of course, one key thing is that champagne provides a very nice “high”; and it does that without resorting to high levels of alcoholic content. Drolly, at wine tastings and similar events, the quality of the “high” provided by the wine is never discussed directly – instead they skirt around it, talking about legs and color and character and whatnot, while the key thing is the quality of the “high” and the percentage of alcohol in the wine. So how do you discuss this point in French itself? Mystère. In effect, there is no way to deal with all this in French, la Langue de Voltaire. The closest you can come to “high” is “ivresse” but that has a negative connotation; you might force the situation and try something like “douce ivresse” but that doesn’t work either. ’Tis a mystère without a solution, then. But it is the main difference between a $60 bottle of Burgundy and a $15 bottle of Pinot Noir – do the experiment.

There are some revisionist historians who try to diminish the importance of Dom Pérignon in all this. But he has been elevated to star status as an icon for the Champagne industry. As an example, click HERE for a statue in his honor erected by Moët & Chandon; click HERE for an example of an advertisement they actually use to further sales.

So the secret to the ancestral method and the champagne method is that second fermentation in the bottle. But now we have popular alternatives to champagnes and crémants in the form of Prosecco and Cava. If these sparkling wines are not made with the champagne method, how are they made then? Mystère.

In fact, the method used for these effervescent wines does not require that second fermentation in the bottle, a simplification made possible by the development of large sealed tanks that could withstand pressure (today made of stainless steel). This newer method carries out the secondary fermentation in closed tanks that are kept under pressure. This simplifies the entire production process and makes the bubbly much less expensive to make. The process was first invented by Federico Martinotti in Italy, then improved by Eugène Charmat in France – all this in the 1890’s. So it is known as the metodo italiano in Italy and the Charmat method almost everywhere else. In the 1960’s things were improved further to allow for less doux, more brut wines. They are now extremely popular and considered a good value by the general wine-loving public.

Finally, there is the simplest method of all to make sparkling wine or cider – inject carbon dioxide directly into the finished wine or cider, much like making seltzer water from plain water with a SodaStream device. In fact, this is the method used for commercial ciders. Please don’t try this at home with a California chardonnay if you don’t have strong bottles, corks and protective wear.

Indeed, beer and wine have played an important role in Western Civilization; wine is central to rituals of Judaism and Christianity; the ancient Greeks and Romans even had a god of wine. In fact, beer and wine go back to the earliest stages of civilization. When hunter-gatherers metamorphosed into farmers during the Neolithic Revolution and began civilization as we know it, they traded a lifestyle in which they were taller, healthier and longer-lived for one in which they were shorter, less healthy, had a shorter life span and had to endure the hierarchy of a social structure with priests, nobles, chiefs and later kings. On the other hand, archaeological evidence often points to the fact that one of the first things these farmers did was to make beer by fermenting grains. Though the PhD thesis hasn’t been written yet, it is perhaps not unsafe to conclude that fermentation made this transition bearable and might even have been the start of it.




North America III

When conditions allowed, humans migrated across the Bering Land Bridge moving from Eurasia to North America thousands of years before the voyages of discovery of Columbus and other European navigators. That raises the question whether there were Native Americans who encountered Europeans before Columbus. If so, were these encounters of the first, second or third kind? Mystère.
For movement west by Europeans before Columbus, first we have the Irish legend of the voyage of St. Brendan the Navigator. Brendan and his 16 mates (St. Malo among them) sailed in a currach – an Irish fisherman’s boat with a wooden frame over which are stretched animal skins (nowadays they use canvas and sometimes anglicize the name to curragh). These seafarers reportedly reached lands far to the West, even Iceland and Newfoundland in the years 512-530 A.D. All this was presented as fact by nuns in parochial schools of yore, but, begorrah, there is archaeological evidence of the presence of Irish visitors on Iceland before any Viking settlements there. Moreover, in the 1970’s the voyage of St. Brendan was reproduced by the adventurer Tim Severin and his crew, which led to a best-selling book The Brendan Voyage, lending further credence to the nuns’ version of history. However, there is no account in the legends of any contact with new people; for contact with a mermaid, click HERE.
In the late 9th century, Viking men accompanied by (not so willing) Celtic women reached Iceland and established a settlement there (conventionally dated to 874 A.D.). Out of Iceland came the Vinland Saga of the adventures of Leif Erikson (who converted to Christianity and so gained favor with the later saga compilers) and of his fearless sister Freydis Eriksdottir (who also led voyages to Vinland but who stayed true to her pagan roots). It has been established that the Norse did indeed reach Newfoundland and did indeed attempt to found a colony there; the site at L’Anse aux Meadows has yielded abundant archaeological evidence of this – all taking place around 1000 A.D. The Norse of the sagas called the indigenous people they encountered in Vinland the Skraeling (which can be translated as “wretched people”). These people were not easily intimidated; there were skirmishes and more between the Skraeling (who did have bows and arrows) and the Norse (who did not have guns). In one episode, Freydis grabs a fallen Viking’s sword and drives the Skraeling attackers off on her own – click HERE for a portrait of Freydis holding the sword to her breast.
Generally speaking, the war-like nature of the Skraeling is credited with keeping the Vikings from establishing a permanent beach head in North America. So these were the first Native Americans to encounter Europeans, solving one mystery. But exactly which Native American group were the Skraeling? What is their migration story? Again mystère.
The proto-Inuit or Thule people, ancestors of the modern Inuit, emerged in Alaska around 1000 A.D. Led by their sled dogs, they quickly made their way east across Arctic Canada, then down into Labrador and Newfoundland. The proto-Inuit even made their way across Baffin Bay to Greenland. What evidence there is supports the idea that these people were the fierce Skraeling of the sagas.
As part of the movement West, again according to the sagas, Norse settlers came to Greenland around 1000 A.D. led by the notorious Erik the Red, father both of Leif and Freydis. Settlers from Norway as well as Iceland joined the Greenland colony and it became a full-blown member of medieval Christendom, what with churches, a monastery, a convent and a bishop. In relatively modern times, around 1200 A.D., the proto-Inuit reached far enough south in Greenland to encounter the Europeans there. Score one more for these intrepid people. These are the only two pre-Columbian encounters between Europeans and Native Americans that are well established.
In the end, though, the climate change of the Little Ice Age (which began around 1300 A.D.) and the Europeans’ impact on the environment proved too much and the Greenland colony died out sometime in the 1400’s. The proto-Inuit population with their sled dogs stayed the course (though not without difficulty) and survived. As a further example of the impact of the climate on Western Civilization, the Little Ice Age practically put an end to wine making in the British Isles; the crafty and thirsty Scots then made alcohol from grains such as barley and Scotch whisky was born.
The success of the proto-Inuit in the Arctic regions was based largely on their skill with the magnificent sled dogs that their ancestors had brought with them from Asia. The same can be said for the Mahlemut Inupiat people and their Alaskan Malamute. Both these wolf-like marvels make one want to read or reread The Call of the Wild; for a Malamute picture, click HERE.
We know the Clovis people and other populations in the U.S. and Mexico also had dogs that had come from Siberia. But today in the U.S. we are overwhelmed with dogs of European origin – the English Sheepdog, the Portuguese Water Dog, the Scottish Terrier, the Irish Setter, the French Poodle, the Cocker Spaniel, and on and on. What happened to the other dogs of the Native Americans? Are they still around? Mystère.
The simple answer is that, for the most part, in the region south of the Arctic, the native dogs were simply replaced by the European dogs. However, for the lower 48, it has been established recently that the Carolina Dog, a free-ranging dingo-like dog, is Asian in origin. In Mexico, the Chihuahua is the only surviving Asian breed; the Xoloitzcuintli (aka the Mexican Hairless Dog) is thought to be a hybrid of an original Asian dog and a European breed. (Some additional Asian breeds have survived in South America.)
Still, it is surprising that the native North Americans south of the Arctic regions switched over so quickly and so completely to the new dogs. A first question that comes to mind is whether or not these two kinds of dogs were species of the same animal; but this question can’t be answered since dogs and wolves are already hard to distinguish – they can still interbreed and their DNA split only goes back at most 30,000 years. A more tractable formulation would be whether or not the Asian dogs and the European dogs are the issue of the same domestication event or of different domestication events. However, “it’s complicated.” One sure thing we know comes from the marvelous cave at Chauvet-Pont d’Arc in the south of France, where footprints of a young child walking side by side with a dog or wolf have been found which date back some 26,000 years. This would point to a European domestication event; however, genetic evidence tends to support an Asian origin for the dog. Yet another theory backed by logic and some evidence is that both events took place but that subsequently the Asian dogs replaced the original European dogs in Western Eurasia.
For one thing, the dogs that came with the Native Americans from Siberia were much closer to the dogs of an original Asian self-domestication that took place in China or Mongolia (according to PBS). In any case, they would not have gone through as intense and specialized a breeding process as would dogs in populous Europe, a process that made the European dogs more useful to humans south of the Arctic and more compatible with domestic animals such as chickens, horses and sheep. Until the arrival of Europeans, the Native Americans did not have domestic animals and did not have resistance to the poxes associated with them.
The role of dogs in the lives of people grows ever more important and dogs continue to take on new work roles – service dogs, search and rescue dogs, guide dogs, etc. People with dogs reportedly get more exercise, get real emotional support from their pets and live longer. And North America has given the world the wonderful Labrador Retriever. The “Lab” is a descendant of the St John’s Water Dog which was bred in the Province of Newfoundland and Labrador; this is the only Canadian province to bear a Portuguese name – most probably for the explorer João Fernandes Lavrador who claimed the area for the King of Portugal in 1499, an area purportedly already well known to Portuguese, Basque and other fearless fishermen who were fishing the Grand Banks before Columbus (but we have no evidence of encounters of such fishermen with the Native Americans). At one point later in its development, the Lab was bred in Scotland by the Duke of Buccleuch, whose wife the Duchess was Mistress of the Robes for Queen Victoria (played by Diana Rigg in the PBS series Victoria – for the actual duchess, click HERE). From Labrador and Newfoundland, the Lab has spread all over North America and is now the most popular dog breed in the U.S. and Canada – and in the U.K. as well.

North America II

At various times in history, the continents of Eurasia and North America have been connected by the Bering Land Bridge which is formed when water levels recede and the Bering Sea is filled in (click  HERE for a dynamic map that shows changes in sea level over the last 21,000 years).

When conditions allowed, humans (with their dogs) migrated across the Bering Land Bridge moving from Eurasia to North America. It is not certain exactly when this human migration began and when it ended but a typical estimated range is from 20,000 years ago to 10,000 years ago. It has also been put forth that some of these people resorted to boats to ferry them across this challenging, changing area.

DNA analysis has verified that these new arrivals came from Siberia. It has also refuted Thor Heyerdahl’s “Kon Tiki Hypothesis” that Polynesia was settled by rafters from Peru – Polynesian DNA and American DNA do not overlap at all. On the other hand, there is DNA evidence that some of the trekkers from Siberia had cousins who lit out in the opposite direction and became the aboriginal people of New Guinea and of Australia!

The Los Angeles area is famous for its many attractions. Among them is the site of the La Brea Tar Pits; click HERE for a classic image. This is the resting place of countless animals that were sucked into the primeval tar ooze here over a period of thousands and thousands of years. What is most striking is that so many of them have gone extinct, especially large animals such as the camel, the horse, the dire wolf, the giant sloth, the American lion, the sabre-toothed tiger, … . In fact, with the exception of the jaguar, the musk ox, the moose, the caribou and the bison, all the large mammals of North America disappeared by 8,000 years ago. Humans arrived in North America not so many years before and we know they were successful hunters of large mammals in Eurasia. So, as in radio days, the $64 question is: did humans cause the extinction of these magnificent animals in North America? Mystère.

The last Ice Age lasted from 110,000 years ago to 11,700 years ago (approximately, of course). During this period, glaciers covered Canada, Greenland and states from Montana to Massachusetts. In fact, the Bering Land Bridge was created when sea levels dropped because of the massive accumulation of frozen water locked in the glaciers. The glaciers didn’t just sit still; rather they moved South, receded to the North and repeated the cycle multiple times. In the process, they created the Great Lakes, flattened Long Island, and amassed that mound of rubble known as the Catskill Mountains – technically the Catskills are not true mountains but an eroded, dissected plateau. In contrast, the Rockies and Appalachians are true mountains, created by mighty tectonic forces that thrust them upward. In any case, South of the glaciers, North America was home to many large mammal species.

As the Ice Age ended, much of North America was populated by the hunting group known as the Clovis People. Sites with their characteristic artifacts have been excavated all over the lower 48 and northern Mexico. For a picture of their characteristic spear head, click HERE. The term Clovis refers to the town of Clovis NM, scene of the first archaeological dig that uncovered this culture; luckily this dig preceded any work at the nearby town of Truth or Consequences NM.

The Clovis people, who dominated North America south of the Arctic at the time of these mass extinctions, were indeed hunters of large and small mammals and that has given rise to the “Clovis overkill hypothesis” – that it was the Clovis people who hunted the horse and other large species to extinction. The hypothesis is supported by the fact that Clovis sites have been found with all kinds of animal remains – mammoths, horses, camels, sloths, tapirs and other species.

For one thing, these animals did not co-evolve with humans and had not honed defensive mechanisms against them – unlike, say, the zebra in Africa (descendant of the original North American horse), which is aggressive toward humans; the zebra has neither been hunted to extinction nor domesticated. In fact, the sociology of grazing animals such as the horse can work against them when pursued by hunters. The herd provides important defense mechanisms – mobbing, confusing, charging, stampeding, plus the presence of bulls. But since these animals employ a harem system and the herd has many cows per bull, the solitary males or the males in very small groups who are not part of the herd are natural targets for hunting. Even solitary males who use speed of flight as their first line of defense can fall victim to persistence hunting, a tactic known to have been used by humans – pursuing prey relentlessly until it is immobile with exhaustion: a horse can run the 10 furlongs of the Kentucky Derby faster than a human in any weather, but on a hot day a sweating human can beat a panting horse in a marathon. An example of persistence hunting in more modern times is stag hunting on horseback with yelping dogs – the hunt ends when the stag is exhausted and immobile and the coup de grace is administered by the Master of the Hunt. As for dogs, they were new to the North American mammals and certainly would have aided the Clovis people in the hunt.

Also there is a domino effect as an animal is hunted to extinction: the predator animals that depend on it are also in danger. By way of example, it is thought that the sabre-toothed tiger disappeared because its prey the mammoth went extinct.

Given all this, how did the bison and caribou survive? In fact, for the bison, it was quite the opposite: there was a bison population explosion. Given that horses and mammoths have the same diet as bison, some scientists postulate that competition with the overly successful bison drove the others out. Another thing is that bison live in huge herds while animals like horses live in small bands. It is theorized that the caribou, who also travel in massive herds, survived by pushing ever further north into the arctic regions where ecological conditions were less hostile for them and less hospitable to humans and others.

However, all this was happening at a dramatic period, the end of the Ice Age. So warming trends, the end of glaciation and other environmental changes could have contributed to this mass extinction: open spaces were replaced by forests reducing habitat, heavy coats of fur became a burden, … In fact, the end of the last Ice Age is also the end of the much longer Pleistocene period; this was followed by the much warmer Holocene period which is the one we are still in today. So the Ice Age and the movement of the glaciers suddenly ended; this was global warming to a degree that would not be seen again until the present time. The warming that followed the Ice Age would also have changed the ecology of insects, arachnids, viruses et al., with a potentially lethal impact on plant life and on mega fauna. Today we are witnessing a crisis among moose caused by the increase of the winter tick population which is no longer kept in check by cold winters. We are also seeing insects unleashed to attack trees. Along the East Coast, it is the southern pine beetle which has now reached New England – on its Shermanesque march north, this beetle has destroyed forests, woods, groves, woodlands, copses, thickets and stands of once proud pine trees. It is able to move north because the minimum cold temperature in the Mid-Atlantic states has warmed by about 7 degrees Fahrenheit over the last 50 years. In Montana and other western states it is the mountain pine beetle and the endless fires that are destroying forests.

Clearly, the rapidly occurring and dramatic transformations at the end of the Ice Age could have disrupted things to the point of causing widespread extinctions – evolution did not have time to adjust.

And then there is this example where the overkill hypothesis is known not to apply. The most recent extinction of mammoths took place on uninhabited Wrangel Island in the Arctic Ocean off the Chukchi Peninsula in Siberia, only 4,000 years ago, and that event is not attributed to humans in any way. The principal causes cited are environmental factors and genetic meltdown – the accumulation of bad traits due to the small size of the breeding population.

In sum, scientists make the case that climate change and environmental factors were the driving forces behind these extinctions and that is the current consensus.

So it seems that the overkill hypothesis is an example of the logical fallacy “post hoc ergo propter hoc” – A happened after B, therefore B caused A. By the way, this fallacy is the title of the 2nd episode of The West Wing where Martin Sheen as POTUS aggressively invokes it to deconstruct his staff’s analysis of his electoral loss in Texas; in the same episode, Rob Lowe shines in a subplot involving a call-girl who is working her way through law school! Still, the circumstantial evidence is there – humans, proven hunters of mammoths and other large fauna, arrive and multiple large mammals disappear.

The situation for the surviving large mammals in North America is mixed at best. The bison are threatened by a genetic bottleneck (a depleted gene pool caused by the Buffalo Bill era slaughter to make the West safe for railroads), the moose by climate change and tick borne diseases, the musk ox’s habitat is reduced to arctic Canada, the polar bear and the caribou have been declared vulnerable species, the brown bear and its subspecies the grizzly bear also range over habitats that have shrunk. The fear is that human involvement in climate change is moving things along so quickly that future historians will be analyzing the current era in terms of a new overkill hypothesis.

North America I

Once upon a time, there was a single great continent – click HERE – called Pangaea. It formed 335 million years ago, was surrounded by the vast Panthalassic Ocean and only began to break apart about 175 million years ago. North America dislodged itself from Pangaea and started drifting west; this went on until its plate rammed into the Pacific plate and the movement dissipated but not before the Rocky Mountains swelled up and reached great heights.

As the North American piece broke off, it carried flora and fauna with it. But today we know that many land species here did not come on that voyage from Pangaea; even the iconic American bison (now the National Mammal) did not originate here. How did they get to North America? Something of a mystère.

Today the Bering Strait separates Alaska from the Chukchi Peninsula in Russia but, over the millennia, there were periods when North America and Eurasia were connected by a formation known as the Bering Land Bridge, aka Beringia. Rises and ebbs in sea level due to glaciation, rather than continental drift, are what create the bridge. When the land bridge resurfaces, animals make their way from one continent to the other in both directions.

Among the large mammals who came from Eurasia to North America by means of the Bering Land Bridge were the musk ox (click HERE), the steppe mammoth, the steppe bison (ancestor of our bison), the moose, the cave lion (ancestor of the American lion) and the jaguar. The steppe mammoth and the American lion are extinct today.

Among the large mammals native to North America were the Columbian mammoth, the sabre-toothed tiger, and the dire wolf. All three are now extinct; for an image of the three frolicking together in sunny Southern California, click HERE .

The dire wolf lives on, however, in the TV series Game of Thrones, where it is the sigil of House Stark; so it must also have migrated from North America to the continent of Westeros – who knew!

Also among the large mammals native to North America are the short-faced bear, the tapir, the brown bear, and the caribou (more precisely, this last is native to Beringia). The first two are sadly extinct today.

In school we all learned how the Spanish conquistadors brought horses to North America, how the Aztecs and Incas had never seen mounted troops before and how swiftly their empires fell to much smaller forces as a result. An irony here is that the North American plains were the homeland of the species Equus and horses thrived there until some 10,000 years ago. Indeed, the horse and the camel are among the relatively few animals to go from North America to the more competitive Eurasia; these odd-toed and even-toed ungulates prospered there and in Africa – even zebras are descended from the horses that made that crossing. The caribou also crossed to Eurasia where they are known as reindeer.

Similarly, South America split off from Pangaea and drifted west creating the magnificent range of the Andes Mountains before stopping.

The two New World continents were not connected until volcanic activity and plate tectonics created the Isthmus of Panama about 2.8 million years ago – some say earlier (click HERE). The movement of animals between the American continents via the Isthmus of Panama is called the Great American Interchange. Among the mammals who came from South America to North America was the mighty ground sloth (click HERE).

This sloth is now extinct but some extant smaller South American mammals such as the cougar, armadillo, porcupine and opossum also made the crossing. The opossum is a marsupial; before the Great American Interchange, there were no marsupials in North America, just as there are none in Eurasia or Africa.

The camel, the jaguar, the tapir, the short-faced bear and the dire wolf made their way across the Isthmus of Panama to South America. The camel is the ancestor of today’s llamas, vicuñas, alpacas and guanacos. The jaguar and tapir have found the place to their liking, the short-faced bear has evolved into the spectacled bear but the dire wolf is not found there today; it is not known if it has survived on the fictional continent of Essos.

The impact of the movement of humans and dogs into North America is a subject that needs extensive treatment and must be left for another day (post North America III). But one interesting side-effect of the arrival of humans has been the movement of flora in and out of North America. So grains such as wheat, barley and rye have been brought here from Europe, the soybean from Asia, etc. In the other direction, pumpkins, pecans, and cranberries have made their way from North America to places all over the planet. Two very popular vegetables that originated here and that have their own stories are corn and the sweet potato.

North America gave the world maize. This word came into English in the 1600s from the Spanish maiz which in turn was based on an Arawak word from Haiti. The Europeans all say “maize” but why do Americans call it “corn”? Mystère.

In classical English, corn was a generic term for the locally grown grain – wheat or barley, or rye … . Surely Shakespeare was not thinking of maize when he included this early version of Little Boy Blue in King Lear where Edgar, masquerading as Mad Tom, recites

Sleepest or wakest thou, jolly shepherd?

Thy sheep be in the corn.

And for one blast of thy minikin mouth,

Thy sheep shall take no harm.

So naturally the English colonists called maize “Indian corn” and the “Indian” was eventually dropped for maize in general – although Indian corn is still in widespread use for a maize with multicolored kernels, aka flint corn. If you want to brush up on your Shakespeare, minikin is an archaic word that came into English from the Dutch minneken and that means small.

That other culinary gift of North America whose name has a touch of mystère is the sweet potato. In grocery stores, one sees more tubers labeled yam than one does sweet potato. However, with the rare exception of imported yams, all these are actually varieties of the sweet potato. The yam is a different thing entirely – it belongs to a family of perennial herbaceous vines, while the sweet potato is in the morning glory family (and is the more nutritious vegetable). The yam is native to Africa and the term yam was probably brought here by West Africans.

The sweet potato has still another language problem: it was the original vegetable that was called the batata and brought back to Spain early in the 1500s; later the name was usurped by that Andean spud, which was also called a batata, and our original potato had to add “sweet” to its name, making it a retronym (like “acoustic guitar,” “landline phone” and “First World War”).

Gold Coin to Bit Coin

The myth of Midas is about the power of money – it magically transforms everything in its path and turns it into something denominated by money itself. The Midas story comes from the ancient lands of Phrygia and Lydia, in modern-day western Turkey, close to the Island of Lesbos and to the Ionian Greek city of Smyrna where Homer was born. It was the land of a people who fought Persians and Egyptians, who had an Indo-European language, and who were pioneering miners and metal workers. It was the land of Croesus, the King of Sardis, the richest man in the world, whose wealth gave rise to the expression “as rich as Croesus.” Click HERE for a map.
Croesus lived in the 6th century B.C., a century dominated by powerful monarchs such as Cyrus the Great of Persia (3rd blog appearance), Tarquin the Proud of Rome, Nebuchadnezzar II of Babylon, Astyages of the Medes, Zedekiah the King of Judah, the Pharaoh Amasis II of Egypt, and Hamilcar I of Carthage. How could the richest man in the world at that time be a king from a backwater place such as Sardis in Lydia? Mystère.
In the time of Achilles and Agamemnon, in the Eastern Mediterranean as in the rest of the world, goods and services were exchanged by barter. Shells, beads, cocoa beans and other means were also used to facilitate exchange, in particular to round things out when the goods bartered didn’t quite match up in value.
So here is the simple answer to our mystère: the Lydians introduced coinage to the world, a first in history, and thus invented money and its magic – likely sometime in the 7th century B.C. Technically, the system they created is known as commodity money, one where a valuable commodity such as gold or silver is minted into a standardized form.
Money has two financial functions: exchange (buying and selling) and storage (holding on to your wealth until you are ready to spend it or give it to someone else). So it has to be easy to exchange and it has to hold its value over time. The first Lydian coins were made from an alloy of gold and silver that was found in the area; later technological advances meant that by Croesus’ time, coins of pure gold or pure silver could be struck and traded in the marketplace. All this facilitated commerce and led to an economic surge which made Croesus the enduring personification of wealth and riches. Money has since made its way into every nook and cranny of the world, restructuring work and society and creating some of the world’s newest professions along the way; money has evolved dramatically from the era of King Croesus’ gold coin to the present time and Satoshi Nakamoto’s Bitcoin.
The Lydian invention was adopted by the city states of Greece. Athens and other cities created societies built around the agora, the marketplace, as opposed to the imperial societies organized around the palace with economies based on in-kind tribute – taxes paid in sacks of grain, not coin. These Greek cities issued their own coins and created new forms of political organization such as democratic government. This is a civilization far removed from the warrior world of the Homeric poems. The agora model spread to the Greek cities of Southern Italy, such as Crotone in Calabria, known for Pythagoras and his Theorem, and Syracuse in Sicily, renowned for Archimedes and his “eureka moment.” Further north in Rome, coinage was introduced to facilitate trade with Magna Graecia, as the Romans called the southern region of Italy.
The fall of Rome came about in part because the economy collapsed under the weight of maintaining an overextended and bloated military. Western Europe then fell into feudalism, which only ended with the growth of towns and cities in the late Middle Ages: new trading networks arose such as the Hanseatic League, the Fairs of the Champagne region (this is before the invention of bubbly), and the revival of banking in Italy that fueled the Renaissance. The banks used bills of exchange backed by gold. So this made the bill of exchange a kind of paper money, but it only circulated within the banking community.
Banks make money by charging interest on loans. The Italian banks did not technically charge interest because back then charging interest was the sin of usury in the eyes of the Catholic Church – as it still is in Sharia Law. Rather they side-stepped this prohibition with the bills of exchange and took advantage of exchange rates between local currencies – their network could transfer funds from London to Lyons, from Antwerp to Madrid, from Marseilles to Florence, … . The Christian prohibition against interest actually goes back to the Hebrew Bible, but Jewish law itself only prohibits charging interest on loans made to other Jews. Although the reformers Luther and Calvin both condemned charging interest, the Protestant world as well as the Church of Rome eventually made peace with this practice.
In China also, paper exchange notes appeared during the Tang Dynasty (618 A.D. – 907 A.D.) and news of this wonder was brought back to Europe by Marco Polo; perhaps that is where the Italian bankers got the idea of replacing gold with paper. Interestingly, the Chinese abandoned paper entirely in the mid-15th century during a bout of inflation and only re-introduced it in recent times.
Paper money that is redeemable for a set quantity of a precious metal is an example of representative currency. In Europe, the first true representative paper currency was introduced in Sweden in 1661 by the Stockholm Bank. Sweden waged major military campaigns during the Thirty Years War which lasted until 1648 and then went on to a series of wars with Poland, Lithuania, Russia and Denmark that only ended in 1658. The Stockholm bank’s mission was to reset the economy after all these wars but it failed after a few years simply because it printed much more paper money than it could back up with gold.
The Bank of England was created in 1694 in order to deal with war debts. It issued paper notes for the British pound backed by gold and silver. The British Pound became the world’s leading exchange currency until the disastrous wars of the 20th century.
The trouble with gold and silver is that supply is limited. The Spanish had the greatest supply of gold and silver and so the Spanish peso coin was the most widespread and the most used. This was especially the case in the American colonies and out in the Pacific. In the English-speaking colonies and ex-colonies, pesos were called “pieces of 8” since a peso was equivalent to 8 bits, the bit being the Spanish real, a coin that also circulated widely in Spanish and Mexican territories. The story of the term dollar itself begins in the Joachimsthal region of Bohemia where the coin called the thaler (Joachim being the father of Mary, thal being German for valley) was first minted in 1517 by the Hapsburg Empire; it was then imitated elsewhere in Europe, including in Holland where the daler was introduced and in Scotland where a similar coin took the name dollar. The daler was the currency of New Amsterdam and was used in the colonies. The Spanish peso, for its part, became known as “the Spanish dollar.” After independence, the dollar became the official currency of the U.S. and dollar coins were minted – paper money (except for the nearly worthless continentals of the Revolutionary War) would not be introduced in the U.S. until the Civil War. En passant, it can be noted that the early U.S. dollar was still thought of as being worth 8 bits and so “2 bits” became a term for a quarter dollar. (Love those fractions, 2/8 = 1/4.) The Spanish dollar also circulated in the Pacific region and the dollar became the name of the currencies of Hong Kong, Australia, New Zealand, … .
During the Civil War, the Lincoln government introduced paper currency backed by a quantity of gold to help finance the war. Before long, Lincoln lowered this quantity because of the cost of the war, debasing the dollar in the process.
The money supply is limited by the amount of gold a government has in its vaults; this can have the effect of obstructing commerce. In the decades before World War I, in order to get more money into the economy, farmers in the American mid-west and west militated for silver to back the dollar at the ratio of 16 ounces of silver to 1 ounce of gold. By a popular reading, The Wizard of Oz was written in support of this movement as an allegory where the Cowardly Lion is really William Jennings Bryan, the Scarecrow is the American farmer, the Tin Man is the American worker, and the Wizard is Mark Hanna, the Republican political kingmaker; Dorothy wears magic silver shoes, not ruby slippers, in the book. Unlike in the book, in the real world the free silver movement failed to achieve its objective.
Admittedly, this is a long story – but soldier on, Bitcoin is coming up soon.
Needless to say, World War I had a terrible effect on currencies, especially in Europe where the German mark succumbed to hyper-inflation and money was worth less than the cost of printing it. This would have tragic consequences.
In the U.S. during the depression, the Roosevelt government made some very strong moves. It split the dollar into two – the domestic dollar and the international dollar; the domestic dollar was disconnected from gold completely while the dollar for international payments was still backed by gold, but at $35 an ounce instead of the old $20.67 – a significant devaluation of the dollar. A paper currency, like Roosevelt’s domestic dollar, which is not backed by a commodity is called a fiat currency – meaning, in effect, that the currency’s value is declared by the issuer – prices then go up and down according to the supply of the currency and the demand for the currency. To increase the U.S. treasury’s supply of gold, as part of this financial stratagem, the government ordered the confiscation of all privately held gold (bullion, coin, jewelry, … ) with some small exceptions for collectors and dealers. Today, it is hard to imagine people being called upon to bring in the gold in their possession, have it weighed and then be reimbursed in paper money by the ounce.
Naturally, World War II also wreaked havoc with currencies around the world. The Bretton Woods agreement made the post-war U.S. dollar the currency of reference and other currencies were evaluated vis-à-vis the international (gold backed) U.S. dollar. So even the venerable British Pound became a fiat currency in 1946.
The impact of war on currency was felt once again in 1971 when Richard Nixon, with the U.S. reeling from the cost of the war in Vietnam, disconnected the dollar completely from gold making the almighty dollar a full fiat currency.
Soapbox Moment: The impact of war on a nation and the nation’s money is a recurring theme. Even Croesus lost his kingdom when he was killed in a battle with the forces of Cyrus the Great. Joan Baez and Marlene Dietrich both sang “When will they ever learn?” beautifully in English and even more plaintively in German, but men simply do not learn. Louis XIV said it all on his death bed: the Sun King lamented how he had wrecked the French economy and declared “I loved war too much.” Maybe we should all read or re-read Lysistrata by Aristophanes, the caustic playwright of 5th century Athens.
Since World War II, the world of money has seen many innovations, notably credit cards, electronic bank transfers and the relegation of cash to a somewhat marginal role as the currency of the poor and, alas, the criminal. Coupons and airline miles are examples of another popular form of currency, known as virtual currency; this form of currency has actually been around for a long time – C.W. Post distributed a one-cent coupon with each purchase of Grape Nuts flakes as far back as 1895.
The most recent development in the history of money has been digital currency which is completely detached from coin, paper or even government – its most celebrated implementation being Bitcoin. A bitcoin has no intrinsic value; it is not like gold or silver or even paper notes backed by a precious metal. It is like a fiat currency but one without a central bank to tell us what it is worth. Logically, it should be worthless. But a bitcoin sells for thousands of dollars right now; it trades on markets much like mined gold. Why? Mystère.
A bitcoin’s value is determined by the marketplace: its worth is its value as a medium of exchange and its value as a storage medium for wealth. But Bitcoin has some powerful, innovative features that make it very useful both as a medium of exchange and as a medium of storage; its implementation is an impressive technological tour de force.
In 2008, a pdf file entitled “Bitcoin: A Peer-to-Peer Electronic Cash System” authored by one Satoshi Nakamoto, was published on-line; click HERE for a copy. “Satoshi Nakamoto” then is the handle of the founder or group of co-founders of the Bitcoin system (abbreviated BTC) which was launched in January 2009. BTC has four special features:
• Unlike the Federal Reserve System, unlike the Bank of England, it is decentralized. There is no central computer server or authority overseeing it. It employs the technology that Napster made famous, peer-to-peer networking; individual computers on the network communicate directly with one another without passing through a central post office. Bitcoin electronic transfers are not instantaneous but they are very, very fast compared to traditional bank transfers – SWIFT and all that.
• BTC guarantees a key property of money: the same bitcoin cannot be in two different accounts and an account cannot transfer the same bitcoin twice – this is the electronic version of “you can’t spend the same dollar twice.” This also makes it virtually impossible to counterfeit a bitcoin. It is achieved by means of a technical innovation called the blockchain, which is a concise and efficient way of keeping track of bitcoins’ movements over time (“Bob sent Alice 100 bitcoins at noon GMT on 01/31/2018 … ”); it is a distributed, public account book – “ledger” as accountants like to say. A cryptographic fingerprinting technique called hashing is employed to chain the records together and keep the size of the blockchain under control. Blockchain technology itself has since been adopted by tech companies such as IBM and One Network Enterprises.
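The hash-chaining idea can be sketched in a few lines of Python – a minimal illustration of why a blockchain is tamper-evident, not Bitcoin’s real block format: each block records the hash of its predecessor, so altering any past entry breaks every link after it.

```python
# Toy hash-chained ledger - an illustration of the blockchain idea, not Bitcoin's format.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Fingerprint the block's full contents, including the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transaction: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"tx": transaction, "prev": prev})

def is_valid(chain: list) -> bool:
    # Every block must point at the hash of the block before it.
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
add_block(chain, "Bob sent Alice 100 bitcoins at noon GMT on 01/31/2018")
add_block(chain, "Alice sent Carol 40 bitcoins")
print(is_valid(chain))                              # True
chain[0]["tx"] = "Bob sent Mallory 100 bitcoins"    # rewrite history...
print(is_valid(chain))                              # ...and the chain no longer verifies: False
```

In the real system this ledger is replicated across the peer-to-peer network, so a forger would have to rewrite not one copy but most of them.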
• BTC guarantees that only the holder of an account’s private key can spend its bitcoins, while the account itself is just a pseudonym – the ledger is public but nothing on it ties an account to a real-world identity. For this, in addition to hashing, it uses sophisticated cryptography protocols such as public key encryption; this is the method that distinguishes “https” from “http” in URL addresses and makes a site safe. Public key encryption is based on an interesting mathematical idea – a solvable problem that cannot be solved in our lifetimes even with the best algorithms running on the fastest computers; this is an example of the phenomenon mathematicians call “combinatorial explosion.” The receiver has created this set of problems or puzzles himself, and so his private key gives him a way to solve them that no eavesdropper has. This design makes Bitcoin an example of a crypto-currency. The pseudonymity clearly makes the system attractive to parties with a need for privacy and makes it abhorrent to tax collectors and regulators.
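The “combinatorial explosion” can be felt in a toy example – only an analogy, since Bitcoin itself relies on elliptic-curve signatures rather than factoring: multiplying two primes takes one step, while recovering them from their product by search takes about a million steps even at this toy size, and astronomically many at real key sizes.

```python
# Toy one-way problem: easy forward (multiply), hard backward (factor by search).
p, q = 1_000_003, 1_000_033   # small primes; real keys use numbers hundreds of digits long
n = p * q                     # fast: a single multiplication

def smallest_factor(n: int) -> int:
    # Brute-force trial division - about a million steps here,
    # hopelessly many at cryptographic sizes.
    d = 2
    while n % d:
        d += 1
    return d

f = smallest_factor(n)
print(f, n // f)              # recovers 1000003 and 1000033 - eventually
```

Whoever chose p and q in the first place holds the shortcut; everyone else faces the search.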
• New bitcoins are created by a process called mining that in some ways emulates mining gold. A new bitcoin is born when a computer manages to be the first to solve a terribly boring mathematical problem at the expense of a great deal of time, computer cycles and electricity; in the process of mining, as a side-effect, the BTC miners perform some of the grunt work of verifying that a new transaction can be added safely to the blockchain. Also, in analogy with gold, there is a limit of 21 million on the number of unit bitcoins that can be mined. This limit is projected to be reached around the year 2140; as is to be expected, this schedule is based on a clever strategy, one that reduces the rewards for mining over time.
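The mechanics above can be sketched as follows. The SHA-256 “leading zeros” puzzle is a simplification of Bitcoin’s real difficulty target, but the 50 BTC starting reward, the halving every 210,000 blocks, and the satoshi unit are the actual parameters – and the halving schedule is exactly what produces the 21 million cap:

```python
import hashlib

def mine(data: str, difficulty: int = 3) -> int:
    # Proof of work: try nonces until SHA-256(data + nonce) starts with
    # `difficulty` hex zeros - boring, and expensive by design.
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

nonce = mine("Bob sent Alice 100 bitcoins")   # thousands of hashes even at this tiny difficulty

# Issuance schedule: 50 BTC per block, halved every 210,000 blocks,
# counted in satoshis (1 bitcoin = 100,000,000 satoshis) until the reward reaches zero.
reward = 50 * 100_000_000   # satoshis
total = 0
while reward > 0:
    total += 210_000 * reward
    reward //= 2
print(total / 100_000_000)  # just under the 21 million bitcoin cap
```

Raising `difficulty` by one multiplies the expected work by sixteen, which is how the network keeps block production steady as miners get faster.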
The bitcoin can be divided into a very small unit called the satoshi – one hundred-millionth of a bitcoin. This means that small purchases – $5, say – can be made. For example, using Gyft or eGifter or another such system, one can use bitcoin for purchases in participating stores or even meals in restaurants. In the end, it is supply and demand that infuse bitcoins with value, the demand created by usefulness to people. It is easy enough to get into the game; for example, you can click HERE for one of many sites that support BTC banking and the like.
The future of Bitcoin itself is not easy to predict. However, digital currency is here to stay; there are already many digital currency competitors (e.g. Ethereum, Ripple) and even governments are working on ways to use this technology for their own national currencies. For your part, you can download the Satoshi Nakamoto paper, slog your way through it, rent a garage and start your own company.


At the very beginning of the 1600’s, explorers from England (Gosnold), Holland (Hudson and Block) and France (Champlain) nosed around Cape Cod and other places on the east coast of North America. Within a very short time, New England, New Netherland and New France were founded along with the English colony in Virginia; New Sweden followed soon after. Unlike the early Spanish conquests and settlements in the Americas, which were under the aegis of the King of Spain and legitimized by a papal bull, these new settlements were undertaken by private companies – the Massachusetts Bay Colony, the Dutch West India Company, the Compagnie des Cent-Associés de la Nouvelle France, the Virginia Company, the New Sweden Company.
New Sweden was short lived and was taken over by New Netherland; New Netherland in turn was taken over by the Duke of York for the English crown.
In terms of sheer area, New France was the most impressive. To get an idea of its range, consider taking a motor trip through the U.S.A. going from French named city to French named city, a veritable Tour de France:
Presque Isle ME -> Montpelier VT -> Lake Placid NY -> Duquesne PA -> Vincennes IN -> Terre Haute IN -> Louisville KY -> Mobile AL -> New Orleans LA -> Petite Roche (now Little Rock) AR -> Laramie WY -> Coeur d’Alene ID -> Pierre SD -> St Paul MN -> Des Moines IA -> Joliet IL -> Detroit MI
That covers most of the U.S. and you have to add in Canada. It is interesting to note that even after the English takeover of New Netherland (NY, NJ, PA, DE) in 1664, the English territories on the North American mainland still basically came down to the original 13 colonies of the U.S.
The first question is how did the area known as the Louisiana Territory get carved out of New France? Mystère.
To look into this mystery, one must go back to the French and Indian War which in Europe is known as the Seven Years War. This war, which started in 1756, was a true world war in the modern sense of the term with fronts on five continents and with many countries involved. This was the war in which Washington and other Americans learned from the French and their Indian allies how to fight against the British army – avoid open field battles above all. This was the war which left England positioned to take control of India, this was the war that ended New France in North America: with the Treaty of Paris of 1763, all that was left to France in the new world was Haiti and two islands in the Caribbean (Guadeloupe and Martinique) and a pair of islands off the Grand Banks (St. Pierre and Miquelon). The English took control of Canada and all of New France east of the Mississippi. Wait a second – what happened to New France west of the Mississippi? Here the French resorted to a device they would use again to determine the fate of this territory – the secret treaty.
In 1762, realizing all was lost in North America but still in control of the western part of New France, the French king, Louis XV (as in the furniture), transferred sovereignty over the Louisiana Territory to Spain in a secret pact, the Treaty of Fontainebleau. (For a map, click HERE) The British were not informed of this arrangement when signing the Treaty of Paris in 1763, apparently believing that the area would remain under French control. On the other hand, in this 1763 treaty, the Spanish ceded the Florida Territory to the British. This was imperial real estate wheeling-and-dealing at an unparalleled scale; but to the Europeans the key element of this war was the Austrian attempt to recover Silesia from the Prussians (it failed); today Silesia is part of Poland.
How did the Louisiana Territory get from being part of Spain to being part of the U.S.? Again mystère.
The Spanish period in the Louisiana Territory was marked by struggles over Native American slavery and African slavery. With the Treaty of Paris of 1783, which ended the American War of Independence, the Florida Territory, which included the southern ends of Alabama and Mississippi, was returned to Spain by the British. For relations between the U.S. and Spain, the important issue became free navigation on the Mississippi River. Claims and counterclaims were made for decades. Eventually the Americans secured the right of navigation down the Mississippi. So goods could be freely shipped on the Father of Waters on barges and river boats and the cargo could still pass through New Orleans before being moved to ships for transport to further locations. This arrangement was formalized by a treaty in 1795, known as Pinckney’s Treaty, but one honored by the Spanish governor often in the breach.
The plot thickened in 1800 when France and Spain signed another secret treaty, the Third Treaty of San Ildefonso. This transferred control of the Louisiana Territory back to the French, i.e. to Napoléon.
Switching to the historical present for a paragraph, Napoléon’s goal is to re-establish New France in New Orleans and the rest of the Louisiana Territory. This ambition so frightens President Thomas Jefferson that, in a letter to Robert Livingston the ambassador to France, he expresses the fear that the U.S. will have to seek British protection if Napoleon does in fact take over New Orleans:
    “The day that France takes possession of New Orleans…we must marry ourselves to the British fleet and nation.”
This from the author of the Declaration of Independence!! So he instructs Livingston to try to purchase New Orleans and the surrounding area. This letter is dated April 18, 1802. Soon he sends James Monroe, a former ambassador to France who has just finished his term as Governor of Virginia, to work with Livingston on the negotiations.
The staging area for Napoleon’s scheme was to be Haiti. However, Haiti was the scene of a successful rebellion in the 1790’s against French rule led by Toussaint Louverture leading to the abolition of slavery in Haiti and the entire island of Hispaniola by 1801. Napoleon’s response was to send a force of 31,000 men to retake control. At first, this army managed to defeat the rebels under Louverture, to take him prisoner, and to re-establish slavery. Soon, however, the army was out-maneuvered by the skillful military tactics of the Haitians and it was decimated by yellow fever; finally, at the Battle of Vertières in 1803, the French force was defeated by an army under Jean-Jacques Dessalines, Louverture’s principal lieutenant.
With the defeat of the French in Haiti at the hands of an army of people of color, the negotiations in Paris over transportation in New Orleans turned suddenly into a deal for the whole of the Louisiana Territory – for $15 million. The Americans moved swiftly, possibly illegally and unconstitutionally, secured additional funding from the Barings Bank in London and overcame loud protests at home. The Louisiana Territory was formally ceded to the U.S. on Dec. 20, 1803.
Some numbers: the price of the Louisiana Purchase comes to less than 3 cents an acre; adjusted for inflation, this is $58 an acre today – a good investment indeed made by a man, Thomas Jefferson, whose own finances were always in disarray.
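That per-acre figure is easy to check with back-of-the-envelope arithmetic – using the commonly cited size of the purchase, about 828,000 square miles, which is an assumption not stated above:

```python
# Louisiana Purchase price per acre - a sanity check.
# The 828,000 square mile figure is the commonly cited area of the purchase
# (an assumption, not from the text above).
ACRES_PER_SQUARE_MILE = 640
area_acres = 828_000 * ACRES_PER_SQUARE_MILE   # about 530 million acres
price_per_acre = 15_000_000 / area_acres
print(round(price_per_acre, 3))                # 0.028 - under 3 cents an acre
```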
There is something here that is reminiscent of the novel Catch-22 and the machinations of the character Milo Minderbinder (Jon Voight in the movie): Barings Bank profited from this large transaction by providing funds for the Napoleonic regime at a point in time when England was once more at war with France! What makes the Barings Bank stunt more egregious is that Napoléon was planning to use the money for an invasion of England (which never did take place). But, war or no war, money was being made.
The story doesn’t quite end there. The British were not happy with these secret treaties and the American purchase of the Louisiana Territory, but they were too occupied by the Napoleonic Wars to act. However, their hand was forced with the outbreak of the War of 1812. At their most ambitious, the British war aims were to restore the part of New France in the U.S. that is east of the Mississippi to Canada and to gain control of the Louisiana Territory to the west of the Mississippi. To gain control of the area in question east of the Mississippi, forces from Canada joined with Tecumseh and other Native Americans; this strategy failed. With Napoléon’s exile to Elba, a British force was sent to attack New Orleans in December 1814 and to gain control of the Louisiana Territory. This led to the famous Battle of New Orleans, to the victory which made Andrew Jackson a national figure, and to that popular song by Johnny Horton. So this strategy failed too. It is only at this point that American sovereignty over the Louisiana Territory became unquestioned. It can be pointed out that the Treaty of Ghent to end this war had been signed before the battle; however, it was not ratified by the Senate until a full month after the battle and who knows what a vengeful and batty George III might not have done had the battle gone in his favor. It can be said that it was only with the conclusion of this war that the very existence of the U.S. and its sovereignty over vast territories were no longer threatened by European powers. The Monroe Doctrine soon followed.
The Haitians emerge as the heroes of this story. Their skill and valor forced the French to offer the entire Louisiana Territory to the Americans at a bargain price, and theirs was the second nation in the Americas to declare its independence from its European overlords – January 1, 1804. However, when Haiti declared its independence, Jefferson and the Congress refused to recognize their fellow republic and imposed a trade embargo, because they feared the Haitian example could lead to a slave revolt here. Since then, French and American interference in the nation’s political life has occurred repeatedly, rarely with benign intentions. And the treatment of Haitian immigrants in the U.S. today hardly reflects any debt of gratitude this nation might have.
The Haitian struggle against the French is the stuff of a Hollywood movie, what with heroic figures like Louverture, Dessalines and others, political intrigues, guerrilla campaigns, open-field battles, defeats and victories, and finally a new nation. Hollywood has never taken this on (although Danny Glover appears to be working on a project), but in the last decade there have been a French TV mini-series (since repackaged as a feature film) and other TV shows about this period in Haitian history.
The Barings Bank continued its financially successful ways. At one point, the Duc de Richelieu called it “the sixth great European power”; at another point, it actually helped the Americans carry out deals in Europe during the War of 1812, again proving that banks can be above laws and scruples. However, its comeuppance finally came in 1995. It was then the oldest investment bank in the City of London and banker to the Queen, but the wildly speculative trades of a star trader in the Singapore office forced the bank to fail; it was sold off to the Dutch bank ING for £1. The villain in the piece, Nick Leeson, was played by Ewan McGregor in the movie Rogue Trader.
In the end, Napoleon overplayed his hand in dealing with Spain. In 1808, he forced the abdication of the Spanish king Carlos IV and installed his own brother as “King of Spain and the Indies” in Madrid. This led to a long guerrilla war in Spain which relentlessly wore down the troops of the Grande Armée and which historians consider to have been the beginning of the end for the Little Corporal.


The modern world uses a number system built around 10, the decimal system. Things are counted and measured in tens and powers of ten. Thus we have 10 years in a decade, 100 years in a century and 1000 years in a millennium. On the other hand, the number 60 pops up in some interesting places; most notably, there are 60 minutes in an hour and 60 seconds in a minute. The only challenge to this setup came during the decimal-crazed French Revolution, which introduced a system with 10 hours in the day, 100 minutes in the hour and 100 seconds in the minute. Their decimal metric system (meters and kilos) prevailed but the decimal time system, despite its elegance, was soon abandoned. But why have there been 60 minutes in an hour and 60 seconds in a minute in a numerical world dominated by the number 10? And that for thousands of years. Mystère?
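The Revolutionary scheme is easy to play with: both systems simply carve the day into a fixed number of units – 86,400 ordinary seconds versus 100,000 decimal seconds – so converting between them is just a proportion. A small sketch in Python (the function name is ours, purely for illustration):

```python
# Convert conventional H:M:S into French Revolutionary decimal time.
# Both systems measure the same fraction of a day: 24*60*60 = 86,400
# ordinary seconds versus 10*100*100 = 100,000 decimal seconds.

def to_decimal_time(h, m, s):
    """Return (decimal hours, decimal minutes, decimal seconds)."""
    fraction_of_day = (h * 3600 + m * 60 + s) / 86_400
    total = round(fraction_of_day * 100_000)   # decimal seconds elapsed
    dh, rest = divmod(total, 10_000)           # 100 * 100 per decimal hour
    dm, ds = divmod(rest, 100)
    return dh, dm, ds

print(to_decimal_time(12, 0, 0))   # noon -> (5, 0, 0)
print(to_decimal_time(18, 0, 0))   # 6 p.m. -> (7, 50, 0)
```

Noon comes out as exactly 5 decimal hours, which shows the system’s elegance – and also why it failed: every existing clock face was wrong.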

The short answer is Babylon. In the ancient world, Babylon was renowned as a center of learning, culture, religion, commerce and riches. It gave us the code of Hammurabi and the phrase “an eye for an eye”; it was conquered by Cyrus the Great and Alexander the Great. The Israelites endured captivity there and two books of the Hebrew Bible (Ezekiel and Daniel) were written there. The Christian Bible warns us of the temptations of the Whore of Babylon. It was the Babylonian system of telling time that spread west and became the standard in the Mediterranean world: 24 hours in the day, 60 minutes in the hour, 60 seconds in the minute. So far, so good; but still, why did the Babylonians have 60 minutes in an hour? Mystère within a mystère.

Here the answer is “fractions,” a very difficult topic that throws a lot of kids in school but one that will not throw the intrepid sleuths getting to the solution of this mystery. The good news is that one-half of ten is 5 and that one-fifth of ten is 2; the bad news is that one-fourth of ten, one-third of ten and one-sixth of ten are all improper fractions: 5/2, 10/3, 5/3; same for three-fourths, two-thirds and five-sixths. A number system based on 60 is called a “sexagesimal” system. If you have a sexagesimal number system, you need different notations for the numbers from 1 through 59 rather than just 1 to 9, but it makes fractions much easier to work with – the numbers 1,2,3,4,5,6 are all divisors of 60 and so one-half, one-quarter, one-third, one-fifth, and one-sixth of 60 are whole quantities, nothing improper about them. This also applies to two-thirds, three-quarters and other common fractions.
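The arithmetic is easy to check by machine. A short Python sketch (purely illustrative) of why 60 is so much friendlier to common fractions than 10:

```python
from fractions import Fraction

# Which of the small numbers 2..6 divide the base evenly?
for base in (10, 60):
    whole = [n for n in range(2, 7) if base % n == 0]
    print(base, "is evenly divided by", whole)
# 10 is evenly divided by [2, 5]
# 60 is evenly divided by [2, 3, 4, 5, 6]

# The common fractions of 60 mentioned above, as exact values:
for f in (Fraction(1, 2), Fraction(1, 3), Fraction(1, 4),
          Fraction(2, 3), Fraction(3, 4)):
    print(f, "of 60 =", f * 60)   # every one comes out whole
```

Every fraction in the second loop yields a whole number – nothing improper about them – whereas in base 10 only halves and fifths behave so nicely.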

This practice of using a base different from 10 for a number system is alive and well in the computer era. For example, the base-16 hexadecimal system is used for the addresses of memory locations rather than the verbose binary number system that computers use for numerical computation. The hexadecimal system, which uses 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F as its sixteen “digits,” is also used to describe colors on web pages and you might come across something like <body bgcolor="#FF0000"> that is instructing the page to make the background color red.
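For readers who want to poke at this, here is a minimal Python sketch of the same hexadecimal bookkeeping, using the color string from the example above:

```python
# The hex color "FF0000" packs three base-16 bytes: red, green, blue.
color = "FF0000"
r = int(color[0:2], 16)   # "FF" -> 255
g = int(color[2:4], 16)   # "00" -> 0
b = int(color[4:6], 16)   # "00" -> 0
print(r, g, b)            # 255 0 0: full red, no green, no blue

# Going the other way: format 255 in hexadecimal and in binary.
print(f"{255:X}")         # FF
print(f"{255:b}")         # the "verbose" binary form: 11111111
```

Two hex digits replace eight binary ones – the whole reason programmers prefer base 16 for addresses and colors.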

Having a good system for fractions is especially important if you are measuring quantities of products or land area: thus a quarter of a pound of ham, two-thirds of an acre, … . The Babylonians of the period when the Book of Daniel was written did not invent the sexagesimal system out of whole cloth; rather they inherited it from the great Sumerian civilizations that preceded them. At the birth of modern civilization in Mesopotamia, Sumerian scribes introduced cuneiform writing (wedges and clay tablets) and then sexagesimal numbers for keeping track of accounts, fractions being important in transactions between merchants and buyers. At first, the notations for these systems would differ somewhat from city to city and also would differ depending on the thing being quantified. In the city of Uruk, 6000 years ago, there were at least twelve different sexagesimal number systems in use with differing names for the numbers from 1 to 60, each for working with different items: barley, malt, land, wheat, slaves and animals, beer, … . It is as though they were using French for one item and German for another; thus “cinq goats” and “fünf tomatoes.” What this illustrates is that it requires an insight to realize that the five in “cinq goats” is the same as the five in “fünf tomatoes.” Eventually, these systems became standardized and by Babylonian times only one system was in use.

The Sumerians gave us the saga, The Epic of Gilgamesh, whose account of the great flood is a brilliant forerunner of the version in Genesis. In fact, so renowned were the Sumerian cities that the Hebrew Bible tells us the patriarch Abraham (Richard Harris in the TV movie Abraham) came from the city of Ur, a city also known as Ur of the Chaldees; at God’s bidding, he left Ur and its pagan gods and moved west with his family and retinue to the Land of Canaan. The link to Ur serves as an homage on the part of the Israelite scribes to the Sumerians, one designed to ascribe illustrious origins to the founder of the three Abrahamic religions.

In addition to being pioneers in literature, agriculture, time-telling, accounting, etc., the Sumerians pushed beyond the boundaries of arithmetic into more abstract mathematics. They developed methods we still use today for solving quadratic equations (x² + bx = c) and their methods and techniques were imported by mathematicians of the Greco-Roman world. Moreover, very early on, the Sumerian scribes were familiar with the mighty Pythagorean Theorem: an ancient clay tablet, known as “YBC 7289,” shows that these scribes knew this famous and fundamental theorem at least two thousand years before Pythagoras. For a picture of the tablet, click HERE . The image shows a square with each side of length 1 and with a diagonal of length equal to the square root of 2, written in sexagesimal with cuneiform wedges (so this satisfies the Pythagorean formula a² + b² = c² in its most simple form 1 + 1 = 2). In his Elements, Euclid gives a proof of the Pythagorean Theorem based on axioms and postulates. We do not know whether the Sumerians thought of this remarkable discovery as a kind of theorem, as an empirical observation, as something intuitively clear, as just a clever aperçu or as something else entirely. This tablet was on exhibit in NYC at the Institute for the Study of the Ancient World in late 2010; for the mathematically sensitive, viewing the tablet is an epiphany.
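For the curious, the tablet’s famous digits can be reproduced in a few lines. The sketch below (ours, not the scribes’) peels off sexagesimal digits of the square root of 2 one at a time; the result, 1;24,51,10, is exactly the value carved on YBC 7289:

```python
import math

# Expand sqrt(2) in base 60: take the whole part, then repeatedly
# multiply the remainder by 60 to get each sexagesimal "digit" (0-59).
x = math.sqrt(2)
whole = int(x)            # 1
frac = x - whole
digits = []
for _ in range(3):
    frac *= 60
    d = int(frac)
    digits.append(d)
    frac -= d

print(whole, digits)      # 1 [24, 51, 10] -- the digits on YBC 7289
# Sanity check: 1 + 24/60 + 51/60**2 + 10/60**3 = 1.41421296...,
# agreeing with sqrt(2) to six decimal places.
```

Three sexagesimal places give six decimal places of accuracy – a remarkable computation for scribes working with wedges in clay.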

The Sumerians were also great astronomers – they invented the constellations we use today and the images associated with them – and their observations and techniques were used by geometers and astronomers from the Greco-Roman world such as Eratosthenes and Ptolemy. Indeed, the Babylonian practice of using sexagesimal numbers has persisted in geography and astronomy; so to this day, latitude and longitude are measured in degrees, minutes and seconds: thus a degree of north latitude is divided into 60 minutes and each minute is divided into 60 seconds. James Polk was elected the 11th president of the United States on the platform “Fifty-Four Forty or Fight,” meaning that he would take on the British and push the Oregon territory border with Canada up to 54°40′. (Polk wisely settled for 49°00′.)
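The degrees-minutes-seconds scheme is the same base-60 bookkeeping; a quick Python sketch (the function name is ours) converts Polk’s slogan into decimal degrees:

```python
# Degrees/minutes/seconds -> decimal degrees: the same base-60
# subdivision the Babylonians used for time.

def dms_to_degrees(deg, minutes, seconds=0):
    return deg + minutes / 60 + seconds / 3600

print(dms_to_degrees(54, 40))   # "Fifty-Four Forty": 54.666...
print(dms_to_degrees(49, 0))    # the border Polk settled for: 49.0
```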

The Sumerians were also great astrologers and soothsayers and it was they who invented the Zodiac that we still use today. If we think of the earth as the center of the universe, then of course it takes one year for the sun to revolve around the earth; as it revolves it follows a path across the constellations, each of which sits in an area called its “house.” As it leaves the constellation Leo and rises in front of the constellation Virgo, the sun is entering the house of Virgo. According to the horoscopes in this morning’s newspaper, the sun is in the house of Virgo from Aug 23 until Sept 22.

Recently, though, NASA scientists have noted that the Sumerian Zodiac we employ is based on observations of the relative movements of the sun, earth, planets and stars made a few thousand years ago. Things have changed since – the tilt of the earth’s axis is not the same, measurements have improved, calendars have been updated, etc.; the ad hoc Sumerian solution to keeping the number of signs the same as the number of months in the year no longer quite works. So the constellations are just not in the locations in the heavens prescribed by the ancients on the same days as in the current calendar; the sun might just not be in the house you think it’s in – if you were a Capricorn born in early January, you are now a Sagittarius. So far, the psychic world has pretty much ignored the implications of all this – people are just too attached to their signs.

It must be admitted that the NASA people did get carried away with the numbers and the science. They stipulated that there should actually be 13 signs, the new one being Ophiuchus, the Serpent Bearer (cf. Asclepius the Greek god of medicine and his snake-entwined staff); this is a constellation that was known to the Sumerians and Babylonians but one which they finessed out of the Zodiac to keep the number of signs at 12. Click HERE for a picture. However, it is a sign of the Zodiac of Vedic Astrology and, according to contemporary astrologer Tali Edut, “It’s a pretty sexy sign to be!”

But why insist on 12 months and 12 signs, you may ask. Again mystère. This time the solution lies in ancient Egypt. The Egyptians started with a calendar based on 12 lunar months of 28 days each, then moved to 12 months of 30 days each with 5 extra days inserted at the end of the year. This solved the problem encountered world-wide of synchronizing the solar and lunar years (the leap year was later added). And this 12 month year took root. We also owe the 24 hour day to the Egyptians, who divided the day into 12 day hours and 12 night hours; at the outset, the length of an hour would vary with the season and the time of the day so as to ensure that there were 12 hours of daylight and 12 hours of nighttime. The need for simplicity eventually prevailed and each hour became one-twenty-fourth of a day.

The number 12 is also handy when it comes to fractions and to this day it is the basis for many measuring systems: 12 donuts in a dozen, 12 dozen in a gross, 12 inches in a foot, 12 troy ounces in a troy pound. One that recently bit the dust, though, is 12 pence in a shilling; maybe Brexit will bring it back and make England Great Again.

Battle Creek

Battle Creek is a city of some 50,000 inhabitants in southwestern Michigan situated at the point where the Battle Creek River flows into the Kalamazoo River. The name Battle Creek traces back, according to local lore, to a skirmish between U.S. soldiers and Native Americans in the winter of 1823-1824.
At the beginning of the 20th century, Battle Creek was the Silicon Valley of the time: entrepreneurs, investors and workers poured in, companies were started, fortunes were made. As the local Daily Moon observed in 1902, “Today Battle Creek is more like a boom town in the oil region than a staid respectable milling center.” A new industry had taken over the town: “There are … large establishments some running day and night” and all were churning out the new product that launched this gold rush.
Even before the boom, Battle Creek was something of a manufacturing center producing such things as agricultural equipment and newspaper printing presses. But this was different. Battle Creek was now known as “Health City.” So what was this new miracle product? Mystère.
Well, it was dry breakfast cereal, corn flakes and all that. By 1902, more than 40 companies were making granulated or flaked cereal products with names like Vim, Korn-Krisp, Zest, X-Cel-O, Per-Fo, Flak-ota, Corn-O-Plenty, Malt-Too; each labeled as the perfect food.
And how did Battle Creek come to be the center of this cereal boom? Again mystère?
For this, things turn Biblical and one has to go back to the Book of Daniel and the Book of Revelation; the first is a prophetic book of the Hebrew Bible, the second a prophetic book of the New Testament. Together, the books offer clues as to the year when the Second Coming of Christ will take place. Belief in the imminence of this time is known as adventism. No less a figure than Isaac Newton devoted years to working on this. Newton came to the conclusion that it would be in the year 2060, although sometimes he thought it might be 2034 instead; superscientist though he was, Newton could have made a mistake – he also invested heavily in the South Sea Bubble.
The First Great Awakening was a Christian revival movement that swept England and the colonies in the 1700’s; among its leaders were John Wesley (of Methodism) and Jonathan Edwards (of “Sinners in the Hands of an Angry God”). During the Second Great Awakening, in the first part of the 19th century, the American adventist William Miller declared the year of the Second Coming to be 1844. When that passed uneventfully (an event known as the Great Disappointment), the Millerite movement splintered, but some regrouped and soldiered on. One such group became the Jehovah’s Witnesses. Another became the Seventh Day Adventists.
The “Seventh Day” refers to the fact that these Adventists moved the Sabbath back to its original day, the last day of the week, the day on which the Lord rested after the Creation, the day observed by the Biblical Israelites and by Jews today, viz., Saturday. The early Gentile Christians, after breaking off from their Jewish roots, moved their Sabbath to Sunday because that was the day of rest for the non-Jewish population of the Roman Empire; they likely also wanted to distance themselves from Jews after the revolts of 66-73 AD and 132-136 AD, uprisings against the power of Rome.
The Seventh Day Adventists do not claim to know the year of the Second Coming but do live in anticipation of it. Article 24 of their statement of faith concludes with
  “The almost complete fulfillment of most lines of prophecy, together with the present condition of the world, indicates that Christ’s coming is imminent. The time of that event has not been revealed, and we are therefore exhorted to be ready at all times.”
In 1863, in Battle Creek, the church was formally established and had 3,500 members at the outset. So the plot thickens: “Battle Creek” did you say?
Their system of beliefs and practices extends beyond adventism. They are basically pacifists and eschew combat. Unlike the Quakers, they leave the decision to serve in the Armed Forces to the individual and, typically, those who are in the military serve as medics or in other non-combatant roles. They are also vegetarians; they proscribe alcohol and tobacco – coffee and tea as well; they consider the health of the body to be necessary for the health of the spirit.
It is their interest in health that leads to a solution to our mystery. As early as 1863, Adventist prophetess Ellen White had visions about health and diet. She envisaged a “water cure and vegetation institution where a properly balanced, God-fearing course of treatments could be made available not only to Adventists, but to the public generally.” Among her supporters in this endeavor were her husband James White and John and Ann Kellogg. The plot thickens again: “Kellogg” did you say? The Kelloggs were Adventists to the point that they did not believe that their son John Harvey or their other children needed a formal education, given the imminence of the Second Coming. In 1866, the Western Health Reform Institute was opened in Battle Creek, realizing Ellen White’s vision. By then the Whites had taken an interest in the self-taught John Harvey Kellogg, which eventually led to their sending him for medical training with a view to having a medical doctor at the Institute. He finished his training at the NYU Medical College at Bellevue Hospital in New York City in 1875. In 1876, John Harvey Kellogg became the director of the Institute and he would lead this institution until his death in 1943. In 1878, he was joined by his younger brother Will Keith Kellogg, who worked on the business end of things.
John Harvey Kellogg threw himself into his work. He quickly changed the name to the Battle Creek Medical Surgical Sanitarium; he coined the term sanitarium, as distinct from sanatorium, to describe a place where one would learn to stay healthy. He described the Sanitarium’s system as “a composite physiologic method comprising hydrotherapy, phototherapy, thermotherapy, electrotherapy, mechanotherapy, dietetics, physical culture, cold-air cure, and health training.” Physical exercise was thus an important component of the system; somewhat inconsistently, sexual abstinence was also strongly encouraged as part of the program.
Kellogg’s methods could be daring, if not extreme; what web sites most remember him for is his enema machine that involved yogurt as well as water and ingestion through the mouth as well as through the anus.
Through all this, vegetarianism remained a principal component of the program. The Kelloggs continually experimented with ways of making vegetarian foods more palatable and more effective in achieving the goals of the Sanitarium. In 1894, serendipity struck: they were working to produce sheets of wheat when they left some cooked wheat unattended; when they came back, they continued processing it but produced flakes of wheat instead of sheets – these flakes could be toasted and served to guests at the Sanitarium. They filed for a patent in 1895 and it was issued in 1896. For an advertisement for Corn Flakes from 1919, click HERE .
John Harvey Kellogg showed this new process to patients at the Sanitarium. One guest, C. W. Post, grasped its commercial potential and started his own company, a company that became General Foods. See what we mean by Silicon Valley level rewards – “General Foods” was the Apple Computer of the processed food industry, with an industrial name modeled after General Electric and General Motors. (The name is gone today; the company eventually merged with Kraft.) Post’s first cereal product, in 1897, was Grape-Nuts – a name in which neither word is accurate, as the ingredients are wheat and barley; a later Post cereal, Elijah’s Manna, was rechristened Post Toasties. But the Gold Rush was on.
In 1906 Will Keith Kellogg founded the Battle Creek Toasted Corn Flake Company; this company was later named Kellogg’s and, to this day, it is headquartered in Battle Creek and continues to bless the world with its corn flakes and other dry breakfast cereals.
Through all this, the Sanitarium continued on and, in fact, prospered. Its renown spread very far and very wide and a remarkable set of patients spent time there. This list includes William Howard Taft, George Bernard Shaw, Roald Amundsen and Sojourner Truth.
Perhaps the simplest proof of the efficacy of the Kelloggs’ methods is that both brothers lived past 90. For another proof, let us go from the 19th and 20th centuries to the 21st century and let us move from Battle Creek Michigan to Loma Linda California.
Loma Linda is the only place in the U.S. that made it onto the list of Blue Zones – places in the world where people have exceptionally long life spans. (The other places are in Sardinia, Okinawa, Greece and Costa Rica). The reason is that this California area has a large population of Seventh Day Adventists and they live a decade longer than the rest of us. They follow the principles of their early coreligionists: a vegetarian diet, physical exercise, no alcohol, no tobacco – to which is added that sense of community and purpose in life that their shared special beliefs bring to them.
Playful Postscript:
For the unserious among us, much of the goings-on at the Sanitarium could be the stuff of high comedy. In fact, it inspired the author T. Coraghessan Boyle to write a somewhat zany novel The Road to Wellville which was later made into a movie. The characters have fictitious names except for Dr. John Harvey Kellogg (played by Anthony Hopkins in the movie). The title itself comes from the booklet written by C.W. Post which used to be given out with each box of Grape-Nuts Flakes.
Tragic Postscript:
In the 1930’s a group broke off from the Seventh Day Adventist church and eventually became known as the Davidian Seventh Day Adventists. In turn, in the 1950’s there was a split among them and a new group, the Branch Davidians, was formed; so we are at two removes from the church founded in Battle Creek. In 1982, Vernon Wayne Howell, then 22 years old, moved to Waco, Texas, and joined the Branch Davidians there; he subsequently changed his name to David Koresh: Koresh is the Hebrew ( כֹּרֶשׁ ) for Cyrus (the Great), the Persian king who is referenced in the Book of Daniel and elsewhere in the Hebrew Bible; Cyrus is the only non-Jew to be named a Messiah (a person chosen by God for a mission) in the Hebrew Bible (Isaiah 45:1) and it was he who liberated the Jews from the Babylonian Captivity. As David Koresh, Howell eventually took over the Waco Branch Davidian compound and turned it into a nightmarish cult. There followed the horrifying assault in 1993 and the deaths of so many. To make an unimaginable horror yet worse, this assault has become a rallying cry for paranoid militia people – its second anniversary was a motivation for Timothy McVeigh and Terry Nichols when they planned the Oklahoma City bombing.


Ernest Hemingway famously favored Anglo-Saxon words and phrases over Latin or French ones: thus “tell” and not “inform.”  Scholars, critics and Nobel Prize committees have analyzed passages such as
“What do you have to eat?” the boy asked.
“A pot of yellow rice with fish. Do you want some?”
“No, I will eat at home; do you want me to make the fire?”
“No, I will make it later on, or I may eat the rice cold.”
Not a single word from French or Latin; not a single subordinate clause, no indirect discourse, no adverbs, … .
Some trace this aspect of his style to Hemingway’s association with Gertrude Stein, Ezra Pound, Djuna Barnes and modernist writers. Others trace it to his first job as a cub reporter for the Kansas City Star and the newspaper’s style guide:  “Use short sentences. Use short first paragraphs. Use vigorous English. Be positive, not negative.”
However that may be, Hemingway was true to his code and he set a standard for American writing. Still, it is often impossible to say whether a word or phrase is Anglo-Saxon or not. For example, the word sound has four meanings, each derived from a different language: “sound” as in “Long Island Sound” (Norse), as in “of sound mind” (German), as in “sound the depths of the sea” (French), as in “sound of my voice” (Latin). To add to the confusion, the fell in “one fell swoop” is from the Norman French (same root as felon) though nothing sounds more Anglo-Saxon than fell. Among the synonyms pigeon and dove, it is the former that is French and the latter that is Anglo-Saxon. Nothing sounds more Germanic than skiff but it comes from the French esquif. So where can you find 100% Anglo-Saxon words lurking about? Mystère.
Trades are a good source of Anglo-Saxon words — baker, miller, driver, smith, shoe maker, sawyer, wainwright, wheelwright, millwright, shipwright; playwright doesn’t count, it’s a playful coinage from the early 17th century introduced by Ben Jonson. Barnyard animals also tend to have Old English origins — cow, horse, sheep, goat, chicken, lamb, … . Body parts too — foot, arm, leg, eye, ear, nose, throat, head, … .
Professions tend to have Latin or French names — doctor, dentist, professor, scientist, accountant, … ; teacher, lawyer, writer and singer are exceptions though.
Military terms are not a good source at all; they are relentlessly French — general, colonel, lieutenant, sergeant, corporal, private, magazine, platoon, regiment, bivouac, caisson, soldier, army, admiral, ensign, marine and on and on.
However, there is a rich trove of Anglo-Saxon words to be found in the calendars of the Anglican and Catholic Churches. The 40 days of Lent begin on Ash Wednesday and the last day before Lent is Mardi Gras. Lent and Ash Wednesday are Anglo-Saxon in origin but Mardi Gras is a French term. Literally, Mardi Gras means Fat Tuesday, and the Tuesday before Lent is now often called Fat Tuesday in the U.S. But there is a legitimate, traditional Anglo-Saxon name for it, namely Shrove Tuesday. Here shrove refers to the sacrament of Confession and the need to have a clean slate going into Lent; the word shrove is derived from the verb shrive which means “to hear confession.” The expression “short shrift” is also derived from this root: a short confession was given to prisoners about to be executed. Another genuinely English version of Mardi Gras is Pancake Tuesday, which, like Mardi Gras, captures the fact that the faithful need to fatten up before the fasting of Lent. Raised Episcopalian and strongly attracted to the ritual and pageantry of Catholicism, Hemingway was in fact listed as a Catholic for his second marriage, the one to Pauline Pfeiffer. Still Hemingway never got to use the high church Anglo-Saxon term Shrove Tuesday (or even Pancake Tuesday) in his writings. On the other hand, Hemingway always wrote “Holy Ghost” and never would have cottoned to the recent shift to the Latinate “Holy Spirit.”
Staying with high church Christianity — Lent goes on for forty days until Easter Sunday; the period of Eastertide begins with Palm Sunday which celebrates Christ’s triumphant entry into Jerusalem for the week of Passover. The Last Supper was a Passover Seder dinner. Thus, in Italian, for example, the word for Passover and the word for Easter are the same; if the context is not clear, one can distinguish them as “Pasqua” and “Pasqua Ebraica.” Something similar applies in French and Spanish. So how did English usage come to be so different? Mystère.
Simply put, Easter Sunday is named for a Pagan goddess. In the Middle Ages, the author of the first history of the English people, the Venerable Bede, wrote that the Christians in England adopted the name the local Pagans were giving to a holiday in honor of their goddess of the spring Ēastre – you cannot get more Anglo-Saxon than that.
In addition to Eastertide, the Anglo-Saxon root tide is also used for other Christian holiday periods – Whitsuntide (Pentecost), Yuletide (Christmas), … . This venerable meaning of tide as a period of time is also the one that figures in the expression
    Neither tide nor time waits for no man.
Nothing to do with maritime tides whether high, low or neap. Hemingway would likely have avoided this phrase, though, because of its awkward, archaic double negative.
Sometimes French words serve to protect us from the brutal frankness of the Anglo-Saxon. Classic examples of this are beef and pork which are derived from the French boeuf and porc. When studying a German menu, it is always disconcerting to have to choose between “cow flesh” and “pig flesh.” Even Hemingway would have to agree.
An area where Hemingway is on more solid ground is that of grammar. The structure of English is basically Germanic. The Norman period introduced a large vocabulary of French and Latin words, but French had very little influence on English grammar. The basic reason for this is that Norman French used Latin as the language for the administration of the country; thus the Domesday Book and the Magna Carta were written in Latin. Since French was the language of the court, however, English legal vocabulary to this day employs multiple French words and phrases such as the splendid “Oyez, Oyez.”
While the grammar of English can be classified as Germanic, there are some key structural elements that are Celtic in origin. An important example is the “useless do” as in “do you have a pen?”  English is one of the few languages that inserts an extra verb, in this case do, to formulate a question; typically in other languages, one says “have you a pen?” or “you have a pen?” with a questioning tone of voice.  Another Celtic import is the progressive tense as in “I am going to the store,” which can express a mild future tense or a description of current activity. This progressive tense is an especial challenge for students in ESL courses.
Norse invaders from Denmark have also contributed to English structure – for example, there is the remarkably simple way English has of conjugating verbs: contrast the monotony of “I love, you love, it loves, we love, you love, they love” with other languages.
When languages collide like this, one almost invariably emerges as the “winner” in terms of structure with the others’ contributing varying amounts of vocabulary and with their speakers’ influencing the pronunciation and music of the language. The English language that emerged finally at the time of Chaucer was at its base Anglo-Saxon but it had structural adaptations from Celtic and Norse languages as well as a vast vocabulary imported whole from Latin and French. This new language appeared suddenly on the scene in the sense that during all this time it wasn’t a language with a literature like the other languages of Britain – Old English, Latin, French, Welsh, Cornish, … ; so it just simmered for centuries but eventually the actual spoken language of the diverse population forced its way to the surface.
Hemingway himself spent many years in places where romance languages are spoken – Paris, Madrid, Havana, … . Maybe this helped insulate him from the galloping changes in American and English speech and writing and let him ply his craft in relative peace. How else could he have ended a tragic wartime love story with a sentence so perfect but so matter-of-fact as “After a while I went out and left the hospital and walked back to the hotel in the rain.”


Ireland is known as the Land of Saints and Scholars, as the Emerald Isle, as the Old Sod … . For all its faults, it is the only place in Europe that has never militarily invaded a neighboring or distant land. That doesn’t mean that the Irish weren’t always making war on one another. Still, Ireland itself has been invaded multiple times. But by whom and why? Mystère.
Part I: From Romans to Normans
First there was the invasion that failed to take place, invasion by the Romans. Julius Caesar invaded Britain (Britannia to the Romans, Land of the Celts) not once but twice (55 B.C. and 54 B.C.). However, though the last Roman legion didn’t leave Britain until 404 A.D., the Romans never undertook an invasion of Ireland. Why not? Well, their name for it, Hibernia, meant “land of winter” which was perhaps reason enough to stay away; cf. the verb “hibernate.” This meant that at the time of the fall of the Roman Empire, Ireland had not been Romanized and Hellenized the way Gaul and other Celtic lands west of the Rhine or south of the Danube had been.
The first invasion from the East was by Christian missionaries from Britain who came with the purpose of converting the Irish, no mystery there – and by the mid 4th century, the Church had a foothold in Ireland. A major contribution of the Church was the introduction of Latin, the lingua franca of Europe, and even Greek, tools which would at last bring the literature and learning of the Greco-Roman world to Ireland.
In fact, a notable figure in Church History emerged at this time: Pelagius hailed from Ireland (as St. Jerome thought) or Britain (as others think); he was very well educated in both Latin and Greek and, around 380 A.D., went off to Rome to ply the trade of theologian. Pelagius opposed St. Augustine and his grim views on the fall of mankind and predestination (cf. Calvin); instead Pelagius rejected the doctrine of original sin and preached the need for the individual freely to find his own way to God and salvation – quite a modern viewpoint, really. Unfortunately, Augustine won out and became a saint and a city in Florida, while Pelagius was branded a heretic. However, Pelagius’ writings continued to be cited in the Irish Church during the Middle Ages, which points to a spirit of independence and which raises questions about the Church’s adherence to official Vatican teaching during that period.
In the 5th century, the pope in Rome took an interest in Ireland and a Gaul named Palladius was sent to be the first bishop of the Irish in 431. Patricius, a Brittano-Roman, the man known to us as St. Patrick, was sent the following year. It is tempting to think (and some do) that Palladius and Patricius were sent to combat the Pelagian heresy and shore up orthodoxy in the Irish church, orthodoxy being an issue that will come up yet again. Though this version of events does detract somewhat from the glory of St. Patrick, it in no way suggests that he did not drive the snakes out of Ireland.
While the rest of Europe including Britain was plunged into the dark ages and subject to barbarian invasions, the relatively peaceful land of Ireland became a center of monasticism and learning – as deftly retold with scholarship and flair by Thomas Cahill in How the Irish Saved Civilization. From Ireland the brave souls of the Hiberno-Scottish Mission went to Scotland, St Kenneth among them, and later on to Anglo-Saxon Britain. At one point, Pope Gregory the Great did send a mission from Rome as well to the South of England. The remarkable thing, as Cahill points out, is that the conversion of the Germanic tribes of Northern Europe that followed was done by missionaries from the British Isles and not from Rome. Thus, for example, the Apostle of the Germans, St Boniface, was from Devon in England and his given name was Winfrid, Bonifatius being his Latin nom de guerre. The integration of the Anglo-Saxons and other German tribes into Christendom would prove critical in history; in particular, the changes wrought in the social structure of the people of modern day England, Holland, Germany and elsewhere converged with the embryonic European capitalism of the late Middle Ages – and the rest is history: see The WEIRDest People in the World by Harvard Prof. Joseph Henrich.
[Full disclosure: both that Thomas and Publius himself are alumni of Regis High School, Class of 1958]
By the year 800, however, Viking invaders began to arrive on Irish shores. One of their goals was to plunder monasteries, something they proved uncommonly skilled at; another more constructive goal was to set up settlements in Ireland. They founded towns that still bear Nordic names, such as Wexford and Waterford of crystal fame. These groups eventually merged with the native population and are known to historians as the Norse-Irish. In fact, some classic Irish surnames trace back to the Viking invasions, e.g. MacAuliffe (Son of Olaf) and MacManus (Son of Magnus).
The year 1066 is remembered for the Norman invasion of England under William of Normandy (Nick Brimble in The Conquerors TV series). The year 1171 is remembered for the Norman invasion of Ireland under Henry II (Peter O’Toole in both The Lion in Winter and Becket). The Normans declared Ireland to be under the rule of the English king, established a feudal system of fiefdoms, built castles, signed treaties, broke treaties, etc. Actually, Henry II already ruled over vast holdings; as Roi d’Angleterre, Duc d’Anjou and Duc de Normandie, he inherited England and large areas in France. And then thanks to his marriage to the most powerful woman in Europe, Aliénor d’Aquitaine (Katharine Hepburn in The Lion in Winter, Pamela Brown in Becket), his domains were expanded to include southwestern France. All this was called the Empire Angevin. So, land rich already, why did Henry need to launch this invasion? Mystère.
For one thing, Henry was instructed by Pope Adrian IV by means of a formal papal edict Laudabiliter to invade and govern Ireland; the goal was to re-enforce papal authority over the too autonomous Irish Church and to restore orthodoxy. Adrian was the only Englishman ever to be Supreme Pontiff; his being English does suggest that his motives in this affair were “complex.” For another thing, the deposed King of the Irish Province of Leinster, Diarmait Mac Murchada, had come to England in 1166 to seek Henry’s help in winning back his realm and, surely, solidarity among kings was motivation enough! Perhaps though, it was simply to give Henry’s barons more land to lord over – remember it was his son King John, who would be forced to sign the Magna Carta by angry barons at Runnymede in 1215. In any case, by the late 15th century the area under English control had shrunk to the region around Dublin known as the English Pale. The term pale is derived from an old French word for stake and basically means a stockade; by extension it means any defended, delimited area. The expression “beyond the pale” comes from the fact that it was dangerous for the English to venture beyond that area.
Interestingly, a side-effect of the Norman invasions of Ireland was the spread of the patronymic “Fitz” which derives from the French “fils de” meaning “son of” – the equivalent of “Mac” in Gaelic. Thus we have the Irish names FitzGerald, FitzSimmons, FitzPatrick, etc. (The name FitzRoy, meaning “natural son of the king,” is not Irish as such but rather Anglo-Norman in origin.)
One more thing: in the 14th century, there was the brief and unsuccessful incursion into Ireland by Edward, the brother of Robert the Bruce (Angus Macfadyen in Braveheart, Sandy Welch in The Bruce). This was a Scottish attempt to create a second front in their own struggle against the Anglo-Normans by attacking the English Pale.


Part II: From Tudors to Modern Times
The Tudor invasions of Ireland under Henry VIII and Elizabeth I broke through the English Pale and brought the entire island under English rule, but not necessarily under English control as rebellions constantly broke out. However, the process of extirpation of Gaelic culture, language and laws was begun and the system of plantations and settlements took more and more land away from the Irish population. This was now colonialism in the full modern sense of the term. But why was this so important to the British crown – mystère?
For one thing, the English gained control over the oak forests (once dear to the Druids) that covered the island; the deforestation of Ireland provided the British merchant fleet and the Royal Navy with the timber needed for Britannia to rule the waves. The conquest of Ireland was the first step in the creation of the British Empire, the settlements in Virginia and New England being next steps that followed quickly.
This is not a “feel good” story. Rebellion and brutal repression are a constant theme. Catholicism became bound up with nationalism for the Irish and served as a wedge used by the British to isolate and subjugate the population. In the early 17th century, King James I introduced laws and regulations that flagrantly favored Protestants and penalized Catholics. Things only worsened with the infamously brutal military campaign waged in Ireland by Oliver Cromwell in mid-century.
Interestingly, during the wars in Ireland in the 17th century, Irish earls and soldiers retreated to France to serve in the French army and continue the struggle against the British. And that is how the great Bordeaux wine, Château Lynch-Bages, the fine cognac, Hennessy, and the stately Avenue MacMahon in Paris all got those Irish names – another mystery solved en passant.
With the overthrow of the last Catholic English King James II and the installation of William and Mary on the throne in 1689, there began the period of the Protestant Ascendancy, a ruling clique of the right kind of Protestant (no Presbyterians!) that was in control of Ireland for over two hundred years. The system lasted into the 20th century. We pass over in silence the malevolence that this regime and Robert Peel’s government in London showed toward the population of Ireland during the Great Famine of the 1840’s.
There was a long-standing literary tradition in Ireland going back to the pre-Christian era and the prose epic Táin Bó Cúailnge (The Táin), click HERE. As English took root, Irish prose and poetry followed, eventually leading to eminent writers both Protestant (Oscar Wilde) and Catholic (James Joyce). In fact, for writers like Joyce it was an imperative to show that Irish authors could bring the English language to new heights.
After WWI, Ireland was partitioned into the Irish Free State and the British province of Northern Ireland which remained part of the U.K.; the Free State ended with the new constitution of 1937, and the state formally became the Republic of Ireland in 1949. Relations between Ireland and the UK remained difficult during the decades of “The Troubles,” but then there was the combination of the Good Friday accords in the 1990’s and common membership in the European Union which brought about peace – Brexit threatens to jeopardize this delicate balance.
Though it has played a role in world history, Hibernia is not that large or populous a land. To put things in perspective, the population was about 3 million in 1800 and it is just over 5 million today.
During the Protestant Ascendancy and until recently, millions of Irish emigrated to the United States and to places throughout the British Empire, including England and Scotland themselves. Their descendants and the Irish in Ireland have found a way of coping in an Anglo-Saxon world, as full-fledged citizens of the countries they live in. So in the diaspora, we have John Lennon and Paul McCartney, we have Georgia O’Keeffe and Margaret Sanger née Higgins, … . The Irish Republic itself has emerged as a very modern European state and a force in industry, culture and politics – separation of Church and State, hi-tech operations, musician/activist Paul David Hewson (aka Bono) and so on. Ulster can claim Nobel Prize winning poet Seamus Heaney and John Stewart Bell, the physicist and author of the revolutionary “Bell’s Theorem” for which he would most certainly have received a Nobel Prize had he not died unexpectedly in 1990.
But, as sociologists have noted, once a population is conquered and suppressed, they are blamed for their own history and a prejudice against them can linger on, however subtly. For example, in the English language today, almost all the commonly used terms that are Irish in origin have a negative connotation of mischief or worse: blarney, malarkey, shenanigans, paddy wagon, hooligan, limerick, donnybrook, leprechaun, shillelagh, phony, banshee. The only exceptions that come to mind are colleen and shamrock. However, sociologists have also noted that in the U.S. it is an advantage to have an Irish name when running for political office – thereby providing one solution to the mystery of what’s in a name.


Countries are sometimes named for tribes (France, England, Poland), sometimes for rivers (India, Niger, Congo, Zambia), sometimes for a crop (Malta, honey), sometimes for a city in Italy (Venezuela, Venice), sometimes for the name Marco Polo brought back from China (Japan), sometimes for a geographic feature (Montenegro), sometimes with a portmanteau word (Tanzania = Tanganyika + Zanzibar).
On the other hand, there are countries named for an actual historical personage, some twenty-six at last count, and many of these trace back to the European voyages of discovery.
Several island nations are thus named for Christian saints: Sao Tome e Principe, St Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, the Dominican Republic.
Voyages out in the Indian Ocean and the Western Pacific Ocean led to other countries being named for Europeans. The island nations of Mauritius and the Seychelles are named for a Dutch political figure (Prince Maurice van Nassau) and a French Minister of Finance (Jean Moreau de Séchelles), respectively. The archipelago of the Philippines is named for King Philip II of Spain.
In the Americas, there are four mainland countries named as a result of those voyages and named for actual people: Bolivia (Simon Bolivar), Colombia (Christopher Columbus), El Salvador (Jesus the Messiah) and, of course, the United States of America (Amerigo Vespucci).
Amerigo Vespucci – who dat? How did it come to pass that the two continents of the New World are named for a Florentine intellectual and not for Christoforo Colombo, the hard-working, dead-reckoning, God-fearing mariner from Genoa? Mystère.
To start, Columbus believed that he had reached the Indies, islands off the east coast of Asia. In elementary school, we all learn that Columbus believed the world was round and that gave him the courage to strike out west in search of a route to the Indies. But Columbus wasn’t alone in this belief; it was widespread among mariners at the time and even went back to the ancient Greeks. The mathematician Eratosthenes of Alexandria, known as “the Father of Geography,” came up with an ingenious way of measuring the earth’s circumference and gave a remarkably good estimate. Later, in the Middle Ages, scholars in Baghdad improved on Eratosthenes’ result; that work was included in a treatise by the geographer Alfraganus that dates from 833 A.D., and this estimate is referenced in Imago Mundi, a Latin text Columbus had access to. However, a kind of Murphy’s Law intervened; at some point in the chain of texts and translations, due presumably to a clash between the longer Arabic mile (7091 ft.) and the shorter Roman mile (4846 ft.), confusion arose. The self-taught Columbus was thus led to believe that the world was much smaller than it really was; in 1492 he was truly convinced that he had reached the outskirts of Asia. In fact, scholars argue that Columbus would never have undertaken his voyage west had he not thought the route to Asia was much shorter than it is. It is likely that, to the end, Columbus held firm that he had reached the Indies; in any case, he certainly didn’t say otherwise until it was too late.
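The arithmetic of the mix-up can be sketched in a few lines of Python. The two mile lengths are those given above; the circumference figure of 56⅔ miles per degree of latitude (hence 20,400 miles) is Alfraganus’ commonly cited value, used here for illustration rather than taken from the text:

```python
# Alfraganus' circumference: 56 2/3 miles per degree of latitude.
# The question is: in which mile? (The 56 2/3 figure is the commonly
# cited one; the two mile lengths are those given in the text.)
ARABIC_MILE_FT = 7091    # the longer Arabic mile, in feet
ROMAN_MILE_FT = 4846     # the shorter Roman mile, in feet
STATUTE_MILE_FT = 5280   # modern statute mile, for comparison

circumference_miles = 360 * (56 + 2 / 3)   # 20,400 "miles"

as_arabic = circumference_miles * ARABIC_MILE_FT / STATUTE_MILE_FT
as_roman = circumference_miles * ROMAN_MILE_FT / STATUTE_MILE_FT

print(f"Read as Arabic miles: {as_arabic:,.0f} statute miles")
print(f"Read as Roman miles:  {as_roman:,.0f} statute miles")
print(f"Shrinkage: {1 - ROMAN_MILE_FT / ARABIC_MILE_FT:.0%}")
```

Reading the same number in the shorter Roman mile shrinks the globe by nearly a third, which is how a self-taught navigator could convince himself that Asia lay within sailing distance.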
Enter a young, well-connected Florentine. Amerigo Vespucci was working in the 1490’s in Spain for the Medici banking empire. After news of Columbus’ first voyage reached Europe, many navigators sailed west from Europe to report back to crowns and banks on the possibility of new riches. Among them was John Cabot (aka Giovanni Caboto), himself an agent of the Medici bank in England, who captained a voyage in 1497 that reached the Canadian mainland. Vespucci, for his part, participated in several of these voyages of discovery out of Portugal and Spain. During this time, he sent a letter to his onetime classmate Lorenzo di Pierfrancesco de’ Medici in Florence detailing some of his adventures. This letter was translated into Latin, the lingua franca of Europe, and given the title Mundus Novus. It was published in Florence in 1502 (or early 1503); the letter went viral and was translated and reprinted throughout Europe. It states plainly that Vespucci had seen a New World:
“… in those southern parts I have found a continent more densely peopled and abounding in animals than our Europe or Asia or Africa …”
He describes encounters with indigenous peoples along the east coast of today’s South America and recounts his travels all the way down to Argentina. Vespucci’s proclamation is to be contrasted with Columbus’ assurance that he had reached the Indies themselves. A second text, known as Lettera al Soderini was published shortly after in Italian and it too was translated and read all over Europe. Scholars continue to debate, however, whether the published texts, especially this second one, were the actual letters of Vespucci or exaggerated accounts written up by others based on his letters.
Enter a young, brilliant German mapmaker, Martin Waldseemüller, who held a position at the cartography school founded by the Duke of Lorraine at Saint-Dié-des-Vosges in Eastern France. Waldseemüller headed up a team to produce a map of the world that would take the latest discoveries into account. Inspired by Vespucci’s letters, he and his Alsatian colleague Matthias Ringmann boldly named most of what is now South America in honor of Vespucci, labeling it America. To that end, they took the Latin form of Amerigo, which is Americus, and made it feminine to accord with the Latin feminine nouns Europa, Africa and Asia.
Click HERE for the map, published in 1507, that introduced America to the world. Just think though, the name Amerigo is not Latin in origin but is derived from the Gothic name Heinrich. Waldseemüller and Ringmann were both German speakers. So had Waldseemüller and Ringmann resorted to their native German rather than Latin, we would be living in the United States of Heinrich-Land. That was close, wasn’t it?
Waldseemüller was less bold in the maps he made a bit later in 1513, labeling the area he had called “America” simply as “Terra Incognita,” as he had likely been criticized for the bold stroke of 1507. He also added the information that the new discoveries were due to Columbus of Genoa on behalf of the monarchs of Spain. The myth of Queen Isabella pawning her jewels to finance Columbus serves to establish the primacy of the monarchs in underwriting the early voyages of discovery while in fact it was more the Medici and other banks that funded the explorations; in Columbus’ case it was the financier Luis de Santángel – a converso, by which is meant a Jew who converted to Christianity (this was the time of the Inquisition). Later voyages were financed by the companies themselves, such as the Dutch East India Company and the Massachusetts Bay Company.
Vespucci, for his part, went on to an important career in Spain where he was appointed to the position of Pilot Major of the Indies, in charge of voyages of discovery (piloto mayor de Indias).
Enter a young, brilliant Flemish mapmaker with a mathematical orientation. Gerardus Mercator’s world map of 1538 depicted the two new world continents as distinct from Asia and labeled them North America and South America. This stuck. Later in 1569, he introduced the Mercator Projection which was a major boon to navigation. The disadvantage of this projection is that land masses towards the poles look too big, e.g. Greenland. The great advantage is that it gets compass directions and angles right. In sailor-speak, a straight line on the projection is a “rhumb line” at sea. So if the flat map says sail directly North West, use your compass to find which direction is North West and you’ll be pointed the right way. This is an example of craft anticipating science. Mathematicians had to work hard to get a proper understanding of the geometric insights underlying the Mercator Projection; in math-speak, this kind of projection is called a “conformal mapping.” Conformal mappings have important applications still today in fields such as the General Theory of Relativity – something about “flattening out space-time.” The upshot is that Mercator’s authority helped establish North America and South America as the names of the newly discovered continents.
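The rhumb-line property can even be checked numerically. Here is a minimal sketch, assuming a spherical Earth: project points with the standard Mercator formula y = ln tan(π/4 + φ/2), trace a course of constant compass bearing on the sphere, and verify that its image on the chart is a straight line (the function and variable names are illustrative):

```python
import math

def mercator(lat_deg, lon_deg):
    """Standard Mercator projection on the unit sphere:
    x = longitude, y = ln tan(pi/4 + latitude/2), both in radians."""
    phi = math.radians(lat_deg)
    return math.radians(lon_deg), math.log(math.tan(math.pi / 4 + phi / 2))

# Trace a rhumb line: start on the equator and hold a constant bearing of
# 45 degrees (north-east), stepping a small arc length at a time.
bearing = math.radians(45)
lat, lon = 0.0, 0.0
points = [mercator(lat, lon)]
for _ in range(150):
    phi = math.radians(lat)
    lat += math.degrees(0.01 * math.cos(bearing))                  # d(lat) = ds * cos(bearing)
    lon += math.degrees(0.01 * math.sin(bearing) / math.cos(phi))  # d(lon) = ds * sin(bearing) / cos(lat)
    points.append(mercator(lat, lon))

# On the chart, the constant-bearing track is a straight line: the slope
# between consecutive projected points stays (numerically) constant at
# cot(bearing), i.e. close to 1.0 for a 45-degree course.
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(points, points[1:])]
print(min(slopes), max(slopes))
```

This is the conformal property in action: the projection distorts sizes toward the poles, but compass directions on the chart match compass directions at sea.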


The world is divided into innumerable islands and seven continents. These seven have Latinate names which were created various ways and which then were adopted and perpetuated by European mapmakers.

So, to start, how did the names of the three continents known to the ancient Western World come down to us: Europe, Africa and Asia? Are they autochthonous, pre-Hellenic goddesses, as Athena is to Athens? Mystère.

To begin with Europe itself, according to myth Europa was a Phoenician princess, the daughter of the King of Sidon. Disguised as a white bull, Zeus managed to whisk her off to Crete where he seduced her and then made her Queen of the island. Her name was first used as the name of the island and then came to designate the Greek speaking world and eventually the whole of Europe. She had three sons by Zeus. One was Minos of Labyrinth and Minotaur fame (the bull motif being important in the Minoan culture of Crete); one was Sarpedon, whose descendant of the same name is a heroic warrior in the Iliad slain by Patroclus; the third was Rhadamanthus, who became a judge of the dead, in charge (according to Vergil) of punishing the unworthy.

The origin of the name Africa is subject to scholarly debate. The simplest theory is that it came from the name of a North African people known to the Carthaginians, themselves colonists from Phoenicia; from Carthage the name made its way into the Greco-Roman world.

The name Asia comes from Herodotus, the early Greek traveler and historian who in turn took it from the Hittite name Assuwa which simply designated the east bank of the Aegean Sea; in Herodotus’ usage, it meant the land to the east of Greece and Egypt, notably Persia. It eventually came to mean the eastern part of the Eurasian land mass.

By the way, autochthonous was the winning word at the 2004 National Spelling Bee.

The four remaining continents were not known to the ancients but were charted and named as a result of voyages of discovery and mapmaking from the 1490’s to the 1890’s, four hundred long years.

First, how is it that North America and South America are not named for Christopher Columbus? Truly a mystère.

The continents of North and South America take their names from the feminine form America of the Latin version Americus of the first name of Amerigo Vespucci. Vespucci famously wrote “I have found a continent” in a letter that was published in 1502 as a tract entitled Mundus Novus in Latin. The appellation for South America was used as early as 1507 by the mapmaker Martin Waldseemüller and was later employed by cartographers for both continents, notably by Gerardus Mercator (of Mercator Projection fame).

This leads us to the question of how that next magnificent continent unknown to the European Old World got its name, viz. Australia. Was this the name the indigenous people gave to the land? Mystère.

In the ancient world, there was a widespread belief that there must be masses of land south of the equator to balance all that land north of the equator. If you think of the world as flat, this makes sense – something has to keep it from tipping over. So in some sense these lands were “known” to the Greeks, Romans and later Europeans. In Latin borealis means northern and we have the aurora borealis (aka the Northern Lights); in Latin, australis means southern and in the Southern Hemisphere, we have the aurora australis (aka the Southern Lights). This belief in undiscovered southern lands persisted into the modern era and was promulgated by eminent mapmakers such as Mercator: in his famous globes and projections, the Southern Pacific contains a land mass labeled Terra Australis Incognita. Here is a link to a sample map of the era with a huge land mass in the south.

That there was, in fact, “new” land down under in the southern part of the Pacific was known to Western mariners and explorers from the 1500’s on. And, of course, the aboriginal Australians had been there for some 50,000 years making their culture the longest continuously lived culture in the world, by a lot. Portuguese explorers most likely reached Australia but it was on the Spanish side of the Line of Demarcation, that imaginary line drawn by the Pope in 1493 (and revised in 1494) that divided the world between the two pioneering imperialist powers, Spain and Portugal. In the early 1600’s, the Dutch explorer Willem Janszoon encountered indigenous people in the Northwest; later in the 1640’s, working for the Dutch East India Company, Abel Tasman probed the western and southern coasts of Australia, named the region New Holland, discovered Tasmania, then missed the left turn at Hobart and sailed on to New Zealand. The Dutch didn’t follow up on the charting of New Holland, presumably because the area lacked readily obtainable riches such as gold or spices and did not lend itself to European style agriculture and settlement.

Enter Captain Cook and his famous voyages to the South Seas with reports of albatrosses and other creatures, adventures that, in particular, inspired the Rime of the Ancient Mariner – although the grim myth of the albatross is of Coleridge’s own making. In 1770, Cook explored the South East coast of Australia, anchored at Botany Bay just south of today’s Sydney Harbor, and named the area New South Wales. The colonization began in 1788.

It was in 1803 that a British navigator Matthew Flinders (who had sailed with Captain Bligh but luckily on the Providence, not on the Bounty) became the first European to circumnavigate the island continent and to establish that New Holland and New South Wales were both part of a single island land mass. He used the name Terra Australis on his charts and in his book, A Voyage to Terra Australis. And this later became simplified as Australia; as put by Flinders:

“Had I permitted myself any innovation upon the original term, it would have been to convert it into AUSTRALIA; as being more agreeable to the ear, and an assimilation to the names of the other great portions of the earth.”

Just think, if Flinders had used his native English instead of Latin, Down Under would be called South-Land today!

And just think, if Flinders had sailed on the Bounty, Marlon Brando and Clark Gable would have played Mr. Flinders instead of Mr. Christian in the movie versions of Mutiny on the Bounty!

Last but not least (in area it is slightly bigger than Europe and Australia), there is the new continent of Antarctica. Did some mapmaker just coin it or does it have a history? Mystère.

The term Antarctica, meaning “the opposite of the Arctic,” does have a history going back at least to Aristotle. In maps and in literature, it was often used to designate a given “land to the South.” It was also used by Roman and medieval writers and mapmakers; even Chaucer wrote about the “antarctic pol” in a technical treatise he authored. Throughout the 19th century, expeditions, sealers and whalers from Russia, the United States and Great Britain probed ever further south; the felicitously named Mercator Cooper, who sailed out of Sag Harbor NY, is credited as the first to reach the Antarctic land mass in 1854. The first map known to use Antarctica as the name of the continent itself dates to 1890 and was published by the Scottish mapmaker John George Bartholomew. Bartholomew held an appointment from the crown and so used the title “Cartographer to the King”; this doubtless emboldened him to invoke cartographer’s privilege and name a continent, harking back to Waldseemüller and Mercator.


The Brooklyn NY subway map shows a predilection for heroes of the American War of Independence: 13 subway stops in all. This total is to be contrasted with Manhattan’s paltry 2, Boston’s measly 2 and Philadelphia’s disgraceful 0. This expression of patriotism in Brooklyn has its roots in the way streets and avenues were named back before 1898 when the Brooklyn Eagle had writers like Walt Whitman and when Brooklyn was still a proud city with a big league baseball team of its own and not a mere “outer borough.”
In Brooklyn, two signers of the Declaration of Independence are so honored: Benjamin Franklin with 3 stops and Charles Carroll with 1 stop; in addition, three other founding fathers are also so honored: John Jay with 1 stop, George Washington with 2 stops, and Alexander Hamilton with 3 stops (as Fort Hamilton).
But then there are the idealistic European aristocrats from three different countries who fought valiantly along with the Americans – a Frenchman (Lafayette), a German (DeKalb) and a Pole (Kosciuszko) – and who are also honored with eponymous subway stops; in fact, DeKalb has 2 stops in his own name and Kosciuszko also has a bridge named for him.
The Marquis de Lafayette served as Washington’s aide-de-camp and later, as a field officer, he played a key role in blocking forces led by Cornwallis until American and French forces could position themselves for the war-ending siege at Yorktown, VA. In 1917, upon arriving in France, Gen. Pershing, the head of the American Expeditionary Force in France, famously said “Lafayette, we are here.”
But who are the other two heroes with subway stations in Brooklyn? Mystère.
The Baron Jean DeKalb hailed from Bavaria and had a long career in the Bavarian Regiment of the French Army; for an image click HERE. Before the French Revolution and the introduction of the citizens’ army, the kings and princes of Europe relied on mercenaries (e.g. the Hessians at Trenton) and foreign regiments to supplement their standing armies; so DeKalb’s career path was not all that unusual for the day. In 1763, he was ennobled with the rank of Baron for his valor on the field of battle, then married well in France and installed his family in a chateau that still stands not far from Paris at Milon-La-Chapelle.
DeKalb first came to the colonies in 1768 on a spying mission for Louis XV’s government and then came back again in 1777 to join Washington’s army with the rank of Major General. He served at Valley Forge and in 1780 he led his division south to the Carolinas to join the force under General Horatio Gates, a hero of the Battle of Saratoga. Gates faced Cornwallis at the Battle of Camden in South Carolina on Aug 16, 1780 and suffered a disastrous defeat. DeKalb died a few days later from multiple wounds received during the battle; his epitaph at the Bethesda Presbyterian Church graveyard in Camden reads “Here lie the remains of Baron DeKalb – A German by birth, but in principle, a citizen of the world.”
Tadeusz Kosciuszko was a Polish nobleman, born at a time when Poland was being partitioned by encroaching foreign powers; for an image, click HERE. He was a brilliant military engineer, a hero of the Battle of Saratoga who was responsible for some key decisions that led to victory. Subsequently, he was entrusted with the task of fortifying West Point. It was his plans for the fortifications that Benedict Arnold (yet another hero of the Battle of Saratoga) tried to sell to the British.
But Kosciuszko and Lafayette both survived the war, went back to Europe and lived well into the 19th century. Did they retire to their estates or did they carry the torch of liberty back with them? Mystère.
In truth, both men did play significant roles in revolutions to come – but revolutions that did not quite have the “happy ending” of the American Revolution. Both men were jailed for their activism but both men kept the faith to the end.
Upon returning to Poland, Kosciuszko became involved in the struggle with Russia to keep part of Poland independent. He led an uprising there in 1794 against the occupiers, only to be defeated and imprisoned by the army of Catherine the Great. At that point, Poland became completely partitioned among the Prussians, Austrians and Russians and would not re-emerge as an independent country until 1918 – its restoration being the subject of the 13th of Woodrow Wilson’s 14 Points.
Kosciuszko was freed by Catherine’s son and successor, the czar Paul I. He then came back to the United States and renewed a friendship with Thomas Jefferson. During the American Revolution, Kosciuszko had taken a stand for the abolition of slavery and, back in Poland, he called for the liberation of the serfs. In America, in 1798 he drew up a will which placed Jefferson in charge of the American estate that Congress had granted him as a war hero; Jefferson was to use the funds from the estate to buy freedom for slaves and to provide for their education. Kosciuszko eventually went back to Europe and lived in Switzerland until his death in 1817. Before his death, he wrote to Jefferson urging him to carry out the terms of his will. But Jefferson delegated others to take this on and the will was never executed as planned, though the struggle over it reached the U.S. Supreme Court three times. As with so many things Jeffersonian, there is a debate about his role in this matter: on the one hand, the historian Annette Gordon-Reed called the will “a litigation disaster waiting to happen”; on the other hand, the biographer Christopher Hitchens wrote that Jefferson “coldly declined to carry out his friend’s dying wish.”
After the American War of Independence, Lafayette went back to France and soon became involved in the events that led to the French Revolution of 1789. After the fall of the Bastille on July 14, he was put in command of the Revolution’s National Guard, its security force. Lafayette was a co-author (aided, in particular, by Jefferson) of the seminal Declaration of the Rights of Man and of the Citizen and it was he who presented it to the National Assembly in August of 1789. But in short order he fell afoul of the radical revolutionaries and fled France in 1792, only to be captured and jailed by the Austrians. He was later liberated at Napoleon’s behest and came back to France, but would not participate in Napoleon’s imperial government. After the latter’s fall and the restoration of the Bourbon monarchy under Louis XVIII (succeeded in 1824 by his brother Charles X), he served as a liberal member of the Chamber of Deputies, the new parliament. Charles X became increasingly autocratic, and when the king moved to dissolve the Chamber of Deputies in July 1830, the Parisians cried “aux barricades” and launched the July Revolution. Lafayette took a leadership role and was once again named head of the National Guard.
However, Lafayette used his influence not to create a new republic but to bring to the throne a liberal monarch in the person of Louis-Philippe, a man who had spent time in the USA and who Lafayette believed shared his democratic views. Lafayette and his fellow citizens were soon disillusioned. Things came to a head in June 1832; after Lafayette’s oration at the funeral of an opponent of Louis-Philippe, angry Parisians once again erected barricades. Despite Lafayette’s call for calm, what is known as the June Revolution led to armed and bloody confrontation with the forces of the king.  It is this June Revolution that is the background for the Broadway show Les Miz. The musical is based on the novel Les Misérables by Victor Hugo who was an actual witness to the events of this revolution.
Lafayette was outraged by the bloody suppression of the June Revolution of 1832 and other acts of brutality by the state; at his death in 1834, Lafayette was still struggling for the rights of man.
By the way, Louis-Philippe – styled “King of the French” rather than King of France – was the last king to reign in France; he was finally dethroned by the Revolution of 1848.

Inventors: TV and FM

Who invented the light bulb? Answer: Edison. Who invented the cotton gin? Answer: Eli Whitney. Who invented radio? Answer: Marconi. Who invented television? Mystère.
By television, we mean the black-and-white “boob tube” of the late 1940’s and 1950’s – the medium Marshall McLuhan wrote about, not the color-rich flat screen marvel of today. In those early days, one struggled with test patterns and shaky images, oriented antennas on rooftops, tuned the vertical hold and horizontal hold with surgical precision; but one never had the problems with the sound that one had with AM radio, which often fell victim to static and interference. The reason the TV sound was so good is that it used FM radio technology. So who invented FM radio? Another mystère.
And why were we all listening to Superman and the Hit Parade on AM radio if FM was the better medium – that too is a mystère.
It is strange that something as pervasive as TV or FM doesn’t have a heroic story of some determined young engineer struggling against all odds to go where only he or she believes anyone can go – an origin myth. Even relatively recent Silicon Valley innovations such as the Hewlett-Packard oscillator and the Apple personal computer have that “started in a garage” story. It turns out that each of TV and FM does have its hero inventor story, but these stories are complicated by patent battles, international competition, corporate intrigue and, in the end, personal tragedy – no happy endings, which is probably why there has never been a Hollywood bio-pic for either hero inventor.
In the 1920s and 1930s, television systems were being developed in the US, the UK, Germany, and elsewhere. In the UK, the Scottish inventor John Logie Baird developed an electromechanical system based on a spinning scanning disk. In Germany, Manfred von Ardenne pioneered a system based on the cathode-ray tube (the picture tube, a key element in electronic television), and the 1936 Olympics were broadcast on TV to sites all over Germany; however, this system did not have a modern TV camera but used a more primitive scanning technique.
Meanwhile back in the USA, Philo Farnsworth, a young Mormon engineer who was born in a log cabin in Beaver, Utah and who grew up on a ranch in Rigby, Idaho, was tackling the subatomic physics underlying television. From his lab in San Francisco, Farnsworth filed patents as early as 1927. He gave the first public demonstration of an all-electronic TV system with a live camera at the Franklin Institute in Philadelphia on August 25, 1934; this is our TV of 1950. So television does have a classical origin story with Philo Farnsworth the hero of the piece. However, from the time of his earliest patents, Farnsworth encountered fierce opposition from the Radio Corporation of America (RCA). This company, originally known as American Marconi, was founded in 1899 as a subsidiary of British Marconi; after World War I, at the behest of the military, company ownership was transferred to American firms and the company was rechristened. By the late 1920s, RCA had launched NBC, the first AM radio network, and was keen to control the development of radio and the emerging technology of television. Vladimir Zworykin was a Russian émigré who had studied in St. Petersburg with Prof. Boris Rosing, an early television visionary. In 1923 Zworykin filed a patent while working at Westinghouse in Pittsburgh, and this patent had a certain overlap with Farnsworth’s work; Zworykin then moved to RCA, and RCA used this patent, its financial clout and its powerful legal teams to bludgeon Farnsworth for years, tying him up in endless court battles.
Finally though, RCA was forced to recognize Farnsworth’s rights and peace of a sort was made when RCA licensed Farnsworth’s technology and later demonstrated television at the 1939 World’s Fair to great acclaim. However, World War II intervened and commercial television was put on a back burner until after the war. But by then, Farnsworth’s patents were about to expire and RCA simply waited them out. Although he continued working on challenging projects, in the end Farnsworth was depressed and drinking heavily and died in debt in 1971.
The story of FM radio and its inventor, Edwin Howard Armstrong, has a similar arc to it. Armstrong was born in 1890 in New York City and grew up in suburban Yonkers. He graduated with an engineering degree from Columbia University and eventually had his own lab there – but with an arrangement that left him ownership of his patents. Armstrong did important work on vacuum tube technology that eliminated the need to wear headphones to listen to a radio – this work also led to ferocious patent battles with Lee De Forest, the inventor of the original vacuum tube. Continuing to work to improve radio, Armstrong was granted patents for FM in 1933.
Working with RCA, Armstrong demonstrated the viability of this new technology by broadcasting from the Empire State Building. However, RCA had its existing network of AM stations; this new technology was not compatible with RCA’s AM radios and would require that listeners have a new kind of receiver. So, although Armstrong did set up an FM broadcasting operation, RCA let commercial FM radio just sit there; then WWII came and everything new that did not contribute directly to the war effort was delayed. After the war, legal skirmishes with RCA continued until Armstrong’s patents expired in 1950 – this has an eerily familiar ring to it. In fact, it gets worse: RCA convinced the FCC to re-standardize the FM frequency allocations, and this had the planned effect of disrupting the successful FM network that Armstrong had established on the old band, further delaying the spread of the FM medium. Tragically, confronted by growing financial problems and exhausted by legal battles, Armstrong committed suicide in 1954.
RCA went on to play a role in color television and to enjoy a reign as a leading television manufacturer. But it did have its comeuppance and it self-destructed in the 1970’s – thrashing around trying to find new ways to increase revenue and even cooking the books. The company as such was ultimately disbanded in 1986, though the brand name is still used by Sony, Voxx and others.
Even with FM receivers becoming more widespread, something which continued to keep listeners tuned to AM radio throughout the 1950s was the fact that AM and FM programming were not decoupled: wealthier stations would buy FM licenses and simply simulcast the same programming on AM and FM. There were station identifications such as this one:
“This is Bob Hope; you are tuned to the call letters of the stars – WMGM, AM and FM in New York”
and WMGM would broadcast the same shows on both media. In the 1960’s the FCC limited this practice and it was the newly liberated FM that introduced the nation to the Motown sound and to the marvelous folk-inspired music of Bob Dylan, Joni Mitchell, Joan Baez, Leonard Cohen and so many others.
What about sound in cinema? It developed at the same time and used some of the same radio technology. What 1930s-era film did not carry an acknowledgement of RCA? Think of the big, clumsy cameras that could move forward and back and pivot but not move sideways; of the sound band that had to be added to the film; of early cameras so noisy (back when the sound was recorded directly) that they had to be enclosed – and then imagine trying to track Fred and Ginger. One famous three-minute shot without splicing needed 47 takes, and Ginger’s feet were bleeding in her shoes at the end. Remember the RKO globe topped by a radio tower?
Jim Talin



Why is it that Frenchmen all seem to have these double names: Jean-Louis, Jean-Jacques, Jean-Claude, Jean-François, Pierre-Marie, and more. Likewise on the distaff side, we have Marie-Claude, Marie-Thérèse, Marie-Antoinette, Marie-Paule, Anne-Marie, and more. Why all the hyphenated names? Why the dually gendered names? Aren’t there enough simple names to go around? Mystère.

Well, it turns out that until very recently there simply were not that many names to go around to meet the needs of 50 million Frenchmen (to use Cole Porter’s statistics).

This story begins at the time of the French Revolution. The Revolution brought forth many things – the metric system, the military draft and the citizen army, the Marseillaise, … . There was also the end of slavery in the French colonies (until undone under Napoleon) and there was a new calendar. This calendar set the Year I to begin during our 1792; it divided the year into twelve parts, corresponding more or less to the signs of the Zodiac. These replaced the standard Roman months (6 pagan gods and rituals, 2 dictators and 4 numbers) with seasonal names; thus Aries became Germinal (when seeds sprout) and Scorpio became Brumaire (when the fog rolls in). The system lasted until the 11th of Nivôse (Capricorn, when snow falls) of the Year XIV (January 1, 1806).

In France, the day 18 Brumaire has a sinister ring (much like the Ides of March); it is, in fact, a synonym for coup d’état because this is the day on which Napoleon (in the Year VIII) staged the military coup that ended the Revolution and led to the ill-fated empire. It is in his essay The 18th Brumaire of Louis Bonaparte that Karl Marx quotably says history repeats itself, “the first time as tragedy, the second as farce.”
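For anyone who wants to check the Revolutionary dates in this story, here is a minimal back-of-the-envelope sketch (the function names are ours, not any standard library’s; it assumes the conventional epoch of September 22, 1792 for the start of Year I and ignores the calendar’s own leap-year subtleties):

```python
from datetime import date

def republican_year(d: date) -> int:
    """Approximate French Republican year for a Gregorian date,
    taking 22 September 1792 as the first day of Year I."""
    year = d.year - 1792
    if (d.month, d.day) >= (9, 22):  # a new Republican year starts ~22 September
        year += 1
    return year

def to_roman(n: int) -> str:
    """Render a positive integer in Roman numerals, as the Revolution did."""
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for v, s in vals:
        while n >= v:
            out.append(s)
            n -= v
    return "".join(out)

# Napoleon's coup of 18 Brumaire fell on November 9, 1799:
print(to_roman(republican_year(date(1799, 11, 9))))  # → VIII
```

Running it for November 9, 1799 indeed yields Year VIII, matching the 18 Brumaire of the coup.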

During the period when Napoleon was First Consul but not yet Emperor, the government of France was the Consulat. This regime succeeded the Directoire which itself followed the Convention (and la Terreur) which replaced the Assemblée Nationale of the short-lived Constitutional Monarchy which ended the absolute monarchy of the Ancien Régime. The Consulat, apparently to put a stop to the new practice of giving children first names inspired by the Revolution itself, enacted the Law of the 11th of Germinal of the Year XI: henceforth only a name from a religious or other official calendar or a name from ancient history could be used. The registrar (officier de l’état civil) could refuse any name he or she considered unacceptable.

So for Catholic France, this meant that only the names of saints who had a feast day on the liturgical calendar were acceptable. All this was inconsistent with the aggressive anti-clericalism of the Revolution but very consistent with the traditionally top-heavy structure of French governance.

In 1800, the population of France was about 30 million and growing; in contrast, the U.S. population was only about 5 million though growing at a faster rate. Even with a smaller population than France until about 1880, throughout the 19th century Protestant Americans used names from the Hebrew Bible (Abraham, Ahab, Rebecca, Rachel) to supplement the supply of Anglo-Norman names (William, Alice, …) and Anglo-Saxon names (Edward, Maud, …). This elegant solution was not available to the French.

While the Revolution swept across France, there were areas of resistance to it, among them Brittany. This region was always independent in spirit – what with its own Celtic language and music and very traditional Catholicism. The royalist Chouans fought a bloody civil war against the Revolution, protesting military conscription, the secularization of the clergy and, doubtless, the new calendar. All this is recounted in Balzac’s novel Les Chouans as well as in historical romances, movies and TV shows. In the present day, in the world of fashion there is the stylish chapeau chouan, inspired by the impressive big-brimmed hats the Chouans wore. By the way, Balzac, a Taurus, was born on the 1st of Prairial in the Year VII at the very end of the Eighteenth Century.

Despite coups d’état, more revolutions, the disastrous defeat in the Franco-Prussian war, the Paris Commune, the Pyrrhic victory of WWI and the ignominy of WWII, that law going back to the Revolution stayed in effect.

So even after WWII, for a child to “exist” in the eyes of the French government and to benefit from schooling, the national health service, etc., his or her name still had to be acceptable to that local official according to the Law of the 11th of Germinal of the Year XI. However, trouble was brewing; the Goareng family in Brittany had twelve children and gave them all old Breton names; but only six of these were acceptable to the French registrars; so in the eyes of the government, six of these twelve children did not exist. In the tradition of the Chouans, the family took on the centralized French state and fought back through the French courts and United Nations courts and finally won satisfaction at the European Court in the Hague in 1964. Following that, the French government agreed in 1966 to extend the list of acceptable first names to include traditional regional names (Breton names, Basque names, Alsatian names, … ).

This satisfied the Goareng family but the struggle had so weakened the position of the French government that the law was modified again in the year CLXXXIX (1981); finally the dam broke in the year CCI (1993) when this whole business was ended. In the time since, many once unacceptable names have been registered. By way of example, Nolan is now a popular boy’s name and Jade is a top girl’s name.



Joshua and Jesus

In the Hebrew Bible Joshua succeeds Moses as the leader of the Israelites and leads the invasion of the Land of Canaan. Most spectacularly, in the Book of Joshua, with the aid of trumpets and the Lord’s angels, he conquers the walled city of Jericho – an event recounted in the wonderful spiritual “Joshua Fit the Battle of Jericho.”
In Greek speaking Alexandria, there was a large Jewish community and around 250 B.C. a translation into Greek of the first books of the Hebrew Bible was made by Jewish scholars. This translation is called the Septuagint (meaning 70) because seventy different scholars translated the text independently; according to the Babylonian Talmud and other sources, when their translations were compared, they were all identical down to the last iota.
In Hebrew the name Joshua is יֵשׁוּעַ (Yeshu’a); in the Greek of the Septuagint, it becomes Ιησους, roughly pronounced “ee-ay-soos.” The word for Messiah in Hebrew is מָשִׁיחַ (Mashiach); in the Greek of the Septuagint, it becomes Χριστός (the anointed one).
The gospels of the New Testament were written in Greek in the latter part of the first century A.D. In the Greek text, the name of Jesus is Ιησους; but this is the same as the Septuagint’s name for Joshua. Is Jesus’ real name “Joshua”? Should the Greek Χριστός be rendered as “Messiah” in English? Mystères.
The short answer is Yes; Jesus and Joshua have the same Hebrew name. Had the New Testament Gospels been written in Hebrew or Aramaic, Jesus Christ would have been called Yeshua Hamashiach by his disciples or, in English, Joshua the Messiah.
That Jesus and Joshua shared the same Hebrew name is not new news. In fact, Moishe Rosen, the founder of Jews for Jesus, authored Y’shua: The Jewish Way to Say Jesus, published by the Moody Bible Institute in 1982. Also in the 1980s, there appeared the “Joshua” novels of Joseph F. Girzone which are about a Christ-like figure, a carpenter, who touches people’s lives with his example, his teachings and his miracles; the author chose the name Joshua exactly because it is an alternative reading of the name Jesus in the Gospels.
But, it’s complicated. In moving names from one language to another, typically a letter is replaced by its closest relative in the target language and adjustments are made for the sake of grammar or sound; that said, the two words might not sound at all alike or look at all alike. Because the Greek name Ιησους is sounded out as (something like) ee-ay-soos (close to the Spanish pronunciation), it became Jesus in Latin. One more thing: if you capitalize the first three letters of Ιησους, you have I H S for iota, eta, sigma; these are the letters that form the Christogram that traditionally adorns vestments and altar cloths in Roman Catholic churches. Another popular Christogram is Xmas, where the Χ is the Greek chi which is a symbol for Christ (Χριστός). These Christograms and the Kyrie are last links back to the early Greek Christian church.
The good news of the gospels reached Rome from the Greek speaking eastern part of the Mediterranean. The first Christians in the Latin speaking part of the Roman Empire did not translate “Ιησους Χριστός” from the Greek much less from a Hebrew or Aramaic original source; instead they imported the Greek name all of a piece. This had the effect of making the name “Jesus Christ” special. Otherwise, the name would have been shared with others: Joshua of the Book of Joshua is called Joshua the Messiah in the Hebrew Bible and others are called the Messiah as well – even the king of Persia, Cyrus the Great, is called Messiah in Isaiah 45:1.
When one is raised in a Christian religion in an English speaking country, the name “Jesus Christ” is magical and absolutely unique, a name that no one else has had nor will ever have. Would saying “Joshua the Messiah” have made the Son of Man simply too human, simply one Joshua among many? And would that have interfered with our understanding of the Mystery of the Incarnation and of the Mystery of the Holy Trinity? Or would it have enhanced our understanding?
So the distinction between the names of Jesus and Joshua began in early Western Christianity. And the distinction has persisted. In St. Jerome’s translation of the Bible into Latin, the Old Testament name for Joshua is rendered as “Josue” and, of course, the New Testament name of Jesus is “Jesus.” St. Jerome had access both to the Hebrew text and to the Septuagint, so he was likely aware of this “inconsistency.” His translation, written at the end of the 4th century was the standard one in Western Christendom until the Reformation. Similarly, Jerome did not render the Greek Χριστός as “the Messiah” or even “the anointed one.” Indeed “Christus” had been the standard rendering of Χριστός in Latin since the very beginning; this is also attested to by the way the pagan Roman writer Tacitus referred to the Christians and Christus at the beginning of the 2nd Century in his Annals: he recounts how Nero blamed them for setting the fire that burned Rome in 64 A.D. and then ordered a persecution. Not at all fair of Nero, but it is a testament to the very rapid spread of Christianity across the Roman Empire.
There was a point in time when the name Jesus could have been replaced with the name Joshua in English (or vice-versa for that matter but no one has yet suggested replacing Joshua with Jesus in English translations of the Old Testament). The Anglican authors of the King James Bible had access to the Hebrew Bible, to the Septuagint, to Saint Jerome’s work, and to the Greek New Testament; they, like St. Jerome, kept the name Jesus because, presumably, it was what people knew and loved. The same can be said of Χριστός which they also did not translate from the Greek, but rather they followed St. Jerome’s example.
But the plot thickens. Is Jesus the only one to have the name Ιησους in the New Testament? After all, there are multiple people named Mary or John or James. Again, mystère.
But here the answer depends on which edition of the gospels you are reading. In the latest edition of the New Revised Standard Version (Oxford University Press, 2010), Matthew 27:16–17 is rendered as follows:
“At that time they had a notorious prisoner whose name was Jesus Barabbas. So after they had gathered, Pilate asked them, ‘Whom do you want me to release to you: Jesus Barabbas, or Jesus who is called the Messiah?’”
Note that Barabbas has the first name Jesus in this text. In St. Jerome’s translation and in the original King James from 1611 A.D., “Jesus Barabbas” is simply “Barabbas.” In other words, Barabbas does not have a first name in these classic translations and the name Jesus is reserved exclusively for Jesus the Messiah. This practice of dropping Barabbas’ first name goes back, at least, to the third century: the Church father Origen (d. 254 A.D.) declared that Jesus must have been inserted in front of the name Barabbas by a heretic. Origen was certainly not alone in this; until recent scholarship went back to original texts on this point, only Jesus the Messiah was named Jesus in the New Testament. As is to be expected, this practice of omitting the first name was continued by the Swedish novelist Pär Lagerkvist in his Nobel Prize winning novel Barabbas (1950) and in the film version (1961) where Anthony Quinn plays the title role.
Of course, the crowd then cries “Give us Barabbas” – the most inflammatory passage in the gospels for Christian-Jewish relations. This account of events appears in all four gospels but Saint Matthew adds “His blood be upon us and our children.” To some, all this feels staged; could it have been included to deflect guilt from the Romans, placing the blame for the Crucifixion on the Jews?
After all, Christianity at the time of the gospels was becoming dominated by Gentiles and Hellenized Jews of the Diaspora (like St. Paul himself); the empire they lived in was the Roman Empire, and Jewish rebellions in the Holy Land were repeatedly being put down by those same Romans; crucifixion was employed by the Romans as punishment for crimes against the state and not to settle squabbles among coreligionists of a conquered population – so if the ministries of John the Baptist and later Jesus did have a social-political dimension that was problematic for the Romans, Christians could side-step this by re-positioning the Crucifixion as an intra-Jewish affair.
Or are the Evangelists trying to prove to Christians that the Jews had broken their covenant with the Lord and that this covenant now belonged to the Christians?
This second interpretation is the standard Christian one and historically was the basis of the Catholic Church’s position that those who practiced Judaism would not enter Paradise; this position was only revised after WWII and the Holocaust. Pope Benedict XVI added that the Jewish people are not responsible for the death of Christ. Other Christian Churches have also updated their thinking. Still the roots of anti-Semitism in the Christian world do not come down to these passages in the New Testament. But on the positive side, Jews in the U.S. themselves say that America, which has never had an established religion as such, has been an exceptionally good place for them. Still the terrible outburst of anti-Semitism that recently killed 11 people in a synagogue in Pittsburgh tells us how far we all still have to go despite the post-War efforts by Christian churches; the rot is deep.


California is called the Golden State and lives up to its billing. It is truly a magical place – the coastline, the rivers and bays, the sierras, the deserts, the redwoods, the gold rush, the marvelous climate and on and on. Its name could be Spanish, but it is very unlike the names of other states that were once part of New Spain and which have Spanish names: Colorado, Florida, Nevada, and Montaña are all recognizable Spanish words, while California is quite different. Maybe it is Latin like Hibernia and Britannia (Ireland and Britain). Mystère.
What is marvelous is that California is named for a mythical island of Amazons created in a fantasy adventure novel The Adventures of Esplandian, written by Garci Rodriguez de Montalvo and published in Seville in 1510. How the author Montalvo came up with the name for this island is a matter of serious scholarly debate. In any case, early Spanish explorers thought California (Baja and Alta) to be an island and named it for the island of the novel. The island in the novel is as golden as California itself:
… there exists an island called California very close to a side of the Earthly Paradise; and it was populated by black women, without any man existing there, because they lived in the way of the Amazons. … Their weapons were golden and so were the harnesses of the wild beasts that they were accustomed to taming so that they could be ridden, because there was no other metal in the island than gold.
The beasts they tamed were griffins (half-lion, half-eagle) and the griffins were unleashed on the enemy in battle. The novel pits the Christian hero against Muslim Turkish foes. The queen of California was the beautiful Calafia (aka Califia) who leads her Amazons into battle on the side of the Turks. Needless to say, she is defeated by Esplandian, but then is whisked off to Constantinople (which somehow is still in Christian hands) where she converts to Christianity and marries one of the Christian knights. For the period, this was a happy ending.
Cervantes makes The Adventures of Esplandian one of the books on Chivalry that contributed to Don Quixote’s madness and, in fact, it is the very first of Don Quixote’s books chosen to be burned by the priest and doctor who are trying to “cure” him.
It is fitting that the land of California with its dream factories of Hollywood and Silicon Valley should be named for a magical island of fantasy. Field research confirms that Californians are blissfully unaware of all this; there are exceptions, though. There is a mural from 1926 of Queen Califia and her Amazon warriors at the Mark Hopkins Hotel in San Francisco that is worth visiting. Much more recently, the late French-American artist Niki de Saint Phalle created a sculpture garden in Escondido in honor of California’s queen.

Plymouth and Cape Cod

Massachusetts is notorious for its peculiar pronunciations, especially of place names. For example, there are Gloucester (GLOSS-TAH) and Worcester (WOO-STAH). And there are confounding inconsistencies:  on Cape Cod, there are opposing pronunciation rules for Chatham (CHATUM) and Eastham (EAST-HAM), two towns within minutes of each other; closer to Boston, the president John Quincy (QUIN-SEE) Adams was born in Quincy (QUINZ-EE). No threat of “a foolish consistency” here.

However, we all know (thanks to the Thanksgiving holiday) not to pronounce Plymouth as PLY-MOUTH but rather to say PLIM-UTH (/plɪməθ/ in the dictionary). But, a short hop from Plymouth, just over the Sagamore Bridge, there are the Cape Cod towns of Yarmouth and Falmouth. How should we pronounce these town names? Does the Plymouth pattern apply? One would think so since all the towns are close to one another, but then there is the example of Chatham and Eastham. So should these Cape Cod towns be pronounced YAH-MUTH and FAL-MUTH – or something like YAR-MOUTH and FOUL-MOUTH?

In point of fact, the Plymouth rule does apply to both names: MUTH wins out and MOUTH loses again. But how can it be that these three towns have such similar names, all ending in “mouth”? It is hardly likely that this happened by pure chance. Mystère.

Since all these towns are on the coast, a first guess is that “mouth” could refer to the place where a mighty river reaches the sea; however, none of these MUTH towns qualify. Towns in New England are typically named for a town or city in Olde England and this turns out to be the case here. An internet search and the mystery is solved: back in England all three namesake towns are at the mouths of rivers. And the names of those rivers?  The Yare, the Fal and (this is the tricky one) the Plym. The Yare flows into the North Sea; the Fal and the Plym go to the English Channel.

Field research has shown that Cape Codders from Yarmouth and Falmouth both are blissfully unaware of all this. For Plymouth, more research is required.

In the end, Falmouth and Yarmouth might have all those beautiful beaches, but Plymouth has that rock.

Weymouth and Dartmouth are left as exercises for the reader.