The Third Person III: The Shekinah

To help track the Jewish origins of the Christian Holy Spirit, there is a rich rabbinical literature to consider, a literature which emerged as the period of Biblical writing came to an end in the centuries just before the advent of Christianity. Already in the pre-Christian era, the rabbis approached the Tanakh (the Hebrew Bible) with a method called Midrash for developing interpretations, commentaries and homilies. Midrashic practice and writings had a real influence on the New Testament. St. Paul himself studied with the famous rabbi Gamaliel (Acts 22:3), who in turn was the grandson of the great Talmudic scholar Hillel the Elder, one of the greatest figures in Jewish history; his simple and elegant formulation of the Golden Rule is often cited today:

    “What is hateful to you, do not do to your fellow: this is the whole Torah; the rest is the explanation; go and learn”

Hillel the Elder is also famous for his laconic leading question:

    “If not now, when?”


In the writings of St Paul, this passage from the Epistle to the Galatians (5:22-23) is considered an example of Midrash:

    But the fruit of the Spirit is love, joy, peace, longsuffering, gentleness, goodness, faith, meekness, temperance: against such there is no law.

The virtues on this list are called the Fruits of the Holy Spirit and the early Christians understood this Midrash to be a reference to the Holy Spirit. Much like the way the Seven Pillars of Wisdom in the Book of Isaiah provide an answer to Question 177 in the Baltimore Catechism, so these verses of Paul provide the answer to Question 719:

Q. Which are the twelve fruits of the Holy Ghost?

A. The twelve fruits of the Holy Ghost are Charity, Joy, Peace, Patience, Benignity, Goodness, Long-suffering, Mildness, Faith, Modesty, Continency, and Chastity.

The numerically alert will have noted that St. Paul only lists nine such virtues. But it seems that St. Jerome added those three extra fruits to the list when translating Paul’s epistle from Greek into Latin, and so the list is longer in the Catholic Bible’s wording of the Epistle!

It is also possible that Jerome was working with a text that already had the interpolations. The number 12 does occur in key places in the scriptures – the 12 sons of Jacob whence the 12 tribes of Israel, the 12 apostles, and so on. Excessive numerological zeal on the part of early Christians plus a certain prudishness could well have led to the insertion of modesty, continence and chastity into the list. In the Protestant tradition and in the Greek Orthodox Church, the number stands at 9, but that is still numerically pious since there are 9 orders of Angels as well.

For some centuries before the time of Christ, the Jewish population of Biblical Palestine was not Hebrew speaking. Instead, Aramaic, another Semitic language spoken all over the Middle East, had become the native language of the people; Hebrew was preserved, of course, by the Scribes, Pharisees, Essenes, Sadducees, rabbis and others directly involved with the Hebrew texts and with the oral tradition of Judaism.

The Targums were paraphrases of passages from the Tanakh, together with comments or homilies, that were recited in synagogues in Aramaic beginning some time before the Christian era. The meturgeman, the leader who presented the Targum, would paraphrase a text from the Tanakh and add commentary, all in Aramaic, to make it more comprehensible and more relevant to those assembled in the synagogue.

Originally, the Targums were strictly oral and writing them down was prohibited. However, Targumatic texts appear well before the time that Paul was sending letters to converts around the Mediterranean. In fact, a Targumatic text from the first century BC, known as the Targum of Job, was discovered at Qumran, the site of the Dead Sea Scrolls.

By the time of the Targums, Judaism had gone through centuries of development and change – and it was still evolving. In fact, the orthodoxy of the post-biblical period demanded fresh readings even of the Bible itself. Indeed, the text of the Tanakh still raises theological problems – in particular in those places where the text anthropomorphizes God (Yahweh); for example, there is God’s promise to Moses in Exodus 33:14

    And he [The Lord] said, My presence shall go with thee, and I will give thee rest.

In the Targums and in the Talmudic literature, the writers are careful to avoid language which plunks Yahweh down into the physical world. The Targums tackle this problem head on. They introduce a new force in Jewish religious writing, the Shekinah. This Hebrew noun is derived from the Hebrew verb shakan which means “to dwell” and the noun form Shekinah is translated as “The one who dwells” or more insistently as “The one who indwells”; it can refer both to the way God’s spirit can inhabit a believer and to the way God can occupy a physical location.

The verb “indwell” is also often used in English language discussions and writing about the Judaic Shekinah and about the Christian Holy Spirit. This word goes back to Middle English and it was rescued from obsolescence by John Wycliffe, the 14th century reformer who was the first to translate the Bible into English from the Latin Vulgate of St. Jerome. It is popular today in prayers and in Calls to Worship in Protestant churches in the US; e.g.

    “May your Holy Spirit surround and indwell this congregation now and forevermore.”

In the Targums, the term Shekinah is systematically applied as a substitute for names of God to indicate that the reference is not to God himself; rather the reference is shifted to this agency, aspect, emanation, viz. the Shekinah. Put simply, when the original Hebrew text says “God did this”, the Targum will say something like “The Shekinah did this” or “The Lord’s Shekinah did this.”

In his study Targum and Testament, in analyzing the example of Exodus 33:14 above, Martin McNamara translates the Neofiti Targum’s version of the text this way:

    “The glory of my Shekinah will accompany you and will prepare a resting place for you.”

Here is an example given in the Jewish Encyclopedia involving Noah’s son Japheth. In Genesis 9:27, we have

    May God extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth.

In the Onkelos Targum, the Hebrew term for God “Elohim” in Genesis is replaced by “the Lord’s Shekinah” and the paraphrase of the meturgeman becomes (roughly)

    “May the Lord’s Shekinah extend Japheth’s territory; may Japheth live in the tents of Shem, and may Canaan be the slave of Japheth.”


Another function of the Shekinah is to represent the presence of God in holy places. In Jewish tradition, the Spirit of God occupied a special location in the First Temple. This traces back to Exodus 25:8 where it is written

    And let them make Me a sanctuary; that I may dwell among them.

Again following the Jewish Encyclopedia’s analysis, the Onkelos Targum paraphrases this declaration of Yahweh’s as

    “And they shall make before Me a sanctuary and I shall cause My Shekinah to dwell among them.”

This sanctuary will become the Temple built by Solomon in Jerusalem. Indeed, in many instances, the Temple is called the “House of the Shekinah” in the Targums.

Jesus was often addressed as “Rabbi” in the New Testament; in John 3:2, the Pharisee Nicodemus says “Rabbi, we know that you are a teacher who has come from God”. Today, the influence of Midrash and the Targums on Jesus’ teachings and on the language of the four Gospels is a rich area of research. Given that the historical Jesus was an Aramaic speaker, the Targums clearly would be a natural source for homilies and a natural methodology for Jesus to employ.

Let us recapitulate and try to connect some threads in the Jewish literature that lead up to the Christian Holy Spirit: The Essene view of the Holy Spirit as in-dwelling is very consistent with the Shekinah and with the Christian Holy Spirit. The Essene personification of Wisdom as a precursor to the Holy Spirit is as well. The Targumatic and Talmudic view of the indwelling Shekinah as the manifestation of God in the physical world has much in common with the Christian view of the Holy Spirit; in point of fact, this is the primary role of the Holy Spirit as taught in Sunday Schools and in Parochial Schools.

One more thing: the term “Holy Spirit” itself only appears three times in the Tanakh and there it is used much in the way Shekinah is employed in the rabbinical sources. The term is used often, however, by the Essenes in the Dead Sea Scrolls. Then it appears in the Talmudic literature where it is associated with prophecy as in Christianity: in Peter’s 2nd Epistle (1:21) we have

    For the prophecy came not in old time by the will of man: but holy men of God spake as they were moved by the Holy Ghost.

Moreover, in Talmudic writings, the terms Holy Spirit and Shekinah eventually became interchangeable – for scholarship, consult Raphael Patai, The Hebrew Goddess.

What is also interesting is that in Hebrew the words Wisdom (chokmâh), Spirit (ruach) and Shekinah are all feminine. However, grammatical gender is not the same as biological gender – a flagrant example is that in German “young girl” is neuter (das Mädchen). So, we could not infer from grammatical gender alone that the Holy Spirit was a female force.

However, in the Wisdom Literature, Wisdom/Sophia is a female figure. Following Patai again, one can add to that an assertion by Philo of Alexandria, the Hellenized Jewish philosopher who lived in the first half of the 1st century: in his work On the Cherubim, Philo flatly states that God is the husband of Wisdom. Moreover, in the Talmud and the Kabbalah, the Shekinah has a female identity which is developed to the point that in the late medieval Kabbalah, the Shekinah becomes a full-fledged female deity.

So there are two female threads, Wisdom/Sophia and the Shekinah, coming from the pre-Christian Jewish literature that are both strongly identified with the Holy Spirit of nascent Christianity, identifications which persisted as Christianity spread throughout the Roman Empire. Wisdom/Sophia of the Wisdom Literature looks to be an importation from the Greek culture which dominated the Eastern Mediterranean in the Hellenistic Age, an importation beginning at the end of the biblical era in Judaism. The Shekinah, though, only appears in the Talmudic and Targumatic literatures. So the next question is: what is the origin of the concept of the Shekinah? And then, are the Shekinah and Wisdom/Sophia interconnected? Mystères. More to come.

Esther, Trump and Blasphemy

Israeli Prime Minister Benjamin Netanyahu recently (March 21, 2019) asserted that Donald Trump’s support for Israeli annexation of the Golan Heights has anti-Iranian biblical antecedents. The annexation would strengthen Israel’s position vis-à-vis pro-Iranian forces in Syria. Calling it a “Purim Miracle,” Netanyahu cited the Book of Esther where, purportedly, Jews killed Persians in what is today Iran rather than the other way around as the Persian viceroy Haman had planned. This came to pass thanks to Esther’s finding favor with the Persian King named Ahasuerus, the ruler considered today by scholars to be Xerxes, the grandson of King Cyrus the Great; in need of a new queen, Xerxes selected the Jewish orphan Esther as the most beautiful of all the young women in his empire. High ranking government officials like Mike Pence and Mike Pompeo rally to the idea that Trump was created by God to save the Jewish people. Indeed, BaptistNews.com reports that, when asked about it in an interview with the Christian Broadcasting Network, Secretary of State Pompeo said it is possible that God raised up President Trump, just as He had Esther, to help save the Jewish people from the menace of Iran, as Persia is known today. Pompeo added “I am confident that the Lord is at work here.”

However, the account in the Book of Esther is contested by scholars since there are no historical records to back up the biblical story; and the Cinderella elements in the narrative should require outside verification. Moreover, the Book of Esther itself is not considered canonical by many (Martin Luther among them) and parts of it are excluded from the Protestant Bible. These are details though – the key point is that the Feast of Purim is an important event, celebrating as it does the special relationship between God and His Chosen People; bringing Donald Trump into this is simply blasphemy.

For its part, the reign of Xerxes is well documented – he was the Persian invader of Greece whose forces prevailed at Thermopylae but were then defeated at the Battle of Salamis. According to Herodotus, Xerxes watched battles perched on a great throne and, at Thermopylae he “thrice leaped from the throne on which he sat in terror for his army.”

Moreover, there is well-documented history where the Persians came to the aid of the Jews. Some background: the First Temple in Jerusalem was built during the reign of King Solomon (970-931 BC); the Temple was destroyed by the Babylonians in 586 BC and a large portion of the population was exiled to Babylon. With the conquest of Babylon in 539 BC by the Persian King Cyrus the Great, the Babylonian Captivity came to an end, Jews returned to Jerusalem and began the construction of the Second Temple, which was completed in 515 BC. It was also Cyrus himself who urged the rebuilding of the Temple and, for his efforts on behalf of the Jews, Cyrus is the only non-Jew considered a Messiah in the Hebrew Bible (Isaiah 45:1). So here we have a reason for Israelis and Iranians to celebrate history they share.

The Third Person II: The Wisdom Literature

The rabbinical term for the Hebrew Bible is the Tanakh; the term was introduced in the Middle Ages and is an acronym drawn from the Hebrew names for the three sections of the canonical Jewish scriptures: Torah (Teachings), Neviim (Prophets) and Ketuvim (Writings). The standard Hebrew text of the Tanakh was compiled in the Middle East in the Middle Ages and is known as the Masoretic Text (from the Hebrew word for tradition).

Since the Holy Spirit does not appear in the Tanakh as a standalone actor and is only alluded to there three times, the question arises whether the Holy Spirit plays a role in other pre-Christian Jewish sources. Mystère.

In 1947, Bedouin lads discovered the first of the Dead Sea Scrolls in a cave (which one of them had fallen into) at the site of Qumran on the West Bank of the River Jordan near the Dead Sea. These texts were compiled in the centuries just before the Christian era by a monastic Jewish group called the Essenes and they include copies of parts of the Tanakh. On the other hand, the texts of the Dead Sea Scrolls also include non-biblical documents detailing the way of life of the Essenes and their special beliefs. In the scrolls, there are multiple mentions of the Holy Spirit and the role of the Holy Spirit there has much in common with the later Christian concept: the Essenes believed themselves to be holy because the Holy Spirit dwelt within each of them; indeed, from a scroll The Community Rule, we learn that each member of the group had first to be made pure by the Holy Spirit. Another interesting intersection with early Christianity is that the Essenes celebrated the annual renewal of their covenant with God at the Jewish harvest feast of Shavuot which also commemorates the day when God gave the Torah to the Israelites establishing the Mosaic covenant. The holiday takes place fifty days after Passover; in the Greek of the New Testament, this feast is called Pentecost and it is at that celebration that the Holy Spirit establishes a covenant with the Apostles.

Wisdom, aka Holy Wisdom, emerges as a concept and guiding principle in the late Biblical period. Wisdom is identified with the Christian Holy Spirit, for example, through the Seven Pillars of Wisdom; thus, after relocating to North America and leaping ahead many centuries to 1885 and the Baltimore Catechism, it is Isaiah 11:2 that provides the answer to Question 177:

Q. Which are the gifts of the Holy Ghost?
A. The gifts of the Holy Ghost are Wisdom, Understanding, Counsel, Fortitude, Knowledge, Piety and Fear of the Lord.

Isaiah 11:3 and Proverbs 9:10 make it clear that, among these Seven Pillars of Wisdom, Fear of the Lord is the most fundamental gift – for once, we can’t blame this sort of thing on Catholics and Calvinists!

Wisdom as a personification plays an important role in the Wisdom Literature, a role that also links pre-Christian Jewish writings to the Christian Holy Spirit. The Book of Proverbs, itself in the Tanakh, is part of this literature. But there is something surprising going on: in Hebrew grammar, the gender of the word for Wisdom, Chokmâh, is feminine; in the Wisdom Literature, Wisdom is feminine not only grammatically but as a female persona as well. Indeed, in Chapter 8 of Proverbs, Wisdom puts “forth her voice”

1 Doth not wisdom cry? And understanding put forth her voice?
2 She standeth in the top of high places, by the way in the places of the paths.
3 She crieth at the gates, at the entry of the city, at the coming in at the doors.
4 Unto you, O men, I call; and my voice is to the sons of man.

and declaims that she was there before the Creation

22 The LORD possessed me in the beginning of his way, before his works of old.
23 I was set up from everlasting, from the beginning, or ever the earth was.
24 When there were no depths, I was brought forth; when there were no fountains abounding with water.
25 Before the mountains were settled, before the hills was I brought forth.

So the author of this part of the Book of Proverbs (4th century BC) clearly sees Wisdom as a kind of goddess.

The Tanakh, which corresponds basically to the Protestant Old Testament, excludes Wisdom books that are included in the Catholic Bible such as the Book of Sirach (aka Book of Ecclesiasticus) and the Wisdom of Solomon (aka Book of Wisdom). These texts, however, develop this “goddess” theme further.

The Book of Sirach, which dates from the late 2nd century BC, has these verses in the very first chapter where this preternatural female note is struck cleanly:

5 To whom has wisdom’s root been revealed? Who knows her subtleties?
6 There is but one, wise and truly awe-inspiring, seated upon his throne:
7 It is the LORD; he created her, has seen her and taken note of her.

The meme of Wisdom as a feminine goddess-like being also occurs in Psalm 155, one of the Five Apocryphal Psalms of David, texts which date from the pre-Christian era:

5 For it is to make known the glory of Yahweh that wisdom has been given;
6 and it is for recounting his many deeds, that she has been revealed to humans:

and

12 From the gates of the righteous her voice is heard, and her song from the assembly of the pious.
13 When they eat until they are full, she is mentioned, and when they drink in community

What is more, this theme also appears in the Essene Wisdom texts of the Dead Sea Scrolls, e.g. the Great Psalms Scroll and Scroll 4Q525. In fact, the latter begins with a poem to Wisdom in the idiom of the Beatitudes

“Blessed are those who hold to Wisdom’s precepts
and do not hold to the ways of iniquity….
Blessed are those who rejoice in her…
Blessed are those who seek her …. “

Then too the word for “Wisdom” in the Greek of the Wisdom of Solomon is “Sophia”, the name for a mythological female figure and a central female concept in Stoicism and in Greek philosophy more generally. Indeed, “philosophy” itself means “love of Sophia.”

Most dramatically, Chapter 8 of the Wisdom of Solomon begins

1 Wisdom reacheth from one end to another mightily: and sweetly doth she order all things.
2 I loved her, and sought her out from my youth, I desired to make her my spouse, and I was a lover of her beauty.
3 In that she is conversant with God, she magnifieth her nobility: yea, the Lord of all things himself loved her.
4 For she is privy to the mysteries of the knowledge of God, and a lover of his works.

So these sources all imply that the Holy Spirit derives from a female precedent.

What is more, in Christian Gnosticism, Sophia becomes both the Bride of Christ and the Holy Spirit of the Holy Trinity. This movement denied the virgin birth on the one hand and taught that the Holy Spirit was female on the other. The Gnostic Gospel of St. Philip is one of the texts found in 1945 at Nag Hammadi in Egypt; the work itself is generally dated to the 2nd or 3rd century. In this Gospel the statement of the Angel of the Lord to Joseph in Matthew 1:20

“the child conceived in her is from the Holy Spirit”

is turned on its head by the argument that this is impossible because the Holy Spirit is female:

“Some said Mary became pregnant by the holy spirit. They are wrong and do not know what they are saying. When did a woman ever get pregnant by a woman?”

Not surprisingly, Gnosticism was branded as heretical by stalwart defenders of orthodoxy such as Tertullian and Irenaeus. Tertullian, “the Father of Western Theology,” was the first Christian author known to use the Latin term “Trinitas” for the Triune Christian God – naturally, the thought of a female Third Person was anathema to him, leading as it would to a blasphemous ménage à trois.

In the Hebrew language literature, Wisdom/Sophia as a personification enters the Tanakh relatively late in the game in the Book of Proverbs and then somewhat later in the Apocrypha and in the Dead Sea Scrolls. She appears as Sophia in the Greek language Wisdom of Solomon of the late 1st century BC; so Wisdom/Sophia appears to be an influence from the Hellenistic world and its Greek language, religion and philosophy. But why does this begin to happen at the end of the Biblical period? Is Wisdom/Sophia filling a vacuum that was somehow created in Jewish religious life? But the literature search does not end here. In the pre-Christian period there also emerged rabbinical practices such as Midrash and writings such as the Jerusalem Talmud and the Targums. Are further threads leading to the Holy Spirit of Christianity to be found there? Further examples of links to a female deity? Links to Wisdom/Sophia herself? Mystères. More to come.

The Third Person I: The Holy Spirit

To Christians, the Holy Spirit (once known as the Holy Ghost in the English speaking world) is the Third Person of the Holy Trinity, along with God the Father and God the Son.
Indeed for Protestants and Catholics, the Nicene Creed reads
“We believe in the Holy Spirit, the Lord, the giver of life,
who proceeds from the Father and the Son,
who with the Father and the Son is worshiped and glorified,
who has spoken through the prophets.”
Or more simply in the Apostles’ Creed
“I believe in the Holy Spirit.”
The phrase “and the Son” does not appear in all versions of the Nicene Creed and it was a key factor in the Great Schism of 1054 A.D. that separated the Greek Orthodox Church from the Roman Catholic Church. A mere twenty years later, in another break with the Orthodox Church, Pope Gregory VII instituted the requirement of celibacy for Catholic priests. It makes one think that, had the schism not taken place, the Catholic Church would not have made that move from an all-male priesthood to the celibate all-male priesthood which is plaguing it today.
The earliest Christian texts are the Epistles of St. Paul: his first Epistle dates from AD 50, while the earliest gospel (that of St. Mark) dates from AD 66-70. However, since the epistles were written after the events described in the gospels, they come later in editions of the New Testament.
Paul wrote that first epistle, known as 1 Thessalonians, in Greek to converted Jews of the diaspora and the other new Christians in the Macedonian city of Thessaloniki (Salonica) on the Aegean Sea. Boldly, at the very beginning of the letter, Paul lays out the doctrine of the Holy Trinity: in the New Revised Standard Version, we have
    1 To the church of the Thessalonians in God the Father and the Lord Jesus Christ: Grace to you and peace.
    2 We always give thanks to God for all of you and mention you in our prayers, constantly
    3 remembering before our God and Father your work of faith and labor of love and steadfastness of hope in our Lord Jesus Christ.
    4 For we know, brothers and sisters beloved by God, that he has chosen you,
    5 because our message of the gospel came to you not in word only, but also in power and in the Holy Spirit and with full conviction; just as you know what kind of persons we proved to be among you for your sake.
    6 And you became imitators of us and of the Lord, for in spite of persecution you received the word with joy inspired by the Holy Spirit.
Judaism is famously monotheistic and we understand that when Paul refers to “God” in his epistle, he is referring to Yahweh, the God of Judaism – “God the Father” to Christians. Likewise, the reference to “Jesus Christ” is clear – Jesus was a historical figure. But Paul also expected people in these congregations to understand his reference to the Holy Spirit and to the power associated with the Holy Spirit.
So who was in these congregations that Paul was writing to, who would understand what he was trying to say? Mystère.
By the time of Christ, Jews had long established enclaves in many cities around the Mediterranean including, famously, Alexandria, Corinth, Athens, Tarsus, Antioch and Rome itself. Under Julius Caesar, Judaism was declared to be a recognized religion, religio licita, which formalized its status in the Empire. Many Jews like Paul himself were Roman citizens. In Augustus’ time, the Jews of Rome even made it into the writings of Horace, one of the leading lights of the Golden Age of Latin Literature: for one thing, he chides the Jews of Rome for being insistent in their attempts at converting pagans – something that sounds unusual today, but the case has been made that proselytism is a natural characteristic of monotheism, which makes sense when you think about it.
Estimates for the Jewish share of the population of the Roman Empire at the time of Christ range from 5% to 10% – which is most impressive. (For a cliometric analysis of this diaspora and of early Christianity, see Cities of God by Rodney Stark.) These Hellenized, Greek speaking Jews used Hebrew for religious services and readings. Their presence across the Roman Empire was to prove critical to the spread of Christianity during the Pax Romana, a spread so rapid that already in 64 A.D. Nero blamed the Christians for the fire that destroyed much of Rome, the fire that he himself had commanded.
During the Hellenistic Period, the three centuries preceding the Christian era, Alexandria, in particular, became a great center of Jewish culture and learning – there the first five books of the Hebrew Bible were translated into Greek (independently and identically by 70 different scholars according to the Babylonian Talmud) yielding the Septuagint and creating en passant the Greek neologism diaspora as the term for the dispersion of the Jewish people. Throughout the Mediterranean world, the Jewish people’s place of worship became the synagogue (a Greek word meaning assembly).
In fact, Greek became the lingua franca of the Roman Empire itself. St. Paul even wrote his Epistle to the Romans in Greek. The emperor Marcus Aurelius wrote the twelve books of his Meditations in Greek; Julius Caesar and Mark Antony wooed Cleopatra in Greek. In Shakespeare’s Julius Caesar, the Senator and conspirator Casca reports that Cicero addressed the crowd in Greek, adding that he himself did not understand the great orator because “It was Greek to me” – here Shakespeare is putting us on because Plutarch reports that Casca did indeed speak Greek. As for Cicero, for once neither defending a political thug (e.g. Milo) nor attacking one (e.g. Catiline), he delivered his great oration on behalf of a liberal education, the Pro Archia, to gain Roman citizenship for his tutor Archias, a Greek from Antioch.
The spread of Christianity in the Greek speaking world was spearheaded by St. Paul as attested to by his Epistles and by the Acts of the Apostles. Indeed, Paul’s strategy in a new city was first to preach in synagogues. Although St Paul referred to himself as the Apostle to the Gentiles, he could still better be called the Apostle to the Urban Hellenized Jews, Jews like himself. Where he did prove himself an apostle to the Gentiles was when Paul, in opposition to some of the original apostles, declared that Christians did not have to follow Jewish dietary laws and that Christians need not practice the Semitic tribal practice of male circumcision; Islam, which also originated in the Semitic world, enforces both dietary laws and male circumcision.
The theology of the Holy Spirit is called pneumatology, from pneuma, the Greek word for spirit (or breath or wind) and the Septuagint’s translation of the Hebrew ruach. Scholars consider pneumatology central to Paul’s thinking.
In fact, Paul refers to the Holy Spirit time and again in his writings and in Paul’s Epistles the Holy Spirit is an independent force; the same applies to the Gospels: the Holy Spirit is a participant at the Annunciation, at the baptism of Christ, at the Temptation of Christ. In the Acts of the Apostles, it is the Holy Spirit who descends on the Apostles in the form of tongues of fire when they are gathered in Jerusalem for the Jewish harvest feast of Shavuot which takes place fifty days after Passover; in the Greek of the New Testament, this feast is called Pentecost and it is at this celebration that the Holy Spirit gives the Apostles the Gift of Tongues (meaning languages) and launches them on their careers as fishers of men.
The baptism of Jesus, with the Holy Spirit present in the form of a dove, was the subject of a Renaissance painting by Andrea del Verrocchio and his student Leonardo da Vinci. According to the father of art history, Giorgio Vasari, after this composition Verrocchio resolved never to paint again for his pupil had far surpassed him! While we’re dropping names, Caravaggio depicted Paul fallen from his horse after Jesus revealed Himself to him on the Road to Damascus.
Although important in the New Testament, reference to the “Holy Spirit” only occurs three times in the Old Testament and it is never used as a standalone noun phrase as it is in the New Testament; instead, it is used with possessive pronouns that refer to Yahweh such as “His Holy Spirit” (Isaiah 63:10,11) and “Thy Holy Spirit” (Psalms 51:11); for example, in the King James Bible, this last verse reads
    Cast me not away from thy presence; and take not thy holy spirit from me.
This is key – from the outset, in Christianity, the Holy Spirit is autonomous, part of the Godhead, not just a messenger of God such as an angel would be. And Paul and the evangelists assume that their readers know what they are writing about; they don’t go into long explanations of who the Holy Spirit is or where the Holy Spirit comes from.
The monotheism of Judaism has a place only for Yahweh, the God of the chosen people. But Christianity and its theology started in the Jewish world of the first century A.D.; so the concept of the Holy Spirit must have its roots in that world even though it is not present in Biblical Judaism. The last books of the Hebrew Bible date to the Fifth Century B.C. Jewish religious culture was as dynamic as ever in the post-Biblical period leading up to the birth of Christianity and beyond. New texts were written in Aramaic as well as in Greek and in Hebrew, creating a significant body of work.
So the place to start to look for the origin of the Holy Spirit is in the post-Biblical literature of Judaism. More to come.

Liberal Semantics

The word “liberal” originated in Latin, then made its way into French and from there into English. The Oxford English Dictionary gives this as its primary definition:
“Willing to respect or accept behaviour or opinions different from one’s own; open to new ideas.”
However, it also has a political usage as in “the liberal senator from Massachusetts.” This meaning and usage must be relatively new: for one thing, we know that “liberal” was not given a political connotation by Dr. Samuel Johnson in his celebrated dictionary of 1755:
    Liberal, adj. [liberalis, Latin, libéral, French]
1. Not mean; not low in birth; not low in mind.
2. Becoming a gentleman.
3. Munificent; generous; bountiful; not parcimonious.
So when did the good word take on that political connotation? Mystère.
We owe the attribution of a political meaning to the word to the Scottish Enlightenment and two of its leading lights, the historian William Robertson and the political economist Adam Smith. Robertson and Smith were friends and correspondents as well as colleagues at the University of Edinburgh; they used “liberal” to refer to a society with safeguards for private property and an economy based on market capitalism and free-trade. Robertson is given priority today for using it this way in his 1769 book The History of the Reign of the Emperor Charles V. On the other hand, many in the US follow the lead of conservative icon Friedrich Hayek who credited Smith based on the fact that the term appears in The Wealth of Nations (1776); Hayek wrote The Road to Serfdom (1944), a seminal work arguing that economic freedom is a prerequisite for individual liberty.
Today, the related term “classical liberalism” is applied to the philosophy of John Locke (1632-1704) and he is often referred to as the “father of liberalism.” His defense of individual liberty, his opposition to absolute monarchy, his insistence on separation of church and state, and his analysis of the role of “the social contract” provided the U.S. founding fathers with philosophical tools crucial for the Declaration of Independence, the Articles of Confederation and ultimately the Constitution. It is this classical liberalism that also inspired Simon Bolivar, Bernardo O’Higgins and other liberators of Latin America.
In the early 19th century, the Whig and Tory parties were dominant in the English parliament. Something revolutionary happened when the Whigs engineered the passage of the Reform Act of 1832 which was an important step toward making the U.K. a democracy in the modern sense of the term. According to historians, this began the peaceful transfer of power from the landed aristocracy to the emergent bourgeois class of merchants and industrialists. It also coincided with the end of the Romantic Movement, the era of the magical poetry of Keats and Shelley, and led into the Victorian Period and the well intentioned poetry of Arnold and Tennyson.
Since no good deed goes unpunished (especially in politics), passage of the Reform Act of 1832 also led to the demise of the Whig Party: the admission of the propertied middle class into the electorate and into the House of Commons itself split the Whigs and the new Liberal Party emerged. The Liberal Party was a powerful force in English political life into the 20th century. Throughout, the party’s hallmark was its stance on individual liberties, free-markets and free-trade.
Accordingly, in the latter part of the 19th century in Europe and the US, the term “liberalism” came to mean commitment to individual freedoms (in the spirit of Locke) together with support of free-market capitalism mixed in with social Darwinism. Small government became a goal: “That government is best that governs least” to steal a line from Henry David Thoreau.
Resistance to laissez-faire capitalism developed and led to movements like socialism and labor unions. In the US, social inequality also fueled populist movements such as that led by William Jennings Bryan, the champion of Free Silver and other causes. Bryan, a brilliant orator, was celebrated for his “Cross of Gold” speech, an attack on the gold standard, in which he intoned
    “you shall not crucify mankind upon a cross of gold.”
He was a national figure for many years and ran for President on the Democratic ticket three times; he earned multiple nicknames such as The Fundamentalist Pope, the Boy Orator of the Platte, The Silver Knight of the West and the Great Commoner.
At the turn of the century in the US, public intellectuals like John Dewey began to criticize the basis of laissez-faire liberalism as too individualistic and too threatening to an egalitarian society. President Theodore Roosevelt joined the fray, led the “progressive” movement, initiated “trust-busting” and began regulatory constraints to rein big business in. The Sixteenth Amendment, which authorized a progressive income tax, was passed by Congress in 1909 and ratified by the state legislatures in 1913.
At this time, the meaning of the word “liberal” took on its modern political meaning: “liberal” and “liberalism” came to refer to the non-socialist, non-communist political left – a position that both defends market capitalism and supports infrastructure investment and social programs that benefit large swaths of the population; in Europe the corresponding phenomenon is Social Democracy, though the Social Democrats tend to be more to the left and stronger supporters of the social safety net, not far from the people who call themselves “democratic socialists” in the US today.
On the other hand, the 19th century meaning of “liberalism” has been taken on by the term “neo-liberalism” which is used to designate aggressive free-market capitalism in the age of globalization.
In the first term of Woodrow Wilson’s presidency, Congress passed the Clayton Anti-Trust Act as well as legislation establishing the Federal Reserve System and the progressive income tax. Wilson is thus credited with being the founder of the modern Democratic Party’s liberalism – this despite his anti-immigrant, anti-Catholic and notoriously racist anti-African-American stances.
The great political achievement of the era was the 19th Amendment which established the right of women to vote. The movement had to overcome entrenched resistance, finally securing the support of Woodrow Wilson and getting the necessary votes in Congress in 1919. Perhaps, it is this that has earned Wilson his standing in the ranks of Democratic Party liberals.
Bryan, for his part a strong supporter of Wilson and his liberal agenda in the 1912 election, then served as Wilson’s first Secretary of State, resigning over the handling of the Lusitania sinking. His reputation has suffered over the years because of his humiliating battle with Clarence Darrow in the Scopes “Monkey” Trial of 1925 (Fredric March and Spencer Tracy resp. in “Inherit the Wind”); at the trial, religious fundamentalist Bryan argued against teaching human evolution in public schools. It is likely this has kept him off the list of heroes of liberal politics in the US, especially given that this motion picture, a Stanley Kramer “message film,” was an allegory about the McCarthy era witch-hunts. Speaking of allegories, a good case can be made that the Wizard of Oz is an allegory about the populist movement and the Cowardly Lion represents Bryan himself – note, for one thing, that in L. Frank Baum’s book Dorothy wears Silver Shoes and not Ruby Slippers!
The truly great American liberal was FDR whose mission it was to save capitalism from itself by enacting social programs called for by socialist and labor groups and by setting up regulations and guard rails for business and markets. The New Deal programs provided jobs and funded projects that seeded future economic growth; the regulations forced capitalism to deal with its problem of cyclical crises, panics and depressions. He called for a “bank holiday,” kept the country more or less on the gold standard by issuing an executive order to buy up nearly all the privately held gold in the country (hard to believe today), began Social Security and unemployment insurance, instituted centralized controls for industry, launched major public works projects (from the Lincoln Tunnel to the Grand Coulee Dam), brought electricity to farms, archived the nation’s folk music and folklore, sponsored projects which brought live theater to millions (launching the careers of Arthur Miller, Orson Welles, Elia Kazan and many others) and more. This was certainly not a time of government shutdowns.
In the post WWII period and into the 1960s, there were even “liberal Republicans” such as Jacob Javits and Nelson Rockefeller; today “liberal Republican” is an oxymoron. The most daring of the liberal Republicans was Earl Warren, the one-time Governor of California who in 1953 became Chief Justice of the Supreme Court. In that role, Warren created the modern activist court, stepping in to achieve justice for minorities, an imperative which the President and the Congress were too cowardly to take on. But his legacy of judicial activism has led to a politicized Supreme Court with liberals on the losing side in today’s run of 5-4 decisions.
Modern day liberalism in the U.S. is also exemplified by LBJ’s Great Society which instituted Medicare and Medicaid and which turned goals of the Civil Rights Movement into law with the Civil Rights Act of 1964 and the Voting Rights Act of 1965.
JFK and LBJ were slow to rally to the cause of the Civil Rights Movement (Eleanor Roosevelt was the great liberal champion of civil rights) but in the end they did. Richard Nixon and the Republicans then exploited anti-African-American resentment in the once Democratic “solid South” and implemented their “Southern strategy” which, as LBJ feared, has turned those states solidly Republican ever since. The liberals’ political clout was also gravely wounded by the ebbing of the power of once mighty labor unions across the heartland of the country. Further, the conservative movement was energized by the involvement of ideologues with deep pockets like the Koch brothers and by the emergence of charismatic candidates like Ronald Reagan. The end result has been that only the West Coast and the Northeast can be counted on to elect liberal candidates consistently, places like San Francisco and Brooklyn.
What is more, liberal politicians have lost their sense of mission and have failed America in many ways since that time as they have moved further and further to the right in the wake of electoral defeats, cozying up to Wall Street along the way. For example, it was Bill Clinton who signed the bill repealing the Glass-Steagall Act undoing one of the cornerstones of the New Deal; he signed the bill annulling the Aid to Families With Dependent Children Act which also went back to the New Deal; he signed the bill that has made the US the incarceration capital of the world, the Violent Crime Control and Law Enforcement Act.
Over the years, the venerable term “liberal” itself has been subjected to constant abuse from detractors. The list of mocking gibes includes tax-and-spend liberal, bleeding heart liberal, hopey-changey liberal, limousine liberal, Chardonnay sipping liberal, Massachusetts liberal, Hollywood liberal, … . There is even a book of such insults and a web site for coming up with new ones. And there was the humiliating defeat of the liberal standard bearer Hillary Clinton in 2016.
So battered is it today that “liberal” is giving way to “progressive,” the label of choice for so many of the men and women of the class of 2018 of the House of Representatives. Perhaps, one hundred years is the limit to the shelf life of a major American political label which would mean “liberal” has reached the end of the line – time to give it a rest and go back to Samuel Johnson’s definition?

Conservative Semantics

Conservatism as a political philosophy traces its roots to the late 18th century: its intellectual leaders were the Anglo-Irish Member of Parliament Edmund Burke and the Scottish economist and philosopher Adam Smith.

In his speeches and writings, Burke extolled tradition, the “natural law” and “natural rights”; he championed social hierarchy, an established church, gradual social change and free markets; he excoriated the French Revolution in his influential pamphlet Reflections on the Revolution in France, a defense of monarchy and the institutions that protect good social order.

Burke is also well known in the U.S. for his support for the colonists in the period before the American Revolution notably in his Speech on Conciliation with the Colonies (1775) where he alerts Parliament to the “fierce spirit of liberty” that characterizes Americans.

Adam Smith, a giant figure of the Scottish Enlightenment, was the first great intellectual champion of laissez-faire capitalism and author of the classic The Wealth of Nations (1776).

Burke and Smith formed a mutual admiration society. According to a biographer of Burke, Smith thought that “on subjects of political economy, [Burke] was the only man who, without communication, thought on these subjects exactly as he did”; Burke, for his part, called Smith’s opus “perhaps the most important book ever written.” Their view of things became the standard one for conservatives throughout the 19th century and well into the 20th.

However, there is an internal inconsistency in traditional conservatism. The problem is that, in the end, laissez-faire capitalism upends the very social structures that traditional conservatism seeks to maintain. The catch-phrase of the day among pundits has become “creative destruction”; this formula, coined by the Austrian-American economist Joseph Schumpeter, captures the churning of capitalism which systematically creates new industries and new social institutions that replace the old – e.g. Sears by Amazon, an America of farmers by an America of city dwellers. Marx argued that capitalism’s failures would lead to its demise; Schumpeter argued that capitalism has more to fear from its triumphs: ineluctably the colossal success of capitalism hollows out the social institutions and mores which nurture capitalism such as church-going and the Protestant Ethic itself. Look at Western Europe today where capitalism is triumphant but where church attendance is reduced to three events: “hatch, match and dispatch,” to put it in the playful way Anglicans do.

The Midas touch is still very much with us: U.S. capitalism tends to transform every activity it comes upon into a money-making version of itself. Thus something once innocent and playful like college athletics has been turned into a lucrative monopoly: the NCAA rules over a network of plantations staffed by indentured workers and signs billion dollar television contracts. Health care, too, has been transformed into a money-making machine with lamentable results: Americans pay twice as much for doctors’ care and prescription drugs as those in other advanced industrialized countries and the outcomes are grim in comparison: infant mortality and death in childbirth are off the charts in the U.S. and life expectancy is low compared to those other countries.

On the other hand, a modern capitalist economy can work well for its citizens. We have the examples of Scandinavia and of countries like Japan and Germany. Economists like Thomas Piketty write about the “thirty glorious” years after 1945 when post WWII capitalism built up a solid, prosperous middle class in Western Europe. Add to this what is known as the “French paradox” – the French drink more than Americans, smoke more, have sex more and still live some years longer. To make things worse, their cuisine is better, their work week is shorter and they take much longer vacations – one more example of how a nation can make capitalism work in the interest of its citizenry.

In American political life, in the 1930s, the label “conservative” was grabbed by forces opposed to FDR and the New Deal. Led by Senator Josiah W. Bailey of North Carolina, Democrats with some Republican support published “The Conservative Manifesto,” a document which extolled the virtues of free enterprise, limited government and the balance of power among the branches of government.

In the post-war period the standard bearer of conservatism in the U.S. was Republican Senator Robert Taft of Ohio who was anti-New-Deal, anti-union, pro-business and who, as a “fiscal conservative,” stood for reduced government spending and low taxes; he also stood for a non-interventionist foreign policy. His conservatism harked back to Burke’s ideals of community: he supported Social Security, a minimum wage, public housing and federal aid to public education.

However, the philosophy of the current “conservative” political leadership in the U.S. supports all the destructive social Darwinism of laissez-faire capitalism, reflecting the 17th century English philosopher Thomas Hobbes and his dystopian vision much more than either Burke or Smith. Contemporary “conservatism” in the U.S. is hardly traditional conservatism. What happened? Mystère.

A more formal manifesto of Burkean conservatism, The Conservative Mind, was published in 1953 by Russell Kirk, then a professor at Michigan State. But conservative thought was soon co-opted and transformed by a wealthy young Texan whose family money came from oil prospecting – in Mexico and Venezuela! William F. Buckley, like Kirk a Roman Catholic, was founder and longtime editor-in-chief of the seminal conservative weekly The National Review. Buckley is credited with (or accused of) transforming traditional Burkean conservatism into what goes by the name of “conservatism” in the U.S. today; he replaced the traditional emphasis on community with his libertarian viewpoint of “individualism” and replaced Taft’s non-interventionism with an aggressive Cold War political philosophy – the struggle against godless communism became the great moral cause of the “conservative movement.”

To his credit, Buckley kept his distance from fringe groups such as the John Birch Society; Buckley also eschewed Ayn Rand and her hyper-individualistic, atheistic philosophy of Objectivism; a man of letters himself, Buckley was likely appalled by her wooden prose – admittedly Russian and not English was her first language, but still she was no Vladimir Nabokov. On the other hand, Buckley had a long friendship with Norman Mailer, the literary icon from Brooklyn, the opposite of Buckley in almost every way.

Buckley as a cold war warrior was very different from libertarians Ron Paul and Rand Paul who both have an isolationist philosophy that opposes military intervention. On the other hand, all three have expressed similarly eccentric views on race presumably justified by their shared libertarian concept of the right of individuals to do whatever they choose to do even if it includes discrimination against others. For example, Rand Paul has stated that he would have voted against the Civil Rights Act of 1964 which outlawed the Jim Crow Laws of the segregationist states “because of the property rights element … .”

In the 1960s, 70s and 80s, Buckley’s influence spread. The future president Ronald Reagan was weaned off New Deal Liberalism through reading The National Review; in turn Buckley became a supporter of Reagan and they appeared together on Buckley’s TV program Firing Line. The “conservative movement” was also propelled by ideologues with deep pockets and long term vision like the Koch brothers – for an interesting history of all this, see Nancy MacLean’s Democracy in Chains.

To the Buckley conservatives today, destruction of social institutions, “creative” or otherwise, is somehow not a problem and militarism is somehow virtuous.

As for destruction, among the social structures that have fallen victim recently to creative destruction is the American middle class itself, as income inequality has grown apace. This process began at the tail end of the 1960s and has been accelerating since Ronald Reagan’s presidency as Keynesian economics has given way to “supply side” economics; moreover, the guardrails for capitalism imposed by the New Deal have come undone: the Glass-Steagall Act has been repealed, the labor movement has been marginalized, and high taxes on the wealthy have become a thing of the past – contrast this with the fact that Colonel Tom Parker, the manager of Elvis Presley, considered it his patriotic duty to keep The King in the 90% tax bracket back in the day.

As for militarism, despite VE Day and VJ Day, since the 1950s, the U.S. has been engaged in an endless sequence of wars – big (Korea) and small (Grenada), long (Vietnam) and short (the First Gulf War), visible (Afghanistan) and invisible (Niger), loud (Iraq) and quiet (Somalia), … . All of which has created a situation much like the permanent state of war of Orwell’s 1984.

Moreover, since Buckley’s time, the American “conservatives” have even moved further right: reading Ayn Rand (firmly atheist and pro-choice though she was) in high-school or college has become a rite of passage, e.g. ex-Speaker Paul Ryan. An interventionist, even war-mongering, wing of the “conservative movement” has emerged, the “neo-conservatives” or “neo-cons.” Led by Dick Cheney, they were the champions of George W. Bush’s invasion of Iraq and they applaud all troop “surges” and new military interventions.

As David Brooks recently pointed out in his New York Times column (Nov. 16, 2018), the end of the Cold War deprived the “conservative movement” of its great moral cause, the struggle against godless communist collectivism. And what was a cause has morphed into expensive military adventurism. Indeed, the end of the Cold War failed to yield a “peace dividend” and the military budget today threatens the economic survival of the nation – the histories of France, Spain and many other countries bear witness to how this works itself out, alas! In days of yore, it would have been the fiscal restraint of people known as conservatives that kept government spending in check; today “conservative” members of Congress continue to sound like Robert Taft on the subject of government spending when attacking programs sponsored by their opponents, but they do not hesitate to drive the national debt higher by over-funding the military and pursuing tax cuts for corporations and the wealthy. Supply-side economics cleaves an ever widening income gap, the least conservative social policy imaginable. Then too these champions of the free market and opponents of government intervention rushed to bail out the big banks (but not the citizens whose homes were foreclosed on) during the Great Recession of 2008. All this leads one to think that this class of politicians is serving its donor class and not the working class, the middle class, the upper middle class or even much of the upper class.

Perhaps semantic rock bottom is reached when “conservative” members of Congress vote vociferously against any measure for environmental conservation. But this is predictable given the lobbying power of the fossil fuel industry, a power so impressive that even the current head of the Environmental Protection Agency is a veteran lobbyist for Big Coal. Actually for these conservatives, climate change denial is consistent with their core beliefs: fighting the effects of global warming effectively will require large-scale government intervention, significantly increased regulation of industry and agriculture as well as binding international agreements – all of which are anathema to conservatives in the U.S. today.

Still, it is in matters judicial that the word “conservative” is most misapplied. One speaks today of “conservative majorities” on the Supreme Court but these majorities have proved themselves all too ready to rewrite laws and overturn precedent in 5-4 decisions in an aggressive phase of judicial activism.

So for those who fear that corruption of its language is dangerous for the U.S. population, this is the worst of times: “liberal” which once designated a proponent of Gilded Age laissez-faire capitalism is now claimed by the heirs of the New Deal and the Great Society; “conservative” which once designated a traditionalist is now the label for radical activists both political and judicial. “Liberal” is yielding to “progressive” now. However, the word “conservative” has a certain gravitas to it and “conservatism” has taken on the trappings of a religious movement complete with patron saints like Ronald Reagan and Margaret Thatcher; “conservative” is likely to endure, self-contradictory though it has become.

The Roberts Court

 

In 2005, following the death of Chief Justice Rehnquist, John Roberts was named to the position of Chief Justice by Republican president George W. Bush. Another change in Court personnel occurred in 2006 when Sandra Day O’Connor retired and was replaced by Justice Samuel Alito. With Roberts and Alito, the Court had an even more solid “conservative” majority than before – the result being that, more than ever, in a 5-4 decision a justice’s vote would be determined by the party of the president who appointed him or her.

It was Ronald Reagan who named the first woman justice to the Supreme Court with the appointment of Sandra Day O’Connor in 1981. It was also Ronald Reagan who began the practice of Republican presidents’ naming ideological, conservative Roman Catholics to the Supreme Court with the appointment of Antonin Scalia in 1986. This practice has indeed been followed faithfully by Republican presidents, and we have to include Neil Gorsuch in this group of seven – though an Episcopalian today, Gorsuch was raised Catholic, went to parochial school and even attended the now notorious Georgetown Prep. Just think: with Thomas and Gorsuch already seated, the Brett Kavanaugh appointment brings the number of Jesuit-trained justices on the Court up to three; this numerologically magic number of men trained by an organization famous for having its own adjective, plus the absence of true WASPs from the Supreme Court since 2010, plus the fact that all five of the current “conservative” justices have strong ties to the cabalistic Federalist Society, could all make for an interesting conspiracy theory – or at least the elements of a Dan Brown novel.

It is said that Chief Justice Roberts is concerned about his legacy and does not want his Court to go down in history as ideological and “right wing.” However, this “conservative” majority has proven radical in its 5-4 decisions, decisions for which it bears full responsibility.

They have put gun manufacturers before people, replacing the standard interpretation of the 2nd Amendment that went back to Madison’s time with a dangerous new one, cynically appealing to “originalism” and claiming the authority to speak for Madison and his contemporaries (District of Columbia v. Heller 2008).

Indeed, with Heller there was no compelling legal reason to play games with the meaning of the 2nd Amendment – if the over 200 years of interpretation of the wording of the amendment isn’t enough, if the term “militia” isn’t enough, if the term “bear arms” isn’t enough to link the amendment to matters military in the minds of the framers, one can consult James Madison’s original text:

    “The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.” [Italics added].

The italicized clause was written to reassure Quakers and other pacifist religious groups that the amendment was not forcing them to serve in the military, but it was ultimately excluded from the final version for reasons of separation of church and state. This clause certainly indicates that the entirety of the amendment, in Madison’s view, was for the purpose of maintaining militias: Quakers are not vegetarians and do use firearms for hunting. Note too that Madison implies in this text and in the shorter final text as well that “the right to bear arms” is a collective “right of the people” rather than an individual right to own firearms.

The radical ruling in Heller by the five “conservative” justices has stopped all attempts at gun control, enriched gun manufacturers, elevated the National Rifle Association to the status of a cult and made the Court complicit in the wanton killings of so many.

The “conservative” majority of justices has overturned campaign finance laws passed by Congress and signed by the President by summoning up an astonishing, ontologically challenged version of the legal fiction that corporations are “persons” and imbuing them with new First Amendment rights (Citizens United v. FEC 2010).

Corporations are treated as legal “persons” in some court matters, basically so that they can pay taxes and so that the officers of the corporation are not personally liable for a corporation’s debts. But there was no compelling legal reason to play Frankenstein in Citizens United and create a new race of corporate “persons” by endowing corporations with a human-like right to free speech that allows them to spend their unlimited money on U.S. political campaigns; this decision is the first of the Roberts Court’s rulings to make this list of all-time worst Supreme Court decisions, a list (https://blogs.findlaw.com/supreme_court/2015/10/13-worst-supreme-court-decisions-of-all-time.html ) compiled for legal professionals. It has also made TIME magazine’s list of the two worst decisions of the last 60 years and likely many other such rankings. The immediate impact of this decision has been a further widening of the gap between representatives and the people they are supposed to represent; the political class was once at least somewhat responsive to the voters, but now it is responsive only to the donor class. This likely works well for the libertarians and conservatives who boast “this is a Republic, not a Democracy.”

These same five justices have continued their work by

  • usurping Congress’ authority and undoing hard-won protections for minority voters under the Voting Rights Act by adventuring into areas of history and politics that they clearly do not grasp and basing the decision on a disingenuous view of contemporary American race relations (Shelby County v. Holder 2013),

  • doubling down on quashing the Voting Rights Act five years later in a decision that overturned a lower court ruling that Texas’ gerrymandered redistricting map undercut the voting power of black and Hispanic voters (the Texas redistricting case, Abbott v. Perez 2018),

  • breaching the separation of Church and State by ascribing “religious interests” to companies in a libertarian judgment that can justify discrimination in the name of a person’s individual freedoms, the “person” in this case being a corporation no less (Burwell v. Hobby Lobby Stores 2014),

  • gravely wounding the labor movement by overturning the Court’s own ruling in a 1977 case, Abood v. Detroit Board of Education, thus undoing decades of established labor law practice (Janus v. AFSCME 2018) – a move counter to the common-law principle of following precedent.

These six decisions are examples of “self-inflicted wounds” on the part of the Roberts Court and can be added to a list begun by Chief Justice Charles Evans Hughes, a list that begins with Dred Scott. The recent accessions of Neil Gorsuch and Brett Kavanaugh to seats on the Court may well make for even more decisions of this kind.

This judicial activism is indeed as far away from the dictionary meaning of “conservatism” as one can get. Calling these activist judges “conservative” makes American English a form of the “Newspeak” of Orwell’s 1984. The Court can seem to revel in its arrogance and its usurpation of power: Justice Scalia would dismiss questions about Bush v. Gore with “Get over it” – a rejoinder some liken to “Let them eat cake” – and he refused to recuse himself in cases involving Dick Cheney, his longtime friend (Bush v. Gore, Cheney v. U.S. District Court).

The simple fact is that the courts have become too politicized. The recent fracas between the President and the Chief Justice where the President claimed that justices’ opinions depended on who appointed them just makes this all the more apparent.

Pundits today talk endlessly about how “we are headed for a constitutional crisis” in connection with potential proceedings to impeach the President. But we are already in a permanent constitutional crisis. For example, a clear majority in the country wants to undo the results of Citizens United and of Heller in particular – both decisions that struck down laws enacted by elected representatives – yet has no practical way to do so. Congressional term limits are another example; in 1995, with U.S. Term Limits Inc. v. Thornton, the Court nullified 23 state laws instituting term limits for members of Congress, thereby declaring that the Constitution would have to be amended for such laws to pass judicial review.

In the U.S., the Congress is helpless when confronted with this kind of dilemma; passing laws cannot help since the Court has already had the final say, a true Catch-22. This is an American constitutional problem, American Exceptionalism gone awry. In England and Holland, for example, the courts cannot apply judicial review to nullify a law; in France, the Conseil constitutionnel has very limited power to declare a law unconstitutional; this was deliberately engineered by Charles de Gaulle to avoid an American-style situation because, per de Gaulle, “la [seule] cour suprême, c’est le peuple” (the only supreme court is the people).

So what can the citizen majority do? The only conceivable recourse is to amend the Constitution; but the Constitution itself makes that prospect dim, since an amendment would require approval by a two-thirds supermajority in both houses of Congress followed by ratification by three-fourths of the states; moreover, states with barely 4% of the population account for over one-fourth of the states, making it easy and relatively inexpensive for opposition to take hold – e.g. the Equal Rights Amendment. Also, those small, rural states’ interests can be very different from the interests of the large states – one reason reform of the Electoral College system is an impossibility with the Constitution as it is today. Only purely bureaucratic measures can survive the amendment ratification process. Technically, there is a second procedure in Article V of the Constitution whereby a Constitutional Convention can be called at the initiative of two-thirds of the states, but approval of any resulting amendment by three-fourths of the states is still required; this procedure has never been used. Doggedly, the feisty group U.S. Term Limits (USTL), the losing side in that 1995 decision, is trying to do just that! For their website, click https://www.termlimits.com/ .
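To make the Article V arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the current 50 states and the three-fourths ratification requirement:

    import math

    TOTAL_STATES = 50
    RATIFY_FRACTION = 3 / 4  # Article V: three-fourths of the states must ratify

    states_needed_to_ratify = math.ceil(TOTAL_STATES * RATIFY_FRACTION)  # 38
    states_enough_to_block = TOTAL_STATES - states_needed_to_ratify + 1  # 13

    print(states_needed_to_ratify, states_enough_to_block)  # prints: 38 13

Thirteen “no” states are enough to stop any amendment, and a coalition of the least populous states reaches that number easily.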

What has happened is that the Constitution has been gamed by the executive and judicial branches of government and the end result is that the legislative branch is mostly reduced to theatrics. Thus, for example, while the Congress is supposed to have the power to declare war and the power to levy tariffs, these powers have in practice been ceded to the office of the President. Even the power to make law has, for so many purposes, been passed to the courts, where every law is put through a legal maze and the courts are free to nullify the law or change its meaning by invoking the interpretation du jour of the Constitution and/or overturning legal precedents, all on an as-needed basis.

This surge in the power of the judiciary was declared to be impossible by Alexander Hamilton in Federalist 78, where he argues approvingly that the judiciary will necessarily be the weakest branch of the government under the Constitution. But oddly no one is paying attention to the rock-star founding father this time. For example, this Federalist Paper of Hamilton’s is sacred scripture for the assertive Federalist Society, yet they seem silent on this issue – not surprising, given that they have become the gatekeepers for Republican presidents’ nominees to the Supreme Court.

Americans are simply blinded to problems with the Constitution by the endless hymns in its praise in the name of American Exceptionalism. Many in Europe also argue that the way the Constitution contributes to inaction is a principal reason that voter participation in the U.S. is far lower than in Europe and elsewhere. Add to that the American citizen’s well-founded impression that it is the money of corporations, billionaires and super-PACs, in cahoots with the lobbyists of the Military Industrial Complex, Big Agriculture, Big Pharma, Big Oil and Big Banks, that runs the show, and you have a surefire formula for voter indifference. Even the improved turnout in the 2018 midterm elections was unimpressive by international standards.

This is not a good situation; history has examples of what happens when political institutions are no longer capable of running a complex nation – the French Revolution, the fall of the Roman Republic, the rise of Fascism in post WWI Europe … .

Bush v. Gore

In 1986, when Warren Burger retired, Ronald Reagan promoted Associate Justice William Rehnquist to the position of Chief Justice and nominated Antonin Scalia to fill Rehnquist’s seat. With the later additions of Anthony Kennedy (1988) and Clarence Thomas (1991), the Court came to have a solid conservative kernel consisting of the five justices Rehnquist, Scalia, Thomas, O’Connor and Kennedy; there was also Justice John Paul Stevens (appointed by Gerald Ford) who was considered a “moderate conservative.” On occasion O’Connor or Kennedy could become a swing vote and turn things in another direction, and Stevens too voted against the conservative majority on some important decisions.
While more conservative than the Burger Court, the Rehnquist Court did not overthrow the legacy of the Warren Court; on the other hand, it promoted a policy of “New Federalism” which favored empowering the states rather than the federal government.
This philosophy was applied in two cases that weakened Roe v. Wade, the defining ruling of the Burger Court.
Thus in Webster v. Reproductive Health Services (1989), the Court upheld a Missouri law that restricted the way state funds could be used in connection with counseling and other aspects of abortion services; this ruling allowed states to legislate in ways thought to have been ruled out by Roe.
As a second example, we have their ruling in Planned Parenthood v. Casey (1992) which also weakened Roe by giving much more power to the states to control access to abortion. Thus today in states like Mississippi, there is virtually no such access. All this works against the poor and the less affluent as women need to travel far, even out of state, to get the medical attention they seek.
Then the Rehnquist Court delivered one of the most controversial, politicized decisions imaginable with its ruling in Bush v. Gore (2000). With this decision, the Court came between a state supreme court and the state’s election system and hand-delivered the presidency to Republican George W. Bush.
After this case, the Court made other decisions that generated some controversy, but in these it came down, relatively speaking, on the liberal side in ruling on anti-sodomy laws, on affirmative action and on election finance. However, Bush v. Gore is considered one of the worst Supreme Court decisions of all time. For a list that includes this decision, Dred Scott, Plessy v. Ferguson and ten others, click HERE ; for a TIME magazine piece that singles it out as one of the two worst decisions since 1960 (along with Citizens United v. FEC), click HERE .
Naturally, the 5-4 decision in Bush v. Gore by the Court’s conservative kernel is controversial because of the dramatic end it put to the 2000 presidential election. There are also legal and procedural aspects of the case that get people’s dander up.
To start, there is the fact that in this decision the Court overruled a state supreme court on the matter of elections, something that the Constitution itself says should be left to the states.
For elections, Section 4 of Article 1 of the U.S. Constitution leaves the implementation to the states to carry out, in the manner they deem fit – subject to Congressional oversight but not to court oversight:
    “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing (sic) Senators.”
N.B. In the Constitution, the “Senators” are an exception because at that time the senators were chosen by the state legislatures and direct election of senators by popular vote did not come about until 1913 and the 17th Amendment.
From the time of the Constitution, voting practices have varied from state to state. In fact, at the outset free African-Americans with property could vote in Maryland and that lasted until 1810; women of property could vote in New Jersey until 1807; in both cases, the state legislatures eventually stepped in and “restored order.”
In the Constitution, the manner of choosing the electors of the Electoral College is left entirely to each state legislature, and at the outset many legislatures simply named the electors themselves without any popular vote at all – Hamilton and Madison were most fearful of “mob rule.” The only founding father who expressed some admiration for the mass of U.S. citizenry was, not surprisingly, Jefferson, who famously asserted in a letter to Lafayette that “The yeomanry of the United States are not the canaille [rabble] of Paris.”
Choosing electors by popular vote had nonetheless become the norm in nearly all states by the 1820s; under the Constitution (for presidential electors, Article II, Section 1), it is each state’s responsibility to implement its own system for choosing electors, and there is no requirement for uniformity. In fact, today Nebraska and Maine use congressional district voting to divide up their electors among the candidates while all the other states use plurality voting, in which the presidential candidate with the most votes is awarded all the electoral votes from that state. In yet another break with the plurality voting system that the U.S. inherited from England, the state of Maine now employs ranked choice voting to elect its Congressional representatives – in fact, in 2018 a candidate in a Maine House race with fewer first-place votes but a larger total of first- and second-place votes emerged the victor in the second round of the instant runoff.
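For readers curious about the mechanics of such an instant runoff, here is a minimal sketch in Python; the candidate names and ballot counts are invented for illustration and are not the actual 2018 Maine returns:

    from collections import Counter

    def instant_runoff(ballots):
        # Count each ballot for its highest-ranked surviving candidate;
        # if no one has a majority, eliminate the last-place candidate,
        # transfer those ballots to the voters' next choices, and repeat.
        remaining = {c for ballot in ballots for c in ballot}
        while True:
            tallies = Counter(
                next((c for c in ballot if c in remaining), None) for ballot in ballots
            )
            tallies.pop(None, None)  # ballots with no surviving choice are exhausted
            leader, leader_votes = tallies.most_common(1)[0]
            if leader_votes * 2 > sum(tallies.values()) or len(tallies) == 1:
                return leader, dict(tallies)
            remaining.discard(min(tallies, key=tallies.get))

    # Hypothetical race: A leads on first choices, but C wins once B is eliminated.
    ballots = [("A", "C")] * 46 + [("B", "C")] * 10 + [("C", "B")] * 44
    print(instant_runoff(ballots))  # ('C', {'A': 46, 'C': 54})

The point of the method is that the candidate who leads on first-place votes is not necessarily the one preferred by a majority once lower choices are counted.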
So from a Constitutional point of view, the Supreme Court really did not have the authority to take on the case of Bush v. Gore. In his dissent, Justice John Paul Stevens decried this usurpation of state court power:
    [The court displayed] “an unstated lack of confidence in the impartiality and capacity of the state judges who would make the critical decisions if the vote count were to proceed”.
Moreover, Sandra Day O’Connor said as much when some years later in 2013 she expressed regret over her role in Bush v. Gore telling the Chicago Tribune editorial board: “Maybe the court should have said, ‘We’re not going to take it, goodbye.’ ”
Taking this case on added to the Court’s history of “self-inflicted wounds,” to use the phrase Chief Justice Charles Evans Hughes applied to bad decisions that the Court had no compelling legal reason to make the way it did.
The concurring justices admitted that their decision was not truly a legal ruling but rather an ad hoc way of making a problem go away when they said that the ruling in Bush v. Gore should not be considered a precedent for future cases:
    “Our consideration is limited to the present circumstances, for the problem of equal protection in election processes generally presents many complexities.”
Another odd thing was that the ruling did not follow the usual practice of having one justice write the deciding opinion with concurring and dissenting opinions from the other justices. Instead, the majority issued an unsigned per curiam decision, a form normally reserved for brief, uncontroversial rulings or for announcing the non-decision of a 4-4 deadlocked Court. It is a technique for dodging responsibility and not assigning credit for a decision to any particular justice. As another example of how this device is employed, the Florida Supreme Court often issues per curiam decisions in death penalty cases.
Borrowing a trope from the Roman orator Cicero, we pass over in silence the revelation that three of the majority justices in this case had reason to recuse themselves: by not mentioning the fact that Justice Thomas’ wife was very active in the Bush transition team even as the case was before the Court, by leaving out the fact that Justice Scalia’s son was employed by the very law firm that argued Bush’s case before the Court, by omitting the fact that Justice Scalia and vice-presidential candidate Dick Cheney were longtime personal friends and by skipping over the fact that, according to The Wall Street Journal and Newsweek, Justice O’Connor had previously said that a Gore victory would be a disaster for her because she would not want to retire under a Democratic president!
For an image of Cicero practicing his craft before an enthralled Roman Senate, click HERE .
So we limit ourselves to quoting Harvard Professor Alan Dershowitz who summed things up this way:
    “The decision in the Florida election case may be ranked as the single most corrupt decision in Supreme Court history, because it is the only one that I know of where the majority justices decided as they did because of the personal identity and political affiliation of the litigants. This was cheating, and a violation of the judicial oath.”
Another villain in the piece is Governor Jeb Bush of Florida, whose voter suppression tactics, implemented by Secretary of State Katherine Harris, disenfranchised a significant number of voters. In the run-up to this election, according to the Brennan Center for Justice at NYU, some 4,800 eligible African-American Florida voters were wrongly identified as convicted felons and purged from the voting rolls. Given that 86% of African-American voters went for Gore over Bush in 2000, one can do the math and see that Gore would likely have won if but 20% of these purged voters had been able to cast ballots.
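To spell out that back-of-the-envelope math, here is a minimal sketch; it assumes Bush’s certified Florida margin of 537 votes (a figure not given above) and, for simplicity, that any purged voter not voting for Gore would have voted for Bush:

    purged = 4800        # eligible voters wrongly purged (the Brennan Center figure cited above)
    turnout = 0.20       # suppose only 20% of them had actually cast ballots
    gore_share = 0.86    # share of African-American voters who went for Gore in 2000
    bush_margin = 537    # Bush's certified statewide margin in Florida (an added assumption)

    ballots = purged * turnout                                  # 960 additional ballots
    net_gore_gain = ballots * gore_share - ballots * (1 - gore_share)
    print(round(net_gore_gain), "net votes for Gore vs. a margin of", bush_margin)
    # prints: 691 net votes for Gore vs. a margin of 537 -> more than enough to flip Florida

Even under these cautious assumptions, the net gain for Gore exceeds the official margin.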
Yet another villain in the piece, and in the recurring election problems in Florida, is the plurality voting system that the state uses to assign all of its electoral votes to the candidate who wins the most votes (but not necessarily a majority of the votes). This system works poorly in cases where elections are as tight as they repeatedly prove to be in Florida. In 2000, had Florida been using ranked-choice voting (to account for votes for Nader and Buchanan) or congressional district voting (as in Maine and Nebraska), there would have been no recount crisis at all – and either way Gore would in all probability have won enough electoral votes to secure the presidency, and the matter never would have reached the Supreme Court.
Sadly, the issues of the presidential election in Florida in 2000 are still very much with us – the clumsiness of plurality voting when elections are close, the impact of voter suppression, antiquated equipment, the role of the Secretary of State and the Governor in supervising elections, … . The plot only thickens.

The Warren Court, Part B

In the period from 1953 to 1969, Earl Warren became the most powerful Chief Justice since John Marshall as he led the Court through a dazzling series of rulings that established the judiciary as a more than equal partner in government – an outcome deemed impossible by Alexander Hamilton in his influential paper Federalist 78 and an outcome deemed undesirable for the separation of powers in government by Montesquieu in his seminal The Spirit of the Laws. The impact of this Court was so dramatic that it provoked a nationwide call among conservatives for Warren’s impeachment. (Click HERE).
As the Cold War intensified, the historical American separation of Church and State was compromised. In the name of combating godless communism, the national motto was changed! From the earliest days of the Republic, the motto had been “E Pluribus Unum,” which is Latin for “Out of Many, One”; this motto was adopted by an Act of Congress under the Articles of Confederation in 1782. In 1956, the official motto became “In God We Trust” and that text now appears on all U.S. paper currency. Shouldn’t they at least have asked what the deist-leaning Washington, Jefferson, Franklin and Hamilton would have thought before doing this – after all, their pictures are on the bills?
Spurred on by the fact that the phrase “under God” appears in most versions of the Gettysburg Address,
          that this nation, under God, shall have a new birth of freedom
groups affiliated with organized religion like the Knights of Columbus successfully campaigned for Congress to insert this phrase into the Pledge of Allegiance; this was done in 1954. Interestingly, “under God” does not appear in Lincoln’s written text for the cemetery dedication speech but was recorded by listeners who were taking notes. Again, this insertion in the Pledge was justified at the time by the need to rally the troops in the struggle against atheistic communism. In particular, in the Catholic Church a link was established between the apparitions of The Virgin at Fatima in the period from May to October of 1917 and the October Revolution in Russia in 1917 – though the Revolution actually took place on November 6th and 7th in the Gregorian Calendar; with the Cold War raging, the message of Fatima became a call to say the rosary for the conversion of Russia, a directive that was followed fervently by the laity, especially school children, during the 1950s and 1960s. In addition, the “Second Secret” of Our Lady of Fatima was revealed to contain the line “If my requests are heeded, Russia will be converted, and there will be peace.” When the Soviet Union fell at the end of 1991, credit was ascribed to Ronald Reagan and other political figures; American Catholics of a certain age felt slighted indeed when their contributing effort went unrecognized by the general public!
The course was reversed somewhat with the Warren Court’s verdict in Engel v. Vitale (1962) when the Court declared that organized school prayer violated the separation of Church and State. A second (and better known) decision followed in Abington School District v. Schempp (1963) where the Court ruled that official school Bible reading also violated the separation of Church and State. This latter case is better known in part because it involved the controversial atheist Madalyn Murray O’Hair, who went on to make an unsuccessful court challenge to remove “In God We Trust” from U.S. paper currency. Ironically, the federal courts that thwarted this effort cited Abington School District, where Justice Brennan’s concurring opinion explicitly stated that “the motto” was simply too woven into the fabric of American life to “present that type of involvement which the First Amendment prohibits.” In the U.S., “God” written with a capital “G” refers specifically to the Christian deity; so a critic deconstructing Brennan’s logic might argue that Brennan concedes that worship of this deity is already an established religion here.
The Warren Court also had a significant impact on other areas of rights and liberties.
With Baker v. Carr (1962) and Reynolds v. Sims (1964), the Court codified the principle of “one man, one vote.” In the Baker case, the key issue was whether state legislative redistricting was a matter for state and federal legislatures or whether it came under the authority of the courts. Here, the Court overturned its own decision in Colegrove v. Green (1946) where it ruled that such redistricting was a matter for the legislatures themselves with Justice Frankfurter declaring “Courts ought not to enter this political thicket.” The majority opinion in the Baker ruling was written by Justice Brennan; Frankfurter naturally dissented. In any case, this was a bold usurpation of authority on the part of the Supreme Court, something hard to undo even should Congress wish to do so. Again we are very far from Marbury v. Madison; were that case to come up today, one would be very surprised if the Supreme Court didn’t instruct Secretary of State Madison to install Marbury as Justice of the Peace in Washington D.C.
With Gideon v. Wainwright (1963) the Court established the accused’s right to a lawyer in state legal proceedings. This right is established for defendants vis-à-vis the federal government by the Bill of Rights with the Fifth and Sixth Amendments; this case extended that protection to defendants in dealings with the individual states.
With Miranda v. Arizona (1966), it mandated protection against self-incrimination – the “Miranda rights” of which a suspect must be informed. A Virginia state law banning interracial marriage was struck down as unconstitutional in Loving v. Virginia (1967), a major civil rights case on its own.
The Gideon and Miranda rulings were controversial, especially Miranda, but they do serve to protect the individual citizen from the awesome power of the State, very much in the spirit of the Bill of Rights and of the Magna Carta; behind the Loving case is an inspiring love story and, indeed, it is the subject of a recent motion picture.
Warren’s legacy is complex. On the one hand, his Court courageously addressed pressing issues of civil rights and civil liberties, issues that the legislative and executive branches would not deal with. But by going where Congress feared to tread, the Court altered the delicate balance of the separation of powers among the three branches of government – irreparably, it appears.
The Warren Court (1953-1969) was followed by the Burger Court (1969-1986).
Without Earl Warren, the Court quickly reverted to making decisions that went against minorities. In San Antonio Independent School District v. Rodriguez (1973), the Court held in a 5-4 decision that inequities in school funding did not violate the Constitution; the ruling implied that discrimination against the poor is perfectly compatible with the U.S. Constitution and that the right to an education is not a fundamental right. This decision was based on the fact that the right to an education does not appear in the Constitution, echoing the logic of Marbury. Later some plaintiffs managed to side-step this ruling by appealing directly to state constitutions. We might add that all five concurring justices in this 5-4 ruling were appointed by Republican presidents – a pattern that is all too common today; judicial activism fueled by political ideology is a dangerous force.
The following year, in another 5-4 decision, Milliken v. Bradley (1974), the Court further weakened Brown by overturning a circuit court’s ruling. With this ruling, the Court scratched a plan for school desegregation in the Detroit metropolitan area that involved separate school districts, thus preventing the integration of students from Detroit itself with those of adjacent suburbs like Grosse Pointe. The progressive stalwarts Marshall, Douglas and Brennan were joined by Byron White in dissent; the five concurring justices were all appointed by Republican presidents. The decision cemented into place the pattern of city schools with black students and surrounding suburban schools with white students.
The most controversial decision made by the Burger Court was Roe v. Wade (1973). This ruling invoked the Due Process Clause of the 14th Amendment, established a woman’s right to privacy as a fundamental right and declared that abortion could not be prohibited by the states until the third trimester of pregnancy. Critics, including Ruth Bader Ginsburg, have found fault with the substance of the decision and its being “about a doctor’s freedom to practice his profession as he thinks best…. It wasn’t woman-centered. It was physician-centered.” A fresh attempt to overturn Roe and subsequent refinements such as Planned Parenthood v. Casey (1992) is expected, given the current ideological makeup of the conservative majority on the Court and the current Court’s propensity to overturn even recent rulings.
Today a political dimension has been added to the overweening power of the Court: in 5-4 decisions there continues to be, with rare exceptions, a direct correlation between a justice’s vote and the party of the president who appointed that justice. To that add the blatantly partisan political shenanigans we have seen on the part of the Senate Majority Leader in dealing with Supreme Court nominations, and add the litmus test provided by the conservative/libertarian Federalist Society. The plot thickens. Affaire à suivre.

The Warren Court, Part A

Turning to the courts when the other branches of government would not act was the technique James Otis and the colonists resorted to in the period before the American Revolution, the period when the Parliament and the Crown would not address “taxation without representation.” Like the colonists, African-Americans had to deal with a government that did not represent them. Turning to the courts to achieve racial justice and to bring about social change was then the strategy developed by the NAACP. However, for a long time, even victories in the federal courts were stymied by state level opposition. For example, Guinn v. United States (1915) put an end to one “literacy test” technique for voter suppression but substitute methods were quickly developed.
In the 1950s, the Supreme Court finally undid the post-Civil War cases in which the Court had authorized state-level suppression of the civil rights of African-Americans – e.g. the Slaughterhouse Cases (1873), the Civil Rights Cases (1883), Plessy v. Ferguson (1896); these were the Court decisions that callously rolled back the 13th, 14th and 15th amendments to the Constitution and locked African-Americans into an appalling system.
The first chink in Plessy was made in a case brilliantly argued before the Supreme Court by Thurgood Marshall, Sweatt v. Painter (1950). The educational institution in this case was the University of Texas Law School at Austin, which at that time actually had a purportedly equal but certainly separate school for African-American law students. The Court was led by Kentuckian Fred M. Vinson, the last Chief Justice to be appointed by a Democratic president – in this case Harry Truman! Marshall exposed the law school charade for the scam that it was. Similarly and almost simultaneously, in McLaurin v. Oklahoma State Regents for Higher Education, the Court ruled that the University of Oklahoma could not enforce segregation in classrooms for PhD students. In these cases, the decisions invoked the Equal Protection Clause of the 14th Amendment; both verdicts were unanimous.
These two important victories for civil rights clearly meant that by 1950 things were starting to change, however slowly – was it World War II and the subsequent integration of the military? Was it Jackie Robinson, the Brooklyn Dodgers and the integration of baseball? Was it the persistence of African-Americans as they fought for what was right? Was it the Cold War fear that U.S. racial segregation was a propaganda win for the international Communist movement? Was it the fear that the American Communist Party had gained too much influence in the African-American community – indeed, Langston Hughes, Paul Robeson and other leaders had visited the Soviet Union, and the leading scholar in the U.S. of African-American history was Herbert Aptheker, a card-carrying member of the Communist Party? Or was it an enhanced sense of simple justice on the part of nine “old white men”?
The former Republican Governor of California, Earl Warren, was named to succeed Vinson in 1953 by President Dwight D. Eisenhower. The Warren Court would overturn Plessy and other post Civil War decisions that violated the civil rights of African-Americans and go on to use the power of the Court in other areas of political and civil liberties. This was a period of true judicial activism. Experienced in government, Warren saw that the Court would have to step in to achieve important democratic goals that the Congress was unwilling to act on. Several strong, eminent jurists were part of this Court. There were the heralded liberals William O. Douglas and Hugo Black. There was Viennese-born Felix Frankfurter, a former Harvard Law Professor and a co-founder of the ACLU; Frankfurter was also a proponent of judicial restraint which strained his relationship with Warren over time as bold judgments were laid down. For legal intricacies, Warren relied on William J. Brennan, another Eisenhower appointee but a friend of Labor and a political progressive. Associate Justice John Marshall Harlan, the grandson and namesake of the sole dissenter in Plessy, was the leader of the conservative wing.
Perhaps the best known of the Warren era cases is Brown v. Board of Education, which grouped several civil rights suits that were being pursued by the NAACP and others; the ruling in this case, which like Sweatt was again based on the Equal Protection Clause, finally undid Plessy. This case too was argued before the Court by Thurgood Marshall.
Brown was followed by several other civil rights cases which ended legal segregation in other aspects of American life. Moreover, when school integration was not being implemented around the country, with the case Brown v. Board of Education II (1955), the Court ordered schools to desegregate “with all deliberate speed”; this elusive phrase proved troublesome. It was introduced by Court wordsmith Oliver Wendell Holmes Jr. in the 1911 decision Virginia v. West Virginia and it was used in Brown II at the behest of Felix Frankfurter, that champion of judicial restraint. The decision was 9-0, as it was in all the Warren Court’s desegregation cases, something that Warren considered most important politically.
With Brown II and other cases, the Court ordered states and towns to carry out its orders. This kind of activism is patently inconsistent with the logic behind Marbury v. Madison, where John Marshall declared that the Court could not order anything that was not a power explicitly given to it in the Constitution, not even something spelled out in an act of Congress. No “foolish consistency” to worry about here.
However, school desegregation hit many obstacles. The resistance was so furious that Prince Edward County in Virginia actually closed its schools down for 5 years to counter the Court’s order; in Northern cities like Boston, enforced busing led to rioting; Chris Rock “jokingly” recounts that in Brooklyn NY he was bused to a neighborhood poorer than the one he lived in – and he was beaten up every day to boot.
The most notorious attempt to forestall the desegregation ruling took place in Little Rock, AR in September 1957. Nine (outstanding) African-American students had been chosen to enroll in previously all-white Central High School. The governor, Orval Faubus, actually deployed National Guard troops to assist segregationists in their effort to prevent these students from attending school. President Dwight D. Eisenhower reacted firmly; the Arkansas National Guard was federalized and taken out of the governor’s control and the elite 101st Airborne Division of the U.S. Army (the “Screaming Eagles”) was sent to escort the nine students to class, all covered on national television.
Segregationist resistance did not stop there: among other things, the Little Rock schools were closed for the 1958-59 school year in a failed attempt to turn city schools into private schools and this “Lost Year” was blamed on African-American students. It was ugly.
The stirring Civil Rights movement of the 1950s and 1960s fought for racial equality on many fronts. It spawned organizations and leaders like SNCC (Stokely Carmichael), CORE (Roy Innis) and SCLC (Martin Luther King Jr.) and it spawned activists like Rosa Parks, John Lewis, Michael Schwerner, James Chaney and Andrew Goodman. The price was steep; people were beaten and people were murdered.
The President and Congress were forced to react and enacted the Civil Rights Act of 1964 and the Voting Rights Act of 1965. The latter, in particular, had enforcement provisions which Supreme Court decisions like Guinn had lacked. This legislation reportedly led Lyndon Johnson to predict that the once Solid South would be lost to the Democratic Party. Indeed today, the New South is composed of “deep red” states. Ironically, it was the Civil Rights movement that made the prosperous New South possible – with segregation, companies (both domestic and international) wouldn’t relocate there or expand operations there; with segregation, the impressive Metro system MARTA of Atlanta could never have been possible; with segregation, a modern consumer economy cannot function; with segregation, Alabama wouldn’t be the reigning national football champion – and college football is a big, big business.
Predictably, there was a severe backlash against the new legislation and already in 1964 two expedited challenges reached the Warren Court, Heart of Atlanta Motel v. United States and Katzenbach v. McClung. Both rulings were in favor of the Civil Rights Act by means of 9-0 decisions. Interestingly, in both cases the Court invoked the Commerce Clause of the Constitution rather than the 13th and 14th amendments, basing the decision on the authority of the federal government to regulate interstate commerce rather than on civil liberties; experts warn that this could make these decisions vulnerable in the future.
The period of slavery followed by the period of segregation and Jim Crow laws lasted 346 years from 1619 to 1965. Until 1776, this repression was enforced by the English Crown and Parliament, then until the Civil War by the Articles of Confederation and the U.S. Constitution; and then until 1965 by state governments and the Supreme Court. During this time, there was massive wealth accumulation by white America, drawn in no small measure from the profits of slave labor and later the Jim Crow economy. Great universities such as the University of Virginia, Duke and Clemson owe their existence to fortunes gained through this exploitation. Recently, it was revealed that Georgetown University profited from significant slave sales in Maryland to finance its operations. In the North too, the profits from selling factory product to the slave states, to say nothing of the slave trade itself, contributed to the endowments of the great universities of the northeast. Indeed, Columbia, Brown and Harvard have publicly recognized their ties to slavery and the slave trade.  On the other hand, Europeans who arrived in the U.S. in the waves of immigration following the Civil War and their descendants were able, in large numbers, to accumulate capital and accede to home ownership and eventually to higher education. Black America was simply denied this opportunity for those 346 years and today the level of black family wealth is still appallingly low – to cite a Washington Post article of Sept. 28, 2017: “The median net worth of whites remains nearly 10 times the size of blacks’. Nearly 1 in 5 black families have zero or negative net worth — twice the rate of white families.”
It is hard to imagine how this historical injustice can ever be righted. The Supreme Court has played a nefarious role in all this from the Marshall Court’s assiduous defense of the property rights of slave owners (Scott v. London (1806), etc.) to Dred Scott to Plessy, weakening the 14th and 15th amendments en passant, enabling Jim Crow and creating the world of “Separate But Equal.” Earl Warren’s leadership was needed in the period following the Civil War but alas that is not what happened.
In addition to these celebrated civil rights cases, the Warren Court also had to take on suits involving separation of Church and State and involving protection of the individual citizen from the awesome power of the State, the very thing that made the Bill of Rights necessary. More to come. Affaire à suivre.