The Declaration – U.S. Scripture I

There are three founding texts for Americans, texts treated like sacred scripture. The first is the Declaration of Independence, a stirring document both political and philosophical; in schools and elsewhere, it is read and recited with religious spirit. The second is the Articles of Confederation, under which the Second Continental Congress established a government. Despite the new country’s success in waging the Revolutionary War and in reaching an advantageous peace treaty with the British Empire, this document is not venerated by Americans in quite the same way, because the form of government it created was superseded after thirteen years by that of the third founding text, the Constitution. These three texts play the role of secular scripture in the United States; the Constitution in particular, although only four pages long without amendments, is truly revered and quoted like “chapter and verse.”

Athena, the Greek goddess, is said to have sprung full grown from the head of Zeus (and in full armor, to boot); did these founding texts just emerge from the heads and pens of the founding fathers? In particular, there is the Declaration of Independence. Did it have a precursor? Was it part of the spirit of the times, of the Zeitgeist? Mystère.

Though still under British rule in 1775 when hostilities broke out at Lexington and Concord, the colonies had had more than a century and a half of self-government at the local level: they elected lawmakers, named judges, ran courts and collected taxes. Forward-looking government took root early in the colonies. Already in 1619, in Virginia, the House of Burgesses was set up, the first legislative assembly of elected representatives in North America; in 1620, the Pilgrims drew up the Mayflower Compact before even landing; in 1683, the colonial assembly in New York passed the Charter of Liberties. A peculiar matrix evolved where there was slavery and indentured servitude on the one hand and progress in civil liberties on the other (one example: the 1735 trial of John Peter Zenger in New York City and the establishment of freedom of the press).

In fact, the period from 1690 till 1763 is known as the period of “salutary neglect,” when the British pretty much left the colonies to fend for themselves – the phrase “salutary neglect” was coined by the British parliamentarian Edmund Burke, “the father of modern conservatism.” Salutary neglect was abandoned at the end of the French and Indian War (aka the Seven Years’ War), a “glorious victory” for the British but one that left them with large war debts; their idea was to have the Americans “pay their fair share.”

During the run-up to the French and Indian War, at the Albany Congress in 1754, Benjamin Franklin proposed the Albany Plan, an early attempt to unify the colonies “under one government as far as might be necessary for defense and other general important purposes.” The main thrust was mutual defense and, of course, it would all be done under the authority of the British crown.

The Albany Plan was influenced by the Iroquois’ Great Law of Peace, a compact that long predated the arrival of Europeans in the Americas; this compact is also known as the Iroquois Constitution. This constitution provided the political basis for the Haudenosaunee, aka the Iroquois Confederation, a confederacy of six major tribes. The system was federal in nature and left each tribe largely responsible for its own affairs. Theirs was a very egalitarian society and for matters of group interest such as the common defense, a council of chiefs (who were designated by senior women of their clans) had to reach a consensus. The Iroquois were the dominant Indian group in the northeast and stayed unified in their dealings with the French, British and Americans. In a letter to a colleague in 1751, Benjamin Franklin acknowledged his debt to the Iroquois with this amazing admixture of respect and condescension:

“It would be a strange thing if six nations of ignorant savages should be capable of forming such a union, and yet it has subsisted for ages and appears indissolvable, and yet a like union should be impractical for 10 or a dozen English colonies.”

The Iroquois Constitution was also the subject of a groundbreaking ethnographic monograph. In 1724, the French Jesuit missionary Joseph-François Lafitau published a treatise on Iroquois society, Mœurs des sauvages amériquains comparées aux mœurs des premiers temps (Customs of the American Indians Compared with the Customs of Primitive Times), in which he describes the workings of the Iroquois system and compares it to the political systems of the ancient world in an attempt to establish a commonality shared by all human societies. Lafitau admired this egalitarian society where each Iroquois, he observed, views “others as masters of their own actions and of themselves” and each Iroquois lets others “conduct themselves as they wish and judges only himself.”

The pioneering American anthropologist Lewis Henry Morgan, who studied Iroquois society in depth, was also impressed with the democratic nature of their way of life, writing “Their whole civil policy was averse to the concentration of power in the hands of any single individual.” In turn, Morgan had a very strong influence on Friedrich Engels’s The Origin of the Family, Private Property, and the State: in the Light of the Researches of Lewis H. Morgan (1884). Apropos of the Iroquois Constitution, Engels (using “gentile” in its root Latin sense of “tribal”) waxed lyrical and exclaimed “This gentile constitution is wonderful.” Engels’s work, written after Karl Marx’s death, took as its starting point Marx’s notes on Morgan’s treatise Ancient Society (1877).

All of these European writers thought that Iroquois society was an intermediate stage in a progression subject to certain laws of sociology, a progression toward a society and way of life like their own. Of course, Marx and Engels did not think things would stop there.

At the end of the French and Indian War, the British prevented the colonists, who numbered around 2 million at this point, from pushing west over the Appalachians and Alleghenies with the Royal Proclamation of 1763; indeed, His Majesty George III proclaimed (using the royal “our”) that this interdiction was to apply “for the present, and until our further pleasure be known.” This proclamation was designed to pacify French settlers and traders in the area and to keep the peace with Native American tribes, in particular the Iroquois Confederation who, unlike nearly all other tribes, did side with the British in the French and Indian War. It particularly infuriated land investors such as Patrick Henry and George Washington – the latter, a surveyor by trade, founded the Mississippi Land Company in 1763 just before the Proclamation with the expectation of profits from investments in the Ohio River Valley, an expectation dashed by the Proclamation. Designs by Virginians on this region were not surprising given Virginia’s purported westward reach at that time (for a map, click HERE); even today, the Ohio River is the western boundary of West Virginia. Washington recovered financially and at the time of his death was a very wealthy man.

Though the Royal Proclamation was flouted by colonists who continued to migrate west, it was the first in the series of proclamations and acts that finally outraged the colonists to the point of armed rebellion. Interestingly, in Canada the Royal Proclamation still forms the legal basis for the land rights of indigenous peoples. Doubtless, this has worked out better for both parties than the Discovery Doctrine, which has been in force in the U.S. since independence – for details, click HERE.

The Proclamation was soon followed by the hated Stamp Act, which was a tax directly levied by the British government on colonists, as opposed to a tax coming from the governing body of a colony. This led to the Stamp Act Congress which was held in New York City in October 1765. It was attended by representatives from 9 colonies and famously published its Declaration of Rights and Grievances which included the key point “There should be no taxation without representation.” This rallying cry of the Americans goes back to the English Bill of Rights of 1689 which asserts that taxes can only be enacted by elected representatives: “levying taxes without grant of Parliament is illegal.”

Rumblings of discontent continued as the British Parliament and King continued to alienate the colonists.

In the spring of 1774, Parliament abrogated the Massachusetts Charter of 1691, which gave people a considerable say in their government. In September, things boiled over not in Boston, but in the humble town of Worcester: militiamen took over the courts, and in October the Town Meeting declared independence from Britain. The Committees of Correspondence assumed authority. (For a most useful guide to the pronunciation of “Worcester” and other Massachusetts place names, click HERE. By the way, the Worcester Art Museum is outstanding.)

From there, things moved quickly and not so quickly. While the push for independence was well advanced in Massachusetts, the delegates to the First Continental Congress in the fall of 1774 were not prepared to take that bold step: in a letter John Adams wrote “Absolute Independency … Startle[s] People here.”  Most delegates attending the Philadelphia gathering, he warned, were horrified by “The Proposal of Setting up a new Form of Government of our own.”

But acts of insurrection continued. For example, in December 1774 in New Hampshire, activists raided Fort William and Mary, seizing powder and weaponry. Things escalated leading to an outright battle at Lexington and Concord in April, 1775.

The Second Continental Congress had convened in Philadelphia on May 10, 1775; by the spring of 1776, its delegates were ready to move in the direction of independence. In parallel, on June 12, 1776 at Williamsburg, the Virginia Constitutional Convention adopted the Virginia Declaration of Rights, which called for a break from the Crown and which famously begins with

Section 1. That all men are by nature equally free and independent and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.

This document was authored principally by George Mason, a planter and friend of George Washington. Meanwhile, back in Philadelphia, Thomas Jefferson (who would have been familiar with the Virginia Declaration) was charged by a committee with the task of putting together a statement presenting the views of the Second Continental Congress on the need for independence from the British. After some edits by Franklin and others, the committee brought forth the founding American document, the Declaration of Independence, whose second paragraph begins with that resounding universalist sentence:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

And it continues:

Governments are instituted among Men, deriving their just powers from the consent of the governed

The ideas expressed in the Virginia Declaration of Rights and the Declaration of Independence are radical, and they had to have legitimacy. So we are back to the original mystère: where did that legitimacy come from?

Clearly, phrases like “all men are by nature equally free and independent” and “all men are created equal” reverberate with an Iroquois, New World sensibility. Scholars also see here the hand of John Locke, most literally in the use of Locke’s  phrase “the pursuit of Happiness”; they also see the influence of Locke in the phrase “consent of the governed” – the idea of popular sovereignty being a concept closely associated with social contract philosophers of the European Enlightenment such as Locke and Rousseau.

Others also see the influence of the Scottish Enlightenment, that intellectual flowering with giants like David Hume and Adam Smith. The thinkers whose work most directly influenced the Americans include Thomas Reid (“self-evident” is drawn from the writings of this founder of the Common Sense School of Scottish Philosophy) and Francis Hutcheson (“unalienable rights” is drawn from his work on natural rights – Hutcheson is also known for his impact on his student Adam Smith, who later held Hutcheson’s Chair of Moral Philosophy at Glasgow). Still others hear echoes of Thomas Paine and the recently published Common Sense, that most fervent call for independence.

Jefferson would have been familiar with the Scottish Enlightenment; he also would have read the work of Locke and, of course, Paine’s pamphlet. The same most probably applies to George Mason as well. In any case, the Declaration of Independence went through multiple drafts, was read and edited by others of the Second Continental Congress and eventually was approved by the Congress by vote; so it must also have reflected a generally shared sense of political justice.

On July 2, 1776, the Second Continental Congress did take that bold step and voted in favor of Richard Henry Lee’s motion for independence from Great Britain. On July 4th, it officially adopted the Declaration of Independence.

Having declared independence from the British Crown, the representatives at the Second Continental Congress now had to come up with a scheme to unify a collection of fiercely independent political entities. Where could they have turned for inspiration and example in a political landscape dominated by powerful monarchies? Mystère. More to come.


The Rule of St. Benedict goes back to the beginning of the Dark Ages. Born in central Italy at the end of the 5th century, as the Western Roman Empire was collapsing, Benedict is known as the Father of Western Monasticism; he laid down a social system for male religious communities that was the cornerstone of monastic life in Western Europe for a thousand years and one that endures to this day. It was a way of life built around prayer, manual work, reading and, of course, submission to authority. The Benedictine abbey was led by an abbot; under him were the priests, who copied manuscripts and sang Gregorian chant; then came the brothers, who bore the brunt of much of the physical work, and the lay people, who bore as much or more. The Rule of St. Benedict is followed not only by the Benedictines, the order he founded, but also by the Cistercians, the Trappists and others.

The monasteries are credited with preserving Western Civilization during the Dark Ages; copying manuscripts led to the marvelous decorative calligraphy of the Book of Kells and other masterpieces – the monks even introduced the symbol @ (the arobase, aka the “at sign”) to abbreviate the Latin preposition ad. They are also credited with sending out those fearless missionaries who brought literacy and Christianity to pagan Northern tribes, bringing new people into the orbit of Rome: among them were Winfrid (aka Boniface) from Wessex, who chopped down sacred oak trees to proselytize German tribes, and Willibrord from Northumbria, who braved the North Sea to convert the fearsome Frisians, destroying pagan sanctuaries and temples in the process.

The monasteries also accumulated vast land holdings (sometimes as donations from aging aristocrats more concerned with making a deal with God than with the future of their children and heirs). With land and discipline came wealth, and monasteries became the target of choice for Viking marauders. At the time of the Protestant Reformation, the monasteries fell victim to iconoclasts and plundering potentates. Henry VIII dissolved the Cistercian abbey at Rievaulx in Yorkshire and other sites, procuring jewelry for Anne Boleyn in the process. It was this kind of thing that provoked the crypto-Catholic poet Shakespeare to lament about the

Bare ruin’d choirs, where late the sweet birds sang

Even today, though in ruins, Rievaulx is magnificent as a visit to Yorkshire or a click HERE will confirm. It can also be noted that the monks at Rievaulx abandoned the Rule of St. Benedict in the 15th century and became rather materialistic; this probably made them all the more tempting a target for Henry VIII.

We also owe some great beers to the monks. Today, two of the most sought-after Belgian beers in the world are Trappiste and Abbaye de Leffe. The city of Munich derives its name from the Benedictine monastery that stood in the center of town. Today’s Paulaner and Augustiner beers trace back to monasteries. But what was it that turned other-worldly monks into master brewers? Mystère.

The likely story is that it was the Lenten fast that drove the monks to secure a form of nourishment acceptable to the prying papal legates. Theirs was a liquid-only fast from Ash Wednesday through Holy Saturday, Sundays included. The beers they crafted passed muster as non-solid food and were rich in nutrients. There are also stories that the strong brews were considered undrinkable by Italian emissaries from Rome and so were authorized to be drunk during Lent as additional mortification of the flesh.

Indeed, the followers of the Rule of St. Benedict didn’t stop there. We also owe bubbly to the Benedictines. The first documented sparkling wine was made in 1531 at the Benedictine Abbey of St. Hilaire in the town of Limoux in the south of France – close to the Mediterranean, between Carcassonne and Perpignan – a mere 12 miles from Rennes-le-Château (of Da Vinci Code fame). Though wine-making went back centuries, and occasionally a wine would have a certain effervescence, these churchmen discovered something new. What was their secret? Mystère.

These Benedictine monks were the first to come upon the key idea for bubbly – a second fermentation in the bottle. Their approach is what is now called the “ancestral method”: first make a white wine in a vat or barrel (“cuve” in French), interrupt the vinification process, put the wine in bottles with added yeast, grape sugar or some alcohol, cork it and let it go through a second fermentation in the (hopefully strong and well-sealed) bottle. This is the technique used for the traditional Blanquette de Limoux; it is also used for the Clairette de Die. The classic Blanquette de Limoux was not strong in alcohol, around 6–8%, and it was rather doux, not brut. Today’s product comes in brut and doux versions and is 12.5% alcohol.

By the way, St. Hilaire himself was not a monk but rather a 4th century bishop and defender of the orthodoxy of the Nicene Creed (the one intoned in the Catholic and Anglican masses and other services); he is known as the “Athanasius of the West” which puts him in the company of a man with a creed all of his own – the Athanasian Creed forcefully affirms the doctrine of the Triune God and is read on Trinity Sunday.

The original ancestral method from Limoux made for a pleasant quaff, but not for the bubbly of today. Did the Benedictines come to the rescue once again? What devilish tricks did they come up with next? Or was it done by some other actors entirely? Mystère.

This time the breakthrough to modern bubbly took place in the Champagne region of France. This region is at the point where Northern and Southern Europe meet. In the late Middle Ages, it was one of the more prosperous places in Europe, a commercial crossroads with important fairs at cities like Troyes and Rheims. The cathedral at Rheims is one of the most stunning in Europe, and the French kings were crowned there from Louis the Pious in 816 to Charles X in 1825. In fact, Rheims has been a city known to the English-speaking world for so long that its name in French (Reims) has diverged from its older spelling, which we still use in English. It is also where English Catholics in the Elizabethan and Jacobean periods published the Douay-Rheims translation of the Latin Vulgate.

Enter Pierre Pérignon, son of a prosperous bourgeois family, who in 1668 joined the Benedictine monastery at St. Pierre de Hautvillers. The order by that time deigned to accept non-noble commoners as monks and would dub them dom, a title drawn from the Latin word for lord, dominus, to make up for their lack of a traditional aristocratic handle.

Dom Pérignon was put in charge of wine making and wine storage, both critical to the survival of the monastery, which had fallen on hard times, with only a few monks left and things in a sorry state. The way the story is told in France, it was during a pilgrimage south and a stay with fellow Benedictines at the monastery of St. Hilaire in Limoux that he learned the ancestral method of making sparkling wine. However he learned of their techniques, he spent the rest of his life developing and perfecting the idea. By the time of his death in 1715, champagne had become the preferred wine at the court of Louis XIV and the wine of choice of the fashionable rich in London. Technically, Dom Pérignon was a master at choosing the right grapes to blend to make the initial wine. He abandoned the ancestral technique of interrupting the fermentation in the cuve, or vat, and let the wine complete its fermentation. Then, to deal with the problem of dregs caused by the second fermentation in the bottle, he developed the elaborate practice of rotating each bottle by a quarter turn, a step repeated every day for two or more months and known as the remuage (click HERE for an illustration); it is said that professionals can do 40,000 bottles a day.

To all that, one must add that he found a solution to the “exploding bottle problem”: as the pressure of the CO2 that creates the bubbles builds up during the fermentation in the bottle, bottles can explode spontaneously and even set off a chain reaction. To deal with this, Dom Pérignon turned to bottle makers in London who could make bottles that could withstand the build-up of all that pressure. The indentation in the bottom of the bottle (the punt or kick-up in English, cul in French) was also modified, and better corks from Portugal entered into it as well.

Putting all this together yielded the champagne method. Naturally, wine makers in other parts of France have applied this process to their own wines, making for some excellent bubbly; these wines used to carry the label “Méthode Champenoise” or “Champagne Method.” While protection of the term Champagne itself is even included in the Treaty of Versailles, more recently a restriction was added to the effect that only wines from the Champagne region could even be labeled “Champagne Method.” So the other wines produced this way are now called crémants (a generic term for sparkling wine). Thus we have the Crémant d’Alsace, the Crémant de Bourgogne, the Crémant de Touraine and even the Crémant de Limoux. All in all, these crémants constitute the best value in French wines on the market. Other countries have followed the French lead and do not use the label “Champagne” or “Champagne Method” for sparkling wines; even the USA now follows the international protocol (although wines labeled Champagne prior to 2006 are exempt).

Admittedly, champagne is a marvelous beverage. It has great range and goes well with steak and with oysters. The sound of the pop of a champagne cork means that the party is about to begin. Of course, one key thing is that champagne provides a very nice “high,” and it does that without resorting to high levels of alcohol. Drolly, at wine tastings and similar events, the quality of the “high” provided by the wine is never discussed directly; instead people skirt around it, talking about legs and color and character and what not, while the key thing is the quality of the “high” and the percentage of alcohol in the wine. So how do you discuss this point in French itself? Mystère. In effect, there is no way to deal with all this in French, la langue de Voltaire. The closest you can come to “high” is “ivresse,” but that has a negative connotation; you might force the situation and try something like “douce ivresse,” but that doesn’t work either. ’Tis a mystère without a solution, then. But it is the main difference between a $60 bottle of Burgundy and a $15 bottle of Pinot Noir – do the experiment.

There are some revisionist historians who try to diminish the importance of Dom Pérignon in all this. But he has been elevated to star status as an icon for the Champagne industry. As an example, click HERE for a statue in his honor erected by Moët-Chandon; click HERE for an example of an advertisement they actually use to further sales.

So the secret to both the ancestral method and the champagne method is that second fermentation in the bottle. But now we have popular alternatives to champagnes and crémants in the form of Prosecco and Cava. If these very sparkling wines are not made with the champagne method, how are they made? Mystère.

In fact, the method used for these effervescent wines does not require that second fermentation in the bottle, a simplification made possible by the development of large pressurized tanks (today made of stainless steel). This newer method carries out the secondary fermentation in closed tanks kept under pressure. This simplifies the entire production process and makes the bubbly much less expensive to make. The process was first invented by Federico Martinotti in Italy, then improved by Eugène Charmat in France – all this in the 1890’s. So it is known as the metodo italiano in Italy and the Charmat method almost everywhere else. In the 1960’s things were improved further to allow for less doux, more brut wines. These wines are now extremely popular and considered a good value by the general wine-loving public.

Finally, there is the simplest method of all to make sparkling wine or cider – inject carbon dioxide directly into the finished wine or cider, much like making seltzer water from plain water with a SodaStream device. In fact, this is the method used for commercial ciders. Please don’t try this at home with a California chardonnay if you don’t have strong bottles, corks and protective wear.

Indeed, beer and wine have played an important role in Western Civilization; wine is central to rituals of Judaism and Christianity; the ancient Greeks and Romans even had a god of wine. In fact, beer and wine go back to the earliest stages of civilization. When hunter-gatherers metamorphosed into farmers during the Neolithic Revolution and began civilization as we know it, they traded a lifestyle in which they were taller, healthier and longer-lived for one in which they were shorter, less healthy, had a shorter life span and had to endure the hierarchy of a social structure with priests, nobles, chiefs and later kings. On the other hand, archaeological evidence often points to the fact that one of the first things these farmers did was to make beer by fermenting grains. Though the PhD thesis hasn’t been written yet, it is perhaps not unsafe to conclude that fermentation made this transition bearable and might even have been the start of it.




North America III

When conditions allowed, humans migrated across the Bering Land Bridge moving from Eurasia to North America thousands of years before the voyages of discovery of Columbus and other European navigators. That raises the question whether there were Native Americans who encountered Europeans before Columbus. If so, were these encounters of the first, second or third kind? Mystère.

For movement west by Europeans before Columbus, first we have the Irish legend of the voyage of St. Brendan the Navigator. Brendan and his 16 mates (St. Malo among them) sailed in a currach – an Irish fisherman’s boat with a wooden frame over which are stretched animal skins (nowadays they use canvas and sometimes anglicize the name to curragh). They reportedly reached lands far to the West, even Iceland and Newfoundland in the years 512-530 A.D. All this was presented as fact by nuns in parochial schools of yore, but, begorrah, there is archaeological evidence of the presence of Irish visitors on Iceland before any Viking settlements there. Moreover, in the 1970’s the voyage of St. Brendan was reproduced by the adventurer Tim Severin and his crew, which led to a best-selling book The Brendan Voyage, lending further credence to the nuns’ version of history. However, there is no account in the legends of any contact with new people; for contact with a mermaid, click HERE.

In the late 9th century, Norsemen accompanied by (not so willing) Celtic women reached Iceland and established a settlement there (conventionally dated as of 874 A.D.). Out of Iceland came the Vinland Saga of the adventures of Leif Erikson (who converted to Christianity and so gained favor with the later saga compilers) and of the fearless Freydis Eriksdottir (who also led voyages to Vinland but who stayed true to her pagan roots). It has been established that the Norse did indeed reach Newfoundland and did indeed attempt to found a colony there; the site at L’Anse aux Meadows has yielded abundant archaeological evidence of this – all taking place around 1000 A.D. The Norse of the sagas called the indigenous people they encountered in Vinland the Skraeling (which can be translated as “wretched people”). These people were not easily intimidated; there were skirmishes and more between the Skraeling (who did have bows and arrows) and the Norse (who did not have guns). In one episode, Freydis grabs a fallen Viking’s sword and drives the Skraeling attackers off on her own – click HERE for a portrait of Freydis holding the sword to her breast.

Generally speaking, the war-like nature of the Skraeling is credited with keeping the Vikings from establishing a permanent beachhead in North America. So these were the first Native Americans to encounter Europeans, solving one mystery. But exactly which Native American group were the Skraeling? What is their migration story? Again mystère.

The proto-Inuit or Thule people, ancestors of the modern Inuit, emerged in Alaska around 1000 B.C. Led by their sled dogs, they “quickly” made their way east across Arctic Canada, then down into Labrador and Newfoundland. The proto-Inuit even made their way across Baffin Bay to Greenland. What evidence there is supports the idea that these people were the fierce Skraeling of the sagas.

As part of the movement west, according to the sagas, Norse settlers came to Greenland around 1000 A.D., led by the notorious Erik the Red, father of both Leif and Freydis. Settlers from Norway as well as Iceland joined the Greenland colony, and it became a full-blown member of medieval Christendom, what with churches, a monastery, a convent and a bishop. Later, around 1200 A.D., the proto-Inuit reached far enough south in Greenland to encounter the Europeans there. Score one more for these intrepid people. These are the only two pre-Columbian encounters between Europeans and Native Americans that are well established.

In the end, though, the climate change of the Little Ice Age (which began around 1300 A.D.) and the Europeans’ impact on the environment proved too much and the Greenland colony died out sometime in the 1400’s. The proto-Inuit population with their sled dogs stayed the course (though not without difficulty) and survived. As a further example of the impact of the climate on Western Civilization, the Little Ice Age practically put an end to wine making in the British Isles; the crafty and thirsty Scots then made alcohol from grains such as barley and Scotch whisky was born.

The success of the proto-Inuit in the Arctic regions was based largely on their skill with the magnificent sled dogs that their ancestors had brought with them from Asia. The same can be said for the Mahlemiut, an Inupiat people of Alaska, and their Alaskan Malamute dog. Both these wolf-like marvels make one want to read or reread The Call of the Wild; for a Malamute picture, click HERE.

We know the Clovis people and other populations in the U.S. and Mexico also had dogs that had come from Siberia. But today in the U.S. we are overwhelmed with dogs of European origin – the English Sheepdog, the Portuguese Water Dog, the Scottish Terrier, the Irish Setter, the French Poodle, the Cocker Spaniel, and on and on. What happened to the other dogs of the Native Americans? Are they still around? Mystère.

The simple answer is that, for the most part, in the region south of the Arctic, the native dogs were simply replaced by the European dogs. However, in the lower 48, it has recently been established that the Carolina Dog, a free-ranging dingo-like dog, is Asian in origin. In Mexico, the Chihuahua is the only surviving Asian breed; the Xoloitzcuintli (aka the Mexican Hairless Dog) is thought to be a hybrid of an original Asian dog and a European breed. (Some additional Asian breeds have survived in South America.)

Still, it is surprising that the native North Americans south of the Arctic switched over so quickly and so completely to the new dogs. A first question that comes to mind is whether these two kinds of dogs were of the same species; this question can’t really be answered since even dogs and wolves are hard to distinguish – they can still interbreed and their DNA split goes back at most 30,000 years. A more tractable formulation is whether the Asian dogs and the European dogs are the products of the same domestication event or of different domestication events. However, “it’s complicated.” One sure thing we know comes from the marvelous cave at Chauvet Pont d’Arc in southern France, where footprints of a young child walking side by side with a dog or wolf have been found that date back some 26,000 years. This would point to a European domestication event; however, genetic evidence tends to support an Asian origin for the dog. Yet another theory, backed by logic and some evidence, is that both events took place but that the Asian dogs subsequently replaced the original European dogs in Western Eurasia.

For one thing, the dogs that came with the Native Americans from Siberia were much closer to the dogs of the original Asian self-domestication that took place in China or Mongolia (according to PBS). In any case, they would not have gone through as intense and specialized a breeding process as dogs in populous Europe did, a process that made the European dogs more useful to humans south of the Arctic and more compatible with domestic animals such as chickens, horses and sheep. Until the arrival of Europeans, the Native Americans did not have domestic animals and did not have resistance to the poxes associated with them.

The role of dogs in the lives of people grows ever more important and dogs continue to take on new work roles – service dogs, search and rescue dogs, guide dogs, etc. People with dogs reportedly get more exercise, get real emotional support from their pets and live longer. And North America has given the world the wonderful Labrador Retriever. The “Lab” is a descendant of the St John’s Water Dog, which was bred in the Province of Newfoundland and Labrador; this is the only Canadian province to bear a Portuguese name – most probably for the explorer João Fernandes Lavrador who claimed the area for the King of Portugal in 1499, an area purportedly already well known to Portuguese, Basque and other fearless fishermen who were trolling the Grand Banks before Columbus (but we have no evidence of encounters of such fishermen with the Native Americans). At one point later in its development, the Lab was bred in Scotland by the Duke of Buccleuch, whose wife the Duchess was Mistress of the Robes for Queen Victoria (played by Diana Rigg in the PBS series Victoria – for the actual duchess, click HERE). From Labrador and Newfoundland, the Lab has spread all over North America and is now the most popular dog breed in the U.S. and Canada – and in the U.K. as well.

North America II

At various times in history, the continents of Eurasia and North America have been connected by the Bering Land Bridge which is formed when water levels recede and the Bering Sea is filled in (click HERE for a dynamic map that shows changes in sea level over the last 21,000 years).

When conditions allowed, humans (with their dogs) migrated across the Bering Land Bridge moving from Eurasia to North America. It is not certain exactly when this human migration began and when it ended but a typical estimated range is from 20,000 years ago to 10,000 years ago. It has also been put forth that some of these people resorted to boats to ferry them across this challenging, changing area.

DNA analysis has verified that these new arrivals came from Siberia. It has also refuted Thor Heyerdahl’s “Kon-Tiki hypothesis” that Polynesia was settled by rafters from Peru – Polynesian DNA and American DNA do not overlap at all. On the other hand, there is DNA evidence that some of the trekkers from Siberia had cousins who set off in the opposite direction and became aboriginal peoples of New Guinea and of Australia!

The Los Angeles area is famous for its many attractions. Among them is the site of the La Brea Tar Pits; click HERE for a classic image. This is the resting place of countless animals who were sucked into the primeval tar ooze here over a period of thousands and thousands of years. What is most striking is that so many of them have gone extinct, especially large animals such as the camel, the horse, the dire wolf, the giant sloth, the American lion, the sabre-toothed tiger, … . In fact, with the exception of the jaguar, the musk ox, the moose, the caribou and the bison, all the large mammals of North America had disappeared by 8,000 years ago. Humans arrived in North America not so many years before, and we know they were successful hunters of large mammals in Eurasia. So, as in radio days, the $64 question is: did humans cause the extinction of these magnificent animals in North America? Mystère.

The last Ice Age lasted from 110,000 years ago to 11,700 years ago (approximately, of course). During this period, glaciers covered Canada, Greenland and states from Montana to Massachusetts. In fact, the Bering Land Bridge was created when sea levels dropped because of the massive accumulation of frozen water locked in the glaciers. The glaciers didn’t just sit still; rather they moved south, receded to the north and repeated the cycle multiple times. In the process, they created the Great Lakes, flattened Long Island, and amassed that mound of rubble known as the Catskill Mountains – technically the Catskills are not mountains but rather a mass of monadnocks, unlike the Rockies and Appalachians, which are true mountains created by mighty tectonic forces that thrust them upward. In any case, south of the glaciers, North America was home to many large mammal species.

As the Ice Age ended, much of North America was populated by the hunting group known as the Clovis people. Sites with their characteristic artifacts have been excavated all over the lower 48 and northern Mexico. For a picture of their characteristic spear point, click HERE. The term Clovis refers to the town of Clovis NM, scene of the first archaeological dig that uncovered this culture; luckily this dig preceded any work at the nearby town of Truth or Consequences NM.

The Clovis people, who dominated North America south of the Arctic at the time of these mass extinctions, were indeed hunters of large and small mammals and that has given rise to the “Clovis overkill hypothesis” – that it was the Clovis people who hunted the horse and other large species to extinction. The hypothesis is supported by the fact that Clovis sites have been found with all kinds of animal remains – mammoths, horses, camels, sloths, tapirs and other species.

For one thing, these animals did not co-evolve with humans and had not honed defensive mechanisms against them – unlike, say, the zebra in Africa (a descendant of the original North American horse), which is aggressive toward humans; the zebra has neither been hunted to extinction nor domesticated. In fact, the sociology of grazing animals such as the horse can work against them when pursued by hunters. The herd provides important defense mechanisms – mobbing, confusing, charging, stampeding, plus the presence of one or more bulls. But since these animals employ a harem system and the herd has many cows per bull, the solitary males, or males in very small groups, who are not part of the herd are natural targets for hunting. Even solitary males who use speed of flight as their first line of defense can fall victim to persistence hunting, a tactic known to have been used by humans – pursuing prey relentlessly until it is immobile with exhaustion: a horse can run the 10 furlongs of the Kentucky Derby faster than a human in any weather, but on a hot day a sweating human can beat a panting horse in a marathon. An example of persistence hunting in more modern times is stag hunting on horseback with yelping dogs – the hunt ends when the stag is exhausted and immobile and the coup de grâce is administered by the Master of the Hunt. As for dogs, they were new to the North American mammals and certainly would have aided the Clovis people in the hunt.

Also, there is a domino effect when an animal is hunted to extinction: the predator animals that depend on it are also in danger. By way of example, it is thought that the sabre-toothed tiger disappeared because its prey, the mammoth, went extinct.

Given all this, how did the bison and caribou survive? In fact, for the bison, it was quite the opposite. At this time, there was a bison population explosion; given that horses and mammoths have the same diet as bison, some scientists postulate that competition with the overly successful bison drove the others out. Another factor is that bison live in huge herds while animals like horses live in small bands. It is theorized that the caribou, who also travel in massive herds, survived by pushing ever further north into the Arctic regions, where ecological conditions were less hostile for them and less hospitable to humans and other predators.

However, all this happened at a dramatic moment, the end of the Ice Age. So warming trends, the end of glaciation and other environmental changes could have contributed to this mass extinction: open spaces were replaced by forests, reducing habitat; heavy coats became a burden; … In fact, the end of the last Ice Age is also the end of the much longer Pleistocene epoch; it was followed by the much warmer Holocene epoch, which is the one we are still in today. So the Ice Age and the movement of the glaciers suddenly ended; this was global warming to a degree that would not be seen again until the present time. The warming that followed the Ice Age would also have changed the ecology of insects, arachnids, viruses et al., with a potentially lethal impact on plant life and on megafauna. Today we are witnessing a crisis among moose caused by the increase of the winter tick population, which is no longer kept in check by cold winters. We are also seeing insects unleashed to attack trees. Along the East Coast, it is the southern pine beetle, which has now reached New England – on its Shermanesque march north, this beetle has destroyed forests, woods, groves, woodlands, copses, thickets and stands of once proud pine trees. It is able to move north because the minimum cold temperature in the Mid-Atlantic states has warmed by about 7 degrees Fahrenheit over the last 50 years. In Montana and other western states, it is the mountain pine beetle and the endless fires that are destroying forests.

Clearly, the rapidly occurring and dramatic transformations at the end of the Ice Age could have disrupted things to the point of causing widespread extinctions – evolution did not have time to adjust.

And then there is an example where the overkill hypothesis is known not to apply. The most recent extinction of mammoths took place on uninhabited Wrangel Island in the Arctic Ocean off the Chukchi Peninsula in Siberia, only 4,000 years ago, and that event is not attributed to humans in any way. The principal causes cited are environmental factors and genetic meltdown – the accumulation of bad traits due to the small size of the breeding population.

In sum, scientists make the case that climate change and environmental factors were the driving forces behind these extinctions and that is the current consensus.

So it seems that the overkill hypothesis is an example of the logical fallacy “post hoc ergo propter hoc” – A happened after B, therefore B caused A. By the way, this fallacy is the title of the 2nd episode of The West Wing, where Martin Sheen as POTUS aggressively invokes it to deconstruct his staff’s analysis of his electoral loss in Texas; in the same episode, Rob Lowe shines in a subplot involving a call girl who is working her way through law school! Still, the circumstantial evidence is there – humans, proven hunters of mammoths and other large fauna, arrive and multiple large mammals disappear.

The situation for the surviving large mammals in North America is mixed at best. The bison are threatened by a genetic bottleneck (a depleted gene pool caused by the Buffalo Bill-era slaughter to make the West safe for railroads), the moose by climate change and tick-borne diseases; the musk ox’s habitat is reduced to Arctic Canada, the polar bear and the caribou have been declared vulnerable species, and the brown bear and its subspecies the grizzly bear also range over habitats that have shrunk. The fear is that human involvement in climate change is moving things along so quickly that future historians will be analyzing the current era in terms of a new overkill hypothesis.


North America I

Once upon a time, there was a single great continent – click HERE – called Pangaea. It formed 335 million years ago, was surrounded by the vast Panthalassic Ocean and only began to break apart about 175 million years ago. North America dislodged itself from Pangaea and started drifting west; this went on until its plate rammed into the Pacific plate and the movement dissipated but not before the Rocky Mountains swelled up and reached great heights.

As the North American piece broke off, it carried flora and fauna with it. But today we know that many land species here did not come on that voyage from Pangaea; even the iconic American bison (now the National Mammal) did not originate here. How did they get to North America? Something of a mystère.

Today the Bering Strait separates Alaska from the Chukchi Peninsula in Russia but, over the millennia, there have been periods when North America and Eurasia were connected by a formation known as the Bering Land Bridge, aka Beringia. Rises and ebbs in sea level due to glaciation, rather than continental drift, are what create the bridge. When the land bridge resurfaces, animals make their way from one continent to the other in both directions.

Among the large mammals who came from Eurasia to North America by means of the Bering Land Bridge were the musk ox (click HERE), the steppe mammoth, the steppe bison (ancestor of our bison), the moose, the cave lion (ancestor of the American lion) and the jaguar. The steppe mammoth and the American lion are extinct today.

Among the large mammals native to North America were the Columbian mammoth, the sabre-toothed tiger, and the dire wolf. All three are now extinct; for an image of the three frolicking together in sunny Southern California, click HERE.

The dire wolf lives on, however, in the TV series Game of Thrones and this wolf is the sigil of the House of Stark; so it must also have migrated from North America to the continent of Westeros – who knew!

Also among the large mammals native to North America are the caribou, the brown bear, the short-faced bear and the tapir. Sadly, the short-faced bear is extinct today, and the tapir now survives only further south.

In school we all learned how the Spanish conquistadors brought horses to the Americas, how the Aztecs and Incas had never seen mounted troops before, and how swiftly their empires fell to much smaller forces as a result. An irony here is that the North American plains were the homeland of the genus Equus, and horses thrived there until some 10,000 years ago. Indeed, the horse and the camel are among the relatively few animals to go from North America to the more competitive Eurasia; these odd-toed and even-toed ungulates prospered there and in Africa – even zebras are descended from the horses that made that crossing. The caribou also crossed to Eurasia, where they are known as reindeer.

Similarly, South America split off from Pangaea and drifted west creating the magnificent range of the Andes Mountains before stopping.

The two New World continents were not connected until volcanic activity and plate tectonics created the Isthmus of Panama by 2.8 million years ago – some say earlier (click HERE). The movement of animals between the American continents via the Isthmus of Panama is called the Great American Interchange. Among the mammals who came from South America to North America was the mighty ground sloth (click HERE).

This sloth is now extinct, but some extant smaller South American mammals such as the cougar, armadillo, porcupine and opossum also made the crossing. The opossum is a marsupial; before the Great American Interchange, there were no marsupials in North America, as there are none in Eurasia or Africa.

The camel, the jaguar, the tapir, the short-faced bear and the dire wolf made their way across the Isthmus of Panama to South America. The camel is the ancestor of today’s llamas, vicuñas, alpacas and guanacos. The jaguar and tapir have found the place to their liking, the short-faced bear has evolved into the spectacled bear but the dire wolf is not found there today; it is unknown if it has survived on the fictional continent of Essos.

The impact of the movement of humans and dogs into North America is a subject that needs extensive treatment and must be left for another day. But one interesting side effect of the arrival of humans has been the movement of flora into and out of North America. So grains such as wheat, barley and rye have been brought here from Europe, the soybean from Asia, etc. In the other direction, pumpkins, pecans, and cranberries have made their way from North America to places all over the planet. Two very popular vegetables that originated here, each with a story of its own, are corn and the sweet potato.

North America gave the world maize. This word came into English in the 1600s from the Spanish maiz which in turn was based on an Arawak word from Haiti. The Europeans all say “maize” but why do Americans call it “corn”? Mystère.

In classical English, corn was a generic term for the locally grown grain – wheat or barley, or rye … . Surely Shakespeare was not thinking of maize when he included this early version of Little Boy Blue in King Lear where Edgar, masquerading as Mad Tom, recites

Sleepest or wakest thou, jolly shepherd?

Thy sheep be in the corn.

And for one blast of thy minikin mouth,

Thy sheep shall take no harm.

So naturally the English colonists called maize “Indian corn” and the “Indian” was eventually dropped for maize in general – although Indian corn is still in widespread use for a maize with multicolored kernels, aka flint corn. If you want to brush up on your Shakespeare, minikin is an archaic word that came into English from the Dutch minneken and that means small.

That other culinary gift of North America whose name has a touch of mystère is the sweet potato. In grocery stores, one sees more tubers labeled yam than one does sweet potato. However, with the rare exception of imported yams, all these are actually varieties of the sweet potato. The yam is a different thing entirely – it is a perennial herbaceous vine, while a sweet potato is in the morning glory family (and is the more nutritious vegetable). The yam is native to Africa and the term yam was probably brought here by enslaved West Africans.

The sweet potato has still another language problem: it was the original vegetable that was called the batata and brought back to Spain early in the 1500’s; later the name was usurped by that Andean spud which was also called a batata, and the original vegetable had to add “sweet” to its name, making it a retronym (like “acoustic guitar”, “landline phone” and “First World War”).


Gold Coin to Bit Coin

The myth of Midas is about the power of money – it magically transforms everything in its path into something denominated by money itself. The Midas story comes from the ancient lands of Phrygia and Lydia, in modern-day western Turkey, close to the island of Lesbos and to the Ionian Greek city of Smyrna, where Homer is said to have been born. It was the land of a people who fought Persians and Egyptians, who had an Indo-European language, and who were pioneering miners and metal workers. It was the land of Croesus, the king who ruled from Sardis, the richest man in the world, whose wealth gave rise to the expression “as rich as Croesus.” Click HERE for a map.
Croesus lived in the 6th century B.C., a century dominated by powerful monarchs such as Cyrus the Great of Persia (3rd blog appearance), Tarquin the Proud of Rome, Nebuchadnezzar II of Babylon, Astyages of the Medes, Zedekiah the King of Judah, the Pharaoh Amasis II of Egypt, and Hamilcar I of Carthage. How could the richest man in the world at that time be a king from a backwater place such as Sardis in Lydia? Mystère.
In the time of Achilles and Agamemnon, in the Eastern Mediterranean as in the rest of the world, goods and services were exchanged by barter. Shells, beads, cacao beans and other tokens were also used to facilitate exchange, in particular to round things out when the goods bartered didn’t quite match up in value.
So here is the simple answer to our mystère: the Lydians introduced coinage to the world, a first in history, and thus invented money and its magic – likely sometime in the 7th century B.C. Technically, the system they created is known as commodity money, one where a valuable commodity such as gold or silver is minted into a standardized form.
Money has two financial functions: exchange (buying and selling) and storage (holding on to your wealth until you are ready to spend it or give it to someone else). So it has to be easy to exchange and it has to hold its value over time. The first Lydian coins were made from electrum, a naturally occurring alloy of gold and silver found in the area; later technological advances meant that by Croesus’ time, coins of pure gold or pure silver could be struck and traded in the marketplace. All this facilitated commerce and led to an economic surge which made Croesus the enduring personification of wealth and riches. Money has since made its way into every nook and cranny of the world, restructuring work and society and creating some of the world’s newest professions along the way; money has evolved dramatically from the era of King Croesus’ gold coin to the present time and Satoshi Nakamoto’s Bitcoin.
The Lydian invention was adopted by the city-states of Greece. Athens and other cities created societies built around the agora, the marketplace, as opposed to the imperial societies organized around the palace with economies based on in-kind tribute – taxes paid in sacks of grain, not coin. These Greek cities issued their own coins and created new forms of political organization such as democratic government. This is a civilization far removed from the warrior world of the Homeric poems. The agora model spread to the Greek cities of Southern Italy, such as Crotone in Calabria, known for Pythagoras and his theorem, and Syracuse in Sicily, renowned for Archimedes and his “eureka moment.” Further north in Rome, coinage was introduced to facilitate trade with Magna Graecia, as the Romans called this region.
The fall of Rome came about in part because the economy collapsed under the weight of maintaining an overextended and bloated military. Western Europe then fell into feudalism, which ended only with the growth of towns and cities in the late Middle Ages: new trading networks arose such as the Hanseatic League and the fairs of the Champagne region (this is before the invention of bubbly), and the revival of banking in Italy fueled the Renaissance. The banks used bills of exchange backed by gold. This made the bill of exchange a kind of paper money, but one that circulated only within the banking community.
Banks make money by charging interest on loans. The Italian banks did not technically charge interest because, back then, charging interest was the sin of usury in the eyes of the Catholic Church – as it still is in Sharia law. Rather, they sidestepped this prohibition with the bills of exchange and took advantage of exchange rates between local currencies – their network could transfer funds from London to Lyons, from Antwerp to Madrid, from Marseilles to Florence, … . The Christian prohibition against interest actually goes back to the Hebrew Bible, but Jewish law itself only prohibits charging interest on loans made to other Jews. Although the reformers Luther and Calvin both condemned charging interest, the Protestant world as well as the Church of Rome eventually made their peace with this practice.
In China too, paper exchange notes appeared during the Tang Dynasty (618 A.D. – 907 A.D.), and news of this wonder was brought back to Europe by Marco Polo; perhaps that is where the Italian bankers got the idea of replacing gold with paper. Interestingly, the Chinese abandoned paper money entirely in the mid-15th century during a bout of inflation and only re-introduced it in recent times.
Paper money that is redeemable for a set quantity of a precious metal is an example of representative currency. In Europe, the first true representative paper currency was introduced in Sweden in 1661 by the Stockholm Bank (Stockholms Banco). Sweden waged major military campaigns during the Thirty Years War, which lasted until 1648, and then went on to a series of wars with Poland, Lithuania, Russia and Denmark that only ended in 1658. The Stockholm bank’s mission was to reset the economy after all these wars, but it failed after a few years simply because it printed much more paper money than it could back up with gold.
The Bank of England was created in 1694 in order to deal with war debts. It issued paper notes for the British pound backed by gold and silver. The British Pound became the world’s leading exchange currency until the disastrous wars of the 20th century.
The trouble with gold and silver is that supply is limited. The Spanish had the greatest supply of gold and silver, and so the Spanish peso coin was the most widespread and the most used. This was especially the case in the American colonies and out in the Pacific. In the English-speaking colonies and ex-colonies, pesos were called “pieces of eight” since a peso was equivalent to 8 bits, the bit being a Spanish and Mexican coin that also circulated widely. The story of the term dollar itself begins in the Joachimsthal region of Bohemia, where the coin called the thaler (Joachim being the father of Mary, thal being German for valley) was first minted in 1518; it was then imitated elsewhere in Europe, including in Holland, where the daler was introduced, and in Scotland, where a similar coin took the name dollar. The daler was the currency of New Amsterdam and was used in the colonies. The Spanish peso, for its part, became known as “the Spanish dollar.” After independence, the dollar became the official currency of the U.S. and dollar coins were minted – paper money (except for the nearly worthless continentals of the Revolutionary War) would not be introduced in the U.S. until the Civil War. En passant, it can be noted that the early U.S. dollar was still thought of as being worth 8 bits, and so “2 bits” became a term for a quarter dollar. (Love those fractions, 2/8 = 1/4.) The Spanish dollar also circulated in the Pacific region, and the dollar became the name of the currencies of Hong Kong, Australia, New Zealand, … .
During the Civil War, the Lincoln government introduced paper currency – the greenbacks – to help finance the war. As the war dragged on and more and more notes were printed, the paper dollar lost value against gold, debasing the currency in the process.
The money supply is limited by the amount of gold a government has in its vaults; this can have the effect of obstructing commerce. In the period leading up to World War I, in order to get more money into the economy, farmers in the American Midwest and West militated for silver to back the dollar at the ratio of 16 ounces of silver to 1 ounce of gold. The Wizard of Oz, written in support of this movement, is an allegory where the Cowardly Lion is really William Jennings Bryan, the Scarecrow is the American farmer, the Tin Man is the American worker, and the Wizard is Mark Hanna, the Republican political kingmaker; Dorothy’s magic slippers are silver in the book, not ruby as in the movie. Unlike in the book, in the real world the free silver movement failed to achieve its objective.
Admittedly, this is a long story – but soldier on, Bitcoin is coming up soon.
Needless to say, World War I had a terrible effect on currencies, especially in Europe where the German mark succumbed to hyper-inflation and money was worth less than the cost of printing it. This would have tragic consequences.
In the U.S. during the Depression, the Roosevelt government made some very strong moves. It split the dollar in two – the domestic dollar and the international dollar; the domestic dollar was disconnected from gold completely, while the dollar for international payments was still backed by gold, but at $35 an ounce rather than the old $20.67, a significant devaluation. A paper currency, like Roosevelt’s domestic dollar, which is not backed by a commodity is called a fiat currency – meaning, in effect, that the currency’s value is declared by the issuer; prices then go up and down according to the supply of the currency and the demand for it. To increase the U.S. Treasury’s supply of gold as part of this financial stratagem, the government ordered the confiscation of all privately held gold (bullion, coin, jewelry, … ) with some small exceptions for collectors and dealers. Today, it is hard to imagine people being called upon to bring in the gold in their possession, have it weighed and then be reimbursed in paper money by the ounce.
Naturally, World War II also wreaked havoc with currencies around the world. The Bretton Woods agreement made the post-war U.S. dollar the currency of reference and other currencies were evaluated vis-à-vis the international (gold backed) U.S. dollar. So even the venerable British Pound became a fiat currency in 1946.
The impact of war on currency was felt once again in 1971 when Richard Nixon, with the U.S. reeling from the cost of the war in Vietnam, disconnected the dollar completely from gold making the almighty dollar a full fiat currency.
Soapbox Moment: The impact of war on a nation and the nation’s money is a recurring theme. Even Croesus lost his kingdom when he was killed in a battle with the forces of Cyrus the Great. Joan Baez and Marlene Dietrich both sang “When will they ever learn?” beautifully in English and even more plaintively in German, but men simply do not learn. Louis XIV said it all on his death bed: the Sun King lamented how he had wrecked the French economy and declared “I loved war too much.” Maybe we should all read or re-read Lysistrata by Aristophanes, the caustic playwright of 5th century Athens.
Since World War II, the world of money has seen many innovations, notably credit cards, electronic bank transfers and the relegation of cash to a somewhat marginal role as the currency of the poor and, alas, the criminal. Coupons and airline miles are examples of another popular form of currency, known as virtual currency; this form of currency has actually been around for a long time – C.W. Post distributed a one-cent coupon with each purchase of Grape Nuts flakes as far back as 1895.
The most recent development in the history of money has been digital currency which is completely detached from coin, paper or even government – its most celebrated implementation being Bitcoin. A bitcoin has no intrinsic value; it is not like gold or silver or even paper notes backed by a precious metal. It is like a fiat currency but one without a central bank to tell us what it is worth. Logically, it should be worthless. But a bitcoin sells for thousands of dollars right now; it trades on markets much like mined gold. Why? Mystère.
A bitcoin’s value is determined by the marketplace: its worth is its value as a medium of exchange and as a store of wealth. Bitcoin has some powerful, innovative features that make it very useful in both roles; its implementation is an impressive technological tour de force.
In 2008, a pdf file entitled “Bitcoin: A Peer-to-Peer Electronic Cash System,” authored by one Satoshi Nakamoto, was published on-line; click HERE for a copy. “Satoshi Nakamoto” then is the handle of the founder or group of co-founders of the Bitcoin system (abbreviated BTC), which was launched in January 2009. BTC has four special features:
• Unlike the Federal Reserve System, unlike the Bank of England, it is decentralized. There is no central computer server or authority overseeing it. It employs the technology that Napster made famous, peer-to-peer networking; individual computers on the network communicate directly with one another without passing through a central post office; Bitcoin electronic transfers are not instantaneous but they are very, very fast compared to traditional bank transfers – SWIFT and all that.
• BTC guarantees a key property of money: the same bitcoin cannot be in two different accounts and an account cannot transfer the same bitcoin twice – this is the electronic version of “you can’t spend the same dollar twice.” This also makes it virtually impossible to counterfeit a bitcoin. This is achieved by means of a technical innovation called the blockchain, which is a concise and efficient way of keeping track of bitcoins’ movements over time (“Bob sent Alice 100 bitcoins at noon GMT on 01/31/2018 … ”); it is a distributed, public account book – a “ledger,” as accountants like to say. A fingerprinting method called hashing, which reduces any block of data to a short, fixed-size digest, is employed to keep the size of the blockchain under control. Blockchain technology itself has since been adopted by tech companies such as IBM and One Network Enterprises.
• BTC protects the privacy of bitcoin transfers: transactions are recorded on the public blockchain, but they are tied to cryptographic addresses rather than to real-world identities, so in practice a transfer is known only to the sender and the receiver. For this, in addition to hashing, it uses sophisticated cryptographic protocols such as public key encryption; this is the method that distinguishes “https” from “http” in URL addresses and makes a site safe. Public key encryption is based on an interesting mathematical idea – a problem that is solvable in principle but cannot be solved in our lifetimes, even with the best algorithms running on the fastest computers; this is an example of the phenomenon mathematicians call “combinatorial explosion.” The receiver has created this set of problems or puzzles himself, and so his private key gives him a way to decrypt the sender’s message. This impenetrability makes Bitcoin an example of a crypto-currency. This feature clearly makes the system attractive to parties with a need for privacy and makes it abhorrent to tax collectors and regulators.
• New bitcoins are created by a process called mining that in some ways emulates mining gold. A new bitcoin is born when a computer manages to be the first to solve a terribly boring mathematical problem at the expense of a great deal of time, computer cycles and electricity; in the process of mining, as a side-effect, the BTC miners perform some of the grunt work of verifying that a new transaction can be added safely to the blockchain. Also, in analogy with gold, there is a limit of 21 million on the number of unit bitcoins that can be mined. This limit is projected to be reached around the year 2140; as is to be expected, this schedule is based on a clever strategy, one that reduces the rewards for mining over time.
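The blockchain bullet above can be made concrete with a toy sketch in Python: a hash serves as a tamper-evident fingerprint, and each block stores the hash of its predecessor. This is only an illustration of the chaining idea, not Bitcoin’s actual data structures – real blocks also carry timestamps, many transactions organized in a Merkle tree, and proof-of-work fields.

```python
import hashlib
import json

def hash_block(block):
    # A hash is a short digital fingerprint: change any byte of the
    # block and the fingerprint changes completely.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transaction, prev_hash):
    # Each block records a transaction plus the hash of the block
    # before it -- that link is what makes the ledger a "chain".
    return {"transaction": transaction, "prev_hash": prev_hash}

# Build a tiny three-block ledger.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("Bob sent Alice 100 bitcoins at noon GMT on 01/31/2018",
                hash_block(genesis))
b2 = make_block("Alice sent Carol 40 bitcoins", hash_block(b1))

# Tampering with an old block breaks every later link in the chain.
genesis["transaction"] = "genesis -- tampered"
print(hash_block(genesis) == b1["prev_hash"])  # False: the tampering is detected
```

Because each block commits to the fingerprint of the one before it, rewriting history would mean recomputing every later block – which is exactly what the proof-of-work described below makes prohibitively expensive.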
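The public key idea in the privacy bullet can be illustrated with RSA, the classic public key scheme (Bitcoin itself actually relies on elliptic-curve digital signatures, but the underlying one-way-problem idea is the same). The numbers below are textbook-tiny and purely illustrative; real keys use primes hundreds of digits long, and that size is where the combinatorial explosion protects the secret.

```python
# Toy RSA, for illustration only: everyone may know the public key (n, e),
# but recovering d requires factoring n -- easy here, hopeless at real sizes.
p, q = 61, 53                     # the receiver's secret primes
n = p * q                         # 3233, published as part of the public key
e = 17                            # public exponent, also published
d = 2753                          # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

message = 65                      # a message encoded as a number smaller than n
cipher = pow(message, e, n)       # anyone can encrypt with the public key (n, e)
decoded = pow(cipher, d, n)       # only the holder of d can decrypt
print(message, cipher, decoded)   # 65 2790 65
```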
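The mining bullet describes what is called proof of work: find a number (a “nonce”) that makes a hash come out with a prescribed pattern, and the only known method is trial and error. A toy version follows; Bitcoin’s actual target rule and double-SHA-256 scheme differ in detail.

```python
import hashlib

def mine(block_data, difficulty=4):
    # Proof of work: find a nonce whose hash starts with `difficulty`
    # zeros. The only strategy is brute force, which is why mining
    # burns time, computer cycles and electricity.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Bob sent Alice 100 bitcoins")
print(digest[:4])  # "0000" -- checking the answer is instant; finding it was the work
```

The asymmetry is the point: anyone can verify the winning nonce with a single hash, but finding it took thousands of attempts – and raising the difficulty by one zero multiplies the expected work by sixteen.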
The bitcoin can be divided into a very small unit called the satoshi – one hundred-millionth of a bitcoin. This means that $5 purchases, say, can be made. For example, using Gyft or eGifter or another such system, one can use bitcoin for purchases in participating stores or even meals in restaurants. In the end, it is supply and demand that infuse bitcoins with value, the demand created by usefulness to people. It is easy enough to get into the game; for example, you can click HERE for one of many sites that support BTC banking and the like.
The future of Bitcoin itself is not easy to predict. However, digital currency is here to stay; there are already many digital currency competitors (e.g. Ethereum, Ripple) and even governments are working on ways to use this technology for their own national currencies. For your part, you can download the Satoshi Nakamoto paper, slog your way through it, rent a garage and start your own company.


At the very beginning of the 1600’s, explorers from England (Gosnold), Holland (Hudson and Block) and France (Champlain) nosed around Cape Cod and other places on the east coast of North America. Within a very short time, New England, New Netherland and New France were founded along with the English colony in Virginia; New Sweden followed soon after. Unlike the early Spanish conquests and settlements in the Americas, which were under the aegis of the King of Spain and legitimized by a papal bull, these new settlements were undertaken by private companies – the Massachusetts Bay Colony, the Dutch West India Company, the Compagnie des Cent-Associés de la Nouvelle France, the Virginia Company, the New Sweden Company.

New Sweden was short lived and was taken over by New Netherland; New Netherland in turn was taken over by the Duke of York for the English crown.

In terms of sheer area, New France was the most impressive. To get an idea of its range, consider taking a motor trip through the U.S.A. going from French named city to French named city, a veritable Tour de France:

Presque Isle ME -> Montpelier VT -> Lake Placid NY -> Duquesne PA -> Vincennes IN -> Terre Haute IN -> Louisville KY -> Mobile AL -> New Orleans LA -> Petite Roche (now Little Rock) AR -> Laramie WY -> Coeur d’Alene ID -> Pierre SD -> St Paul MN -> Des Moines IA -> Joliet IL -> Detroit MI

That covers most of the U.S. and you have to add in Canada. It is interesting to note that even after the English takeover of New Netherland (NY, NJ, PA, DE) in 1664, the English territories on the North American mainland still basically came down to the original 13 colonies of the U.S.

The first question is how did the area known as the Louisiana Territory get carved out of New France? Mystère.

To look into this mystery, one must go back to the French and Indian War, which in Europe is known as the Seven Years War. This war, which started in 1756, was a true world war in the modern sense of the term, with fronts on five continents and with many countries involved. This was the war in which Washington and other Americans learned from the French and their Indian allies how to fight against the British army – avoid open field battles above all. This was the war which left England positioned to take control of India; this was the war that ended New France in North America: with the Treaty of Paris of 1763, all that was left to France in the New World was Haiti, two islands in the Caribbean (Guadeloupe and Martinique) and a pair of islands off the Grand Banks (St. Pierre and Miquelon). The English took control of Canada and all of New France east of the Mississippi. Wait a second – what happened to New France west of the Mississippi? Here the French resorted to a device they would use again to determine the fate of this territory – the secret treaty.

In 1762, realizing all was lost in North America but still in control of the western part of New France, the French king, Louis XV (as in the furniture), transferred sovereignty over the Louisiana Territory to Spain in a secret pact, the Treaty of Fontainebleau. (For a map, click HERE) The British were not informed of this arrangement when signing the Treaty of Paris in 1763, apparently believing that the area would remain under French control. On the other hand, in this 1763 treaty, the Spanish ceded the Florida Territory to the British. This was imperial real estate wheeling-and-dealing at an unparalleled scale; but to the Europeans the key element of this war was the Austrian attempt to recover Silesia from the Prussians (it failed); today Silesia is part of Poland.

How did the Louisiana Territory get from being part of Spain to being part of the U.S.? Again mystère.

The Spanish period in the Louisiana Territory was marked by struggles over Native American slavery and African slavery. With the Treaty of Paris of 1783, which ended the American War of Independence, the Florida Territory, which included the southern ends of Alabama and Mississippi, was returned to Spain by the British. For relations between the U.S. and Spain, the important issue became free navigation on the Mississippi River. Claims and counterclaims were made for decades. Eventually the Americans secured the right of navigation down the Mississippi. So goods could be freely shipped on the Father of Waters on barges and river boats and the cargo could still pass through New Orleans before being moved to ships for transport to further destinations. This arrangement was formalized by a treaty in 1795, known as Pinckney’s Treaty, but one honored by the Spanish governor often in the breach.

The plot thickened in 1800 when France and Spain signed another secret treaty, the Third Treaty of San Ildefonso. This transferred control of the Louisiana Territory back to the French, i.e. to Napoleon.

Switching to the historical present for a paragraph, Napoleon’s goal is to re-establish New France in New Orleans and the rest of the Louisiana Territory. This ambition so frightens President Thomas Jefferson that, in a letter to Robert Livingston, the ambassador to France, he expresses the fear that the U.S. will have to seek British protection if Napoleon does in fact take over New Orleans:

“The day that France takes possession of New Orleans…we must marry ourselves to the British fleet and nation.”

This from the author of the Declaration of Independence!! So he instructs Livingston to try to purchase New Orleans and the surrounding area. This letter is dated April 18, 1802. Soon he sends James Monroe, a former ambassador to France who has just finished his term as Governor of Virginia, to work with Livingston on the negotiations.

The staging area for Napoleon’s scheme was to be Haiti. However, Haiti was the scene of a successful rebellion against French rule in the 1790’s, led by Toussaint Louverture, which brought about the abolition of slavery in Haiti and on the entire island of Hispaniola by 1801. Napoleon’s response was to send a force of 31,000 men to retake control. At first, this army managed to defeat the rebels under Louverture, to take him prisoner, and to re-establish slavery. Soon, however, the army was out-maneuvered by the skillful military tactics of the Haitians and it was decimated by yellow fever; finally, at the Battle of Vertières in 1803, the French force was defeated by an army under Jean-Jacques Dessalines, Louverture’s principal lieutenant.

With the defeat of the French in Haiti at the hands of an army of people of color, the negotiations in Paris over transportation in New Orleans turned suddenly into a deal for the whole of the Louisiana Territory – for $15 million. The Americans moved swiftly, possibly illegally and unconstitutionally, secured additional funding from the Barings Bank in London and overcame loud protests at home. The Louisiana Territory was formally ceded to the U.S. on Dec. 20, 1803.

Some numbers: the price of the Louisiana Purchase comes to less than 3 cents an acre; adjusted for inflation, this is $58 an acre today – a good investment indeed made by a man, Thomas Jefferson, whose own finances were always in disarray.

There is something here that is reminiscent of the novel Catch-22 and the machinations of the character Milo Minderbinder (Jon Voight in the movie): Barings Bank profited from this large transaction by providing funds for the Napoleonic regime at a point in time when England was once more at war with France! What makes the Barings Bank stunt more egregious is that Napoleon was planning to use the money for an invasion of England (which never did take place). But, war or no war, money was being made.

The story doesn’t quite end there. The British were not happy with these secret treaties and the American purchase of the Louisiana Territory, but they were too occupied by the Napoleonic Wars to act. However, their hand was forced with the outbreak of the War of 1812. At their most ambitious, the British war aims were to restore the part of New France in the U.S. that lay east of the Mississippi to Canada and to gain control of the Louisiana Territory to the west of the Mississippi. To gain control of the area east of the Mississippi, forces from Canada joined with Tecumseh and other Native Americans; this strategy failed. With Napoleon’s exile to Elba, a British force was sent to attack New Orleans in December 1814 and to gain control of the Louisiana Territory. This led to the famous Battle of New Orleans, to the victory which made Andrew Jackson a national figure, and to that popular song by Johnny Horton. So this strategy failed too. It is only at this point that American sovereignty over the Louisiana Territory became unquestioned. It can be pointed out that the Treaty of Ghent ending this war had been signed before the battle; however, it was not ratified by the Senate until a full month after the battle, and who knows what a vengeful and batty George III might not have done had the battle gone in his favor. It can be said that it was only with the conclusion of this war that the very existence of the U.S. and its sovereignty over vast territories were no longer threatened by European powers. The Monroe Doctrine soon followed.

The Haitians emerge as the heroes of this story. Their skill and valor forced the French to offer the entire Louisiana Territory to the Americans at a bargain price, and theirs was the second nation in the Americas to declare its independence from its European overlords – January 1, 1804. However, when Haiti declared its independence, Jefferson and the Congress refused to recognize their fellow republic and imposed a trade embargo, because they feared the Haitian example could lead to a slave revolt here. Since then, French and American interference in the nation’s political life has occurred repeatedly, rarely with benign intentions. And the treatment of Haitian immigrants in the U.S. today hardly reflects any debt of gratitude this nation might have.

The Haitian struggle against the French is the stuff of a Hollywood movie, what with heroic figures like Louverture, Dessalines and others, political intrigues, guerrilla campaigns, open-field battles, defeats and victories, and finally a new nation. Hollywood has never taken this on (although Danny Glover appears to be working on a project), but in the last decade there have been a French TV mini-series (since repackaged as a feature film) and other TV shows about this period in Haitian history.

The Barings Bank continued its financially successful ways. At one point, the Duc de Richelieu called it “the sixth great European power”; at another, it actually helped the Americans carry out deals in Europe during the War of 1812, again proving that banks can be above laws and scruples. However, its comeuppance finally came in 1995. It was then the oldest investment bank in the City of London and banker to the Queen, but the wildly speculative trades of a star trader in the Singapore office forced the bank into failure; it was sold off to the Dutch bank ING for £1. The villain of the piece, Nick Leeson, was played by Ewan McGregor in the movie Rogue Trader.

In the end, Napoleon overplayed his hand in dealing with Spain. In 1808, he forced the abdication of the Spanish king Carlos IV and installed his own brother as “King of Spain and the Indies” in Madrid. This led to a long guerrilla war in Spain which relentlessly wore down the troops of the Grande Armée and which historians consider to have been the beginning of the end for the Little Corporal.





The modern world uses a number system built around 10, the decimal system. Things are counted and measured in tens and powers of ten. Thus we have 10 years in a decade, 100 years in a century and 1000 years in a millennium. On the other hand, the number 60 pops up in some interesting places; most notably, there are 60 minutes in an hour and 60 seconds in a minute. The only challenge to this setup came during the decimal-crazed French Revolution, which introduced a system with 10 hours in the day, 100 minutes in the hour and 100 seconds in the minute. Their decimal metric system (meters and kilos) prevailed but the decimal time system, despite its elegance, was soon abandoned. But why have there been 60 minutes in an hour and 60 seconds in a minute in a numerical world dominated by the number 10? And that for thousands of years. Mystère.

The short answer is Babylon. In the ancient world, Babylon was renowned as a center of learning, culture, religion, commerce and riches. It gave us the code of Hammurabi and the phrase “an eye for an eye”; it was conquered by Cyrus the Great and Alexander the Great. The Israelites endured captivity there and two books of the Hebrew Bible (Ezekiel and Daniel) were written there. The Christian Bible warns us of the temptations of the Whore of Babylon. It was the Babylonian system of telling time that spread west and became the standard in the Mediterranean world: 24 hours in the day, 60 minutes in the hour, 60 seconds in the minute. So far, so good; but still why did the Babylonians have 60 minutes in an hour?  Mystère within a mystère.

Here the answer is “fractions,” a very difficult topic that throws a lot of kids in school but one that will not throw the intrepid sleuths getting to the solution of this mystery. The good news is that one-half of ten is 5 and that one-fifth of ten is 2; the bad news is that one-fourth of ten, one-third of ten and one-sixth of ten are all improper fractions: 5/2, 10/3, 5/3; same for three-fourths, two-thirds and five-sixths. A number system based on 60 is called a “sexagesimal” system. If you have a sexagesimal number system, you need different notations for the numbers from 1 through 59 rather than just 1 to 9, but it makes fractions much easier to work with – the numbers 1,2,3,4,5,6 are all divisors of 60 and so one-half, one-quarter, one-third, one-fifth, and one-sixth of 60 are whole quantities, nothing improper about them. This also applies to two-thirds, three-quarters and other common fractions.
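The arithmetic behind this can be checked in a few lines of Python: every one of 2, 3, 4, 5 and 6 divides 60 evenly, while base 10 stumbles already at thirds and quarters.

```python
from fractions import Fraction

# All the everyday fractions of 60 come out whole...
for parts in (2, 3, 4, 5, 6):
    print(f"1/{parts} of 60 =", 60 // parts)   # 30, 20, 15, 12, 10

# ...whereas a third or a quarter of 10 is already an improper fraction.
print(Fraction(10, 3), Fraction(10, 4))        # 10/3 5/2
```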

This practice of using a base different from 10 for a number system is alive and well in the computer era. For example, the base 16 hexadecimal system is used for the addresses of memory locations rather than the verbose binary number system that computers use for numerical computation. The hexadecimal system, which uses 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F as its sixteen “digits,” is also used to describe colors on web pages, and you might come across something like <body bgcolor="#FF0000"> that is instructing the page to make the background color red.
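As a small illustration, Python can move between decimal and hexadecimal directly, and unpacking the web color mentioned above shows the three bytes at work:

```python
# Base 16: each hex "digit" is worth 0-15, so two digits make one byte (0-255).
print(int("FF", 16))   # 255
print(hex(255))        # 0xff

# "#FF0000" packs three bytes: red, green, blue.
color = "FF0000"
r, g, b = (int(color[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)         # 255 0 0 -- full red, no green, no blue
```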

Having a good system for fractions is especially important if you are measuring quantities of products or land area: thus a quarter of a pound of ham, two-thirds of an acre, … . The Babylonians of the period when the Book of Daniel was written did not invent the sexagesimal system out of whole cloth; rather, they inherited it from the great Sumerian civilizations that preceded them. At the birth of modern civilization in Mesopotamia, Sumerian scribes introduced cuneiform writing (wedges on clay tablets) and then sexagesimal numbers for keeping track of accounts, fractions being important in transactions between merchants and buyers. At first, the notations for these systems would differ somewhat from city to city and also would differ depending on the thing being quantified. In the city of Uruk, 6000 years ago, there were at least twelve different sexagesimal number systems in use with differing names for the numbers from 1 to 60, each for working with different items: barley, malt, land, wheat, slaves and animals, beer, … . It is as though they were using French for one item and German for another; thus “cinq goats” and “fünf tomatoes.” What this illustrates is that it requires an insight to realize that the five in “cinq goats” is the same as the five in “fünf tomatoes.” Eventually, these systems became standardized and by Babylonian times only one system was in use.

The Sumerians gave us the saga The Epic of Gilgamesh, whose account of the great flood is a brilliant forerunner of the version in Genesis. In fact, so renowned were the Sumerian cities that the Hebrew Bible tells us the patriarch Abraham (Richard Harris in the TV movie Abraham) came from the city of Ur, a city also known as Ur of the Chaldees; at God’s bidding, he left Ur and its pagan gods and moved west with his family and retinue to the Land of Canaan. The link to Ur serves as a homage on the part of the Israelite scribes to the Sumerians, one designed to ascribe illustrious origins to the founder of the three Abrahamic religions.

In addition to being pioneers in literature, agriculture, time-telling, accounting, etc., the Sumerians pushed beyond the boundaries of arithmetic into more abstract mathematics. They developed methods we still use today for solving quadratic equations (x² + bx = c) and their methods and techniques were imported by mathematicians of the Greco-Roman world. Moreover, very early on, the Sumerian scribes were familiar with the mighty Pythagorean Theorem: an ancient clay tablet, known as “YBC 7289,” shows that these scribes knew this famous and fundamental theorem at least two thousand years before Pythagoras. For a picture of the tablet, click HERE. The image shows a square with each side of length 1 and with a diagonal of length √2 written in sexagesimal with cuneiform wedges (so this satisfies the Pythagorean formula a² + b² = c² in its most simple form 1 + 1 = 2). In his Elements, Euclid gives a proof of the Pythagorean Theorem based on axioms and postulates. We do not know whether the Sumerians thought of this remarkable discovery as a kind of theorem or as an empirical observation or as something intuitively clear or just a clever aperçu or something else entirely. This tablet was on exhibit in NYC at the Institute for the Study of the Ancient World in late 2010; for the mathematically sensitive, the event was an epiphany.
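The tablet’s approximation is easy to reproduce: expanding √2 in base 60 yields the digit sequence 1;24,51,10, the very wedges carved on YBC 7289. A short sketch:

```python
import math

# Expand sqrt(2) in base 60: repeatedly take the fractional part times 60.
x = math.sqrt(2)
whole, frac = int(x), x - int(x)
digits = []
for _ in range(3):
    frac *= 60
    digits.append(int(frac))
    frac -= int(frac)

print(whole, digits)   # 1 [24, 51, 10] -- the digits on YBC 7289
approx = 1 + 24/60 + 51/60**2 + 10/60**3
print(abs(approx - x) < 1e-5)   # True: accurate to about six decimal places
```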

The Sumerians were also great astronomers – they invented the constellations we use today and the images associated with them – and their observations and techniques were used by geometers and astronomers from the Greco-Roman world such as Eratosthenes and Ptolemy. Indeed, the Babylonian practice of using sexagesimal numbers has persisted in geography and astronomy; so to this day, latitude and longitude are measured in degrees, minutes and seconds: thus a degree of north latitude is divided into 60 minutes and each minute is divided into 60 seconds. James Polk was elected to be the 11th president of the United States on the platform “Fifty-Four Forty or Fight” meaning that he would take on the British and push the Oregon territory border with Canada up to 54°40′ .  (Polk wisely settled for 49°00′).
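The degrees-minutes-seconds scheme is just base 60 once more; converting Polk’s slogan to a decimal latitude takes one line (the function name below is our own, for illustration):

```python
# Degrees-minutes-seconds to decimal degrees: minutes are sixtieths,
# seconds are sixtieths of sixtieths -- Babylonian arithmetic intact.
def dms_to_decimal(degrees, minutes, seconds=0):
    return degrees + minutes / 60 + seconds / 3600

print(round(dms_to_decimal(54, 40), 4))   # 54.6667 -- "Fifty-Four Forty"
print(dms_to_decimal(49, 0))              # 49.0 -- the border Polk settled for
```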

The Sumerians were also great astrologers and soothsayers and it was they who invented the Zodiac that we still use today. If we think of the earth as the center of the universe, then of course it takes one year for the sun to revolve around the earth; as it revolves it follows a path across the constellations, each of which sits in an area called its “house.” As it leaves the constellation Leo and rises in front of the constellation Virgo, the sun is entering the house of Virgo. According to the horoscopes in this morning’s newspaper, the sun is in the house of Virgo from Aug 23 until Sept 22.

Recently, though, NASA scientists have noted that the Sumerian Zodiac we employ is based on observations of the relative movements of the sun, earth, planets and stars made a few thousand years ago. Things have changed since – the tilt of the earth’s axis is not the same, measurements have improved, calendars have been updated, etc.; the ad hoc Sumerian solution to keeping the number of signs the same as the number of months in the year no longer quite works. So the constellations are just not in the locations in the heavens prescribed by the ancients on the same days as in the current calendar; the sun might just not be in the house you think it’s in – if you were a Capricorn born in early January, you are now a Sagittarius. So far, the psychic world has pretty much ignored the implications of all this – people are just too attached to their signs.

It must be admitted that the NASA people did get carried away with the numbers and the science. They stipulated that there should actually be 13 signs, the new one being Ophiuchus, the Serpent Bearer (cf. Asclepius, the Greek god of medicine, and his serpent-entwined staff); this is a constellation that was known to the Sumerians and Babylonians but one which they finessed out of the Zodiac to keep the number of signs at 12. Click HERE for a picture. However, it is a sign of the Zodiac of Vedic Astrology and, according to astrologer Tali Edut, “It’s a pretty sexy sign to be!”

But why insist on 12 months and 12 signs, you may ask. Again mystère. This time the solution lies in ancient Egypt. The Egyptians started with a calendar based on 12 lunar months of 28 days each, then moved to 12 months of 30 days each with 5 extra days inserted at the end of the year. This solved the problem, encountered world-wide, of synchronizing the solar and lunar years (the leap year was later added). And this 12 month year took root. We also owe the 24 hour day to the Egyptians, who divided the day into 12 day hours and 12 night hours; at the outset, the length of an hour would vary with the season and the time of the day so as to ensure that there were 12 hours of daylight and 12 hours of nighttime. The need for simplicity eventually prevailed and each hour became one-twenty-fourth of a day.

The number 12 is also handy when it comes to fractions and to this day it is the basis for many measuring systems: 12 donuts in a dozen, 12 dozen in a gross, 12 inches in a foot, 12 troy ounces in a troy pound. One that recently bit the dust, though, is 12 pence in a shilling; maybe Brexit will bring it back and make England Great Again.

Battle Creek

Battle Creek is a city of some 50,000 inhabitants in southwestern Michigan situated at the point where the Battle Creek River flows into the Kalamazoo River. The name Battle Creek traces back, according to local lore, to a skirmish between U.S. soldiers and Native Americans in the winter of 1823-1824.
At the beginning of the 20th century, Battle Creek was the Silicon Valley of the time: entrepreneurs, investors and workers poured in, companies were started, fortunes were made. As the local Daily Moon observed in 1902, “Today Battle Creek is more like a boom town in the oil region than a staid respectable milling center.” A new industry had taken over the town: “There are … large establishments some running day and night” and all were churning out the new product that launched this gold rush.
Even before the boom, Battle Creek was something of a manufacturing center producing such things as agricultural equipment and newspaper printing presses. But this was different. Battle Creek was now known as “Health City.” So what was this new miracle product? Mystère.
Well, it was dry breakfast cereal, corn flakes and all that. By 1902, more than 40 companies were making granulated or flaked cereal products with names like Vim, Korn-Krisp, Zest, X-Cel-O, Per-Fo, Flak-ota, Corn-O-Plenty, Malt-Too; each labeled as the perfect food.
And how did Battle Creek come to be the center of this cereal boom? Again mystère.
For this, things turn Biblical and one has to go back to the Book of Daniel and the Book of Revelation; the first is a prophetic book of the Hebrew Bible, the second a prophetic book of the New Testament. Together, the books offer clues as to the year when the Second Coming of Christ will take place. Belief in the imminence of this time is known as adventism. None other than Isaac Newton devoted years to working on this. Newton came to the conclusion that it would be in the year 2060, although sometimes he thought it might be 2034 instead; superscientist though he was, Newton could have made a mistake – he also invested heavily in the South Sea Bubble.
The First Great Awakening was a Christian revival movement that swept England and the colonies in the 1700’s; among its leaders were John Wesley (of Methodism) and Jonathan Edwards (of “Sinners in the Hands of an Angry God”). During the Second Great Awakening in the first part of the 19th century, in the U.S. the adventist William Miller declared the year of the Second Coming to be 1844. When that passed uneventfully (known as the Great Disappointment), the Millerite movement splintered but some regrouped and soldiered on. One such group became the Jehovah’s Witnesses. Another became the Seventh Day Adventists.
The “Seventh Day” refers to the fact that these Adventists moved the Sabbath back to its original day, the last day of the week, the day on which the Lord rested after the Creation, the day observed by the Biblical Israelites and by Jews today, viz., Saturday. The early Gentile Christians, after breaking off from their Jewish roots, moved their Sabbath to Sunday because that was the day of rest for the non-Jewish population of the Roman Empire; they likely also wanted to distance themselves from Jews after the revolts of 66-73 AD and 132-136 AD, uprisings against the power of Rome.
The Seventh Day Adventists do not claim to know the year of the Second Coming but do live in anticipation of it. Article 24 of their statement of faith concludes with
  “The almost complete fulfillment of most lines of prophecy, together with the present condition of the world, indicates that Christ’s coming is imminent. The time of that event has not been revealed, and we are therefore exhorted to be ready at all times.”
In 1863, in Battle Creek, the church was formally established and had 3,500 members at the outset. So the plot thickens: “Battle Creek” did you say?
Their system of beliefs and practices extends beyond adventism. They are basically pacifists and eschew combat. Unlike the Quakers, they leave the decision to serve in the Armed Forces to the individual and, typically, those who are in the military serve as medics or in other non-combatant roles. They are also vegetarians; they proscribe alcohol and tobacco – coffee and tea as well; they consider the health of the body to be necessary for the health of the spirit.
It is their interest in health that leads to a solution to our mystery. As early as 1863, Adventist prophetess Ellen White had visions about health and diet. She envisaged a “water cure and vegetation institution where a properly balanced, God-fearing course of treatments could be made available not only to Adventists, but to the public generally.” Among her supporters in this endeavor were her husband James White and John and Ann Kellogg. The plot thickens again: “Kellogg” did you say? The Kelloggs were Adventists to the point that they did not believe that their son John Harvey or their other children needed a formal education, because of the imminence of the Second Coming. In 1866, the Western Health Reform Institute was opened in Battle Creek, realizing Ellen White’s vision. By then the Whites had taken an interest in the self-taught John Harvey Kellogg, which eventually led to their sending him for medical training with a view to having a medical doctor at the Institute. He finished his training at the NYU Medical College at Bellevue Hospital in New York City in 1875. In 1876, John Harvey Kellogg became the director of the Institute and he would lead this institution until his death in 1943. In 1878, he was joined by his younger brother Will Keith Kellogg, who worked on the business end of things.
John Harvey Kellogg threw himself into his work. He quickly changed the name to the Battle Creek Medical Surgical Sanitarium, coining the term sanitarium – as distinct from sanatorium – to describe a place where one would learn to stay healthy. He described the Sanitarium’s system as “a composite physiologic method comprising hydrotherapy, phototherapy, thermotherapy, electrotherapy, mechanotherapy, dietetics, physical culture, cold-air cure, and health training.” Physical exercise was thus an important component of the system; somewhat inconsistently, sexual abstinence was also strongly encouraged as part of the program.
Kellogg’s methods could be daring, if not extreme; what websites most remember him for is his enema machine, which used yogurt as well as water, administered orally as well as rectally.
Through all this, vegetarianism remained a principal component of the program. The Kelloggs continually experimented with ways of making vegetarian foods more palatable and more effective in achieving the goals of the Sanitarium. In 1894, serendipity struck: they were working to produce sheets of wheat when they left some cooked wheat unattended; when they came back, they continued processing it but produced flakes of wheat instead of sheets – flakes which could be toasted and served to guests at the Sanitarium. They filed for a patent in 1895 and it was issued in 1896.
John Harvey Kellogg showed this new process to patients at the Sanitarium. One guest, C. W. Post, grasped its commercial potential and started his own company, a company that became General Foods. See what we mean by Silicon Valley-level rewards – General Foods was the Apple Computer of the processed food industry, with an industrial name modeled on General Electric and General Motors. (The name is gone today; the company eventually merged with Kraft.) Post’s first cereal product, in 1897, was Elijah’s Manna, later rechristened Grape-Nuts Flakes – of that name, only Flakes is accurate, as the ingredients are wheat and barley. But the Gold Rush was on.
In 1906, Will Keith Kellogg founded the Battle Creek Toasted Corn Flake Company; this company was later named Kellogg’s and, to this day, it is headquartered in Battle Creek and continues to bless the world with its corn flakes and other dry breakfast cereals.
Through all this, the Sanitarium continued on and, in fact, prospered. Its renown spread very far and very wide and a remarkable set of patients spent time there. This list includes William Howard Taft, George Bernard Shaw, Roald Amundsen and Sojourner Truth.
Perhaps the simplest proof of the efficacy of the Kelloggs’ methods is that both brothers lived past 90. For another proof, let us go from the 19th and 20th centuries to the 21st, and let us move from Battle Creek, Michigan to Loma Linda, California.
Loma Linda is the only place in the U.S. that made it onto the list of Blue Zones – places in the world where people have exceptionally long life spans. (The other places are in Sardinia, Okinawa, Greece and Costa Rica). The reason is that this California area has a large population of Seventh Day Adventists and they live a decade longer than the rest of us. They follow the principles of their early coreligionists: a vegetarian diet, physical exercise, no alcohol, no tobacco – to which is added that sense of community and purpose in life that their shared special beliefs bring to them.
Playful Postscript:
For the unserious among us, much of the goings-on at the Sanitarium could be the stuff of high comedy. In fact, it inspired the author T. Coraghessan Boyle to write a somewhat zany novel, The Road to Wellville, which was later made into a movie. The characters have fictitious names except for Dr. John Harvey Kellogg (played by Anthony Hopkins in the movie). The title itself comes from a booklet written by C.W. Post which used to be given out with each box of Grape-Nuts Flakes.
Tragic Postscript:
In the 1930’s, a group broke off from the Seventh Day Adventist church and eventually became known as the Davidian Seventh Day Adventists. In turn, in the 1950’s, there was a split among them and a new group, the Branch Davidians, was formed; so we are at two removes from the church founded in Battle Creek. In 1982, Vernon Wayne Howell, then 22 years old, moved to Waco, Texas and joined the Branch Davidians there; he subsequently changed his name to David Koresh. Koresh is the Hebrew ( כֹּרֶשׁ ) for Cyrus (the Great), the Persian king who is referenced in the Book of Daniel and elsewhere in the Hebrew Bible; Cyrus is the only non-Jew to be named a Messiah (a person chosen by God for a mission) in the Hebrew Bible (Isaiah 45:1), and it was he who liberated the Jews from the Babylonian Captivity. As David Koresh, Howell eventually took over the Waco Branch Davidian compound and turned it into a nightmarish cult. There followed the horrifying assault in 1993 and the deaths of so many. To make an unimaginable horror yet worse, this assault has become a rallying cry for paranoid militia people – its second anniversary was a motivation for Timothy McVeigh and Terry Nichols when they planned the Oklahoma City bombing.


Ernest Hemingway famously favored Anglo-Saxon words and phrases over Latin or French ones: thus “tell” and not “inform.”  Scholars, critics and Nobel Prize committees have analyzed passages such as
“What do you have to eat?” the boy asked.
“A pot of yellow rice with fish. Do you want some?”
“No, I will eat at home; do you want me to make the fire?”
“No, I will make it later on, or I may eat the rice cold.”
Not a single word from French or Latin; not a single subordinate clause, no indirect discourse, no adverbs, … .
Some trace this aspect of his style to Hemingway’s association with Gertrude Stein, Ezra Pound, Djuna Barnes and modernist writers. Others trace it to his first job as a cub reporter for the Kansas City Star and the newspaper’s style guide:  “Use short sentences. Use short first paragraphs. Use vigorous English. Be positive, not negative.”
However that may be, Hemingway was true to his code and he set a standard for American writing. Still, it is often impossible to say whether a word or phrase is Anglo-Saxon or not. For example, the word sound has four meanings, each derived from a different language: “sound” as in “Long Island Sound” (Norse), as in “of sound mind” (German), as in “sound the depths of the sea” (French), as in “sound of my voice” (Latin). To add to the confusion, the fell in “one fell swoop” is from Norman French (same root as felon), though nothing sounds more Anglo-Saxon than fell. Of the synonyms pigeon and dove, it is the former that is French and the latter that is Anglo-Saxon. Nothing sounds more Germanic than skiff, but it comes from the French esquif. So where can you find 100% Anglo-Saxon words lurking about? Mystère.
Trades are a good source of Anglo-Saxon words — baker, miller, driver, smith, shoemaker, sawyer, wainwright, wheelwright, millwright, shipwright; playwright doesn’t count, it’s a playful coinage from the early 17th century introduced by Ben Jonson. Barnyard animals also tend to have Old English origins — cow, horse, sheep, goat, chicken, lamb, … . Body parts too — foot, arm, leg, eye, ear, nose, throat, head, … .
Professions tend to have Latin or French names — doctor, dentist, professor, scientist, accountant, … ; teacher, lawyer, writer and singer are exceptions though.
Military terms are not a good source at all; they are relentlessly French — general, colonel, lieutenant, sergeant, corporal, private, magazine, platoon, regiment, bivouac, caisson, soldier, army, admiral, ensign, marine and on and on.
However, there is a rich trove of Anglo-Saxon words to be found in the calendars of the Anglican and Catholic Churches. The 40 days of Lent begin on Ash Wednesday, and the last day before Lent is Mardi Gras. Lent and Ash Wednesday are Anglo-Saxon in origin, but Mardi Gras is a French term. Literally, Mardi Gras means Fat Tuesday, and the Tuesday before Lent is now often called Fat Tuesday in the U.S. But there is a legitimate, traditional Anglo-Saxon name for it, namely Shrove Tuesday. Here shrove refers to the sacrament of Confession and the need to have a clean slate going into Lent; the word shrove is derived from the verb shrive, which means “to hear confession.” The expression “short shrift” is also derived from this root: a short confession was given to prisoners about to be executed. Another genuinely English version of Mardi Gras is Pancake Tuesday, which, like Mardi Gras, captures the fact that the faithful need to fatten up before the fasting of Lent. Raised Episcopalian but strongly attracted to the ritual and pageantry of Catholicism, Hemingway was in fact listed as a Catholic for his second marriage, the one to Pauline Pfeiffer. Still, Hemingway never got to use the high church Anglo-Saxon term Shrove Tuesday (or even Pancake Tuesday) in his writings. On the other hand, Hemingway always wrote “Holy Ghost” and never would have cottoned to the recent shift to the Latinate “Holy Spirit.”
Staying with high church Christianity — Lent goes on for forty days until Easter Sunday; the period of Eastertide begins with Palm Sunday which celebrates Christ’s triumphant entry into Jerusalem for the week of Passover. The Last Supper was a Passover Seder dinner. Thus, in Italian, for example, the word for Passover and the word for Easter are the same; if the context is not clear, one can distinguish them as “Pasqua” and “Pasqua Ebraica.” Something similar applies in French and Spanish. So how did English usage come to be so different? Mystère.
Simply put, Easter Sunday is named for a Pagan goddess. In the Middle Ages, the Venerable Bede, author of the first history of the English people, wrote that the Christians in England adopted the name the local Pagans gave to a holiday in honor of their goddess of the spring, Ēastre – you cannot get more Anglo-Saxon than that.
In addition to Eastertide, the Anglo-Saxon root tide is also used for other Christian holiday periods – Whitsuntide (Pentecost), Yuletide (Christmas), … . This venerable meaning of tide as a period of time is also the one that figures in the expression
    Neither tide nor time waits for no man.
Nothing to do with maritime tides whether high, low or neap. Hemingway would likely have avoided this phrase, though, because of its awkward, archaic double negative.
Sometimes French words serve to protect us from the brutal frankness of the Anglo-Saxon. Classic examples of this are beef and pork which are derived from the French boeuf and porc. When studying a German menu, it is always disconcerting to have to choose between “cow flesh” and “pig flesh.” Even Hemingway would have to agree.
An area where Hemingway is on more solid ground is that of grammar. The structure of English is basically Germanic. The Norman period introduced a large vocabulary of French and Latin words, but French had very little influence on English grammar. The basic reason for this is that Norman French used Latin as the language for the administration of the country; thus the Domesday Book and the Magna Carta were written in Latin. Since French was the language of the court, however, English legal vocabulary to this day employs multiple French words and phrases such as the splendid “Oyez, Oyez.”
While the grammar of English can be classified as Germanic, there are some key structural elements that are Celtic in origin. An important example is the “useless do,” as in “do you have a pen?” English is one of the only languages that insert an extra verb, in this case do, to formulate a question; typically in other languages, one says “have you a pen?” or “you have a pen?” with a questioning tone of voice. Another Celtic import is the progressive tense, as in “I am going to the store,” which can express a mild future tense or a description of current activity. This progressive tense is a particular challenge for students in ESL courses.
Danish invaders such as the Jutes have also contributed to English structure – for example, there is the remarkably simple way English has of conjugating verbs: contrast the monotony of  “I love, you love, it loves, we love, you love, they love” with other languages.
When languages collide like this, one almost invariably emerges as the “winner” in terms of structure with the others’ contributing varying amounts of vocabulary and with their speakers’ influencing the pronunciation and music of the language. The English language that emerged finally at the time of Chaucer was at its base Anglo-Saxon but it had structural adaptations from Celtic and Norse languages as well as a vast vocabulary imported whole from Latin and French. This new language appeared suddenly on the scene in the sense that during all this time it wasn’t a language with a literature like the other languages of Britain – Old English, Latin, French, Welsh, Cornish, … ; so it just simmered for centuries but eventually the actual spoken language of the diverse population forced its way to the surface.
Hemingway himself spent many years in places where romance languages are spoken – Paris, Madrid, Havana, … . Maybe this helped insulate him from the galloping changes in American and English speech and writing and let him ply his craft in relative peace. How else could he have ended a tragic wartime love story with a sentence so perfect yet so matter-of-fact as “After a while I went out and left the hospital and walked back to the hotel in the rain.”