Grumpy opinions about everything.


The Art of Rigging Democracy: A Close Look At Gerrymandering

Wikimedia Commons: Elkanah Tisdale (1771-1835) (often falsely attributed to Gilbert Stuart), public domain


Picture this: It’s 1812 in Massachusetts, and Governor Elbridge Gerry has just approved a redistricting plan with a legislative district so bizarrely shaped that a local newspaper editor, seeing it on a map, thought it looked like a salamander. The editor sketched wings and claws onto the district, and someone quipped that it looked more like a “Gerry-mander” than a salamander. The term stuck, and more than two centuries later, we’re still dealing with the same problem that inspired that joke.
What Gerrymandering Actually Means
At its core, gerrymandering is the practice of drawing electoral district boundaries to give one political party or group an unfair advantage over its opponents. It’s a form of political manipulation that allows those in power to essentially choose their voters, rather than letting voters choose their representatives. Think of it as a sophisticated form of gaming the system—perfectly legal in many cases, but profoundly anti-democratic in spirit.
The mechanics are surprisingly straightforward. There are two main techniques: “cracking” and “packing.” Cracking involves splitting up concentrations of opposing voters across multiple districts so they can’t form a majority anywhere. Packing does the opposite—cramming as many opposing voters as possible into a few districts so they waste their votes winning by huge margins in just a couple of places, leaving the rest of the districts safely in your column. These techniques can be deployed together to engineer a decisive partisan advantage.
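The arithmetic is easy to see in miniature. Below is a toy sketch in Python—every number is invented for illustration, not drawn from any real map—showing how the same electorate, with Party A holding 40% of the vote, can be held to zero, one, or two seats out of five depending entirely on where the lines fall.

```python
# Toy illustration of cracking and packing. All numbers are invented:
# five districts of 100 voters each, with Party A holding 200 of the
# 500 votes statewide (40%).

def seats_won(plan):
    """Count districts where Party A voters outnumber Party B voters."""
    return sum(1 for a, b in plan if a > b)

# Each tuple is (Party A voters, Party B voters) in one district.

# Packing: cram Party A into one district it wins 90-10, wasting its
# votes on a huge margin, then crack the rest across safe B districts.
packed_and_cracked = [(90, 10), (28, 72), (28, 72), (27, 73), (27, 73)]

# Pure cracking: spread Party A's 40% evenly so it wins nowhere.
cracked_only = [(40, 60)] * 5

# A roughly proportional plan, for comparison.
proportional = [(70, 30), (70, 30), (20, 80), (20, 80), (20, 80)]

for name, plan in [("packed + cracked", packed_and_cracked),
                   ("cracked only", cracked_only),
                   ("proportional", proportional)]:
    print(f"{name:16s}: Party A wins {seats_won(plan)} of 5 seats")
```

Same voters, three very different outcomes—which is the whole point of the technique.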
A History of Creative Mapmaking
The practice didn’t start with Gerry, of course. The Founding Fathers—for all their lofty rhetoric about representative democracy—weren’t above putting their thumbs on the electoral scales. But gerrymandering really came into its own in the 20th century, as advances in census data and statistics, combined with newly available computing power, made it possible to draw districts with surgical precision.
The 2010 redistricting cycle marked a watershed moment. Single-party control of the redistricting process gave partisan line drawers free rein to craft some of the most extreme gerrymanders in American history, often drawn down to the level of individual city blocks. Republicans, having won control of many state legislatures in the 2010 midterms, used sophisticated computer modeling to create maps that locked in their advantages for a decade. Democrats did the same where they had the power, though Republicans controlled more state legislatures and thus wielded greater gerrymandering capability overall.
The 2024-2025 Gerrymandering Wars
Here’s where things get really interesting—and deeply concerning. The situation has exploded into what some are calling “gerrymandering wars” following the 2020 census and a critical Supreme Court decision in 2019. In Rucho v. Common Cause, the Supreme Court ruled that partisan gerrymandering is a non-justiciable “political question” beyond the reach of the federal courts. Translation: The federal courts won’t stop partisan gerrymandering because they claim there’s no objective standard to measure it.
This opened the floodgates. The Brennan Center estimates that gerrymandering gave Republicans an advantage of roughly 16 House seats in the 2024 race for control of Congress, compared with fair maps. But here’s the kicker: we’re not even done with this decade’s redistricting.
In an unprecedented move, President Donald Trump has pushed Republican state lawmakers to further gerrymander their states’ congressional maps, prompting Democratic state lawmakers to respond in kind. In August 2025, during a special session, Texas’s legislature passed a redistricting plan that weakens electoral opportunities for Black and Hispanic voters. California has threatened to respond with its own gerrymander, creating a tit-for-tat dynamic that could spiral out of control.
North Carolina provides perhaps the most dramatic example. After the state supreme court reversed its position on policing partisan gerrymandering, the Republican-controlled legislature redrew the map; in the 2024 election, three previously Democratic districts flipped to Republicans—enough to give the GOP control of the U.S. House by a slim margin.
The situation has gotten so extreme that both parties are now openly engaging in mid-decade redistricting—something that traditionally only happened after each ten-year census. California, Missouri, North Carolina, Ohio, Texas and Utah have all adopted new congressional maps in 2025, with new maps also appearing possible in Florida, Maryland and Virginia.
The Racial Dimension
It’s crucial to note that gerrymandering comes in two flavors: partisan and racial. While partisan gerrymandering is currently legal thanks to the Supreme Court, racial gerrymandering—drawing districts specifically to dilute the voting power of racial minorities—violates the Voting Rights Act of 1965. The line between the two can get blurry, though, since partisan voting patterns often correlate with race.
In May 2025, a federal court ruled that Alabama’s 2023 congressional map not only violates Section 2 of the Voting Rights Act but was enacted by the Alabama Legislature with racially discriminatory intent. Similar battles are playing out in Louisiana, Mississippi, and other states. The legal landscape here is complex, with courts sometimes walking a tightrope between ensuring fair representation for communities of color and avoiding the creation of what could be challenged as racial gerrymanders.
What Can Be Done About It?
The most popular reform proposal is the creation of independent redistricting commissions—bodies of citizens (not politicians) who draw district maps according to neutral criteria. Currently, several states, including Colorado, Michigan, Ohio, and Virginia, use redistricting commissions to draw congressional and state legislative maps, ranging from political commissions that include elected officials to fully independent commissions that bar all elected officials from serving as commissioners.
Do they work? The evidence is mixed but generally positive. According to a Redistricting Report Card published by the Princeton Gerrymandering Project, states that had some form of commission drew “B+” maps on average, while states where partisans controlled the process drew “D+” maps. California’s independent commission is often held up as the gold standard, though it’s not perfect—even fairly drawn maps can produce lopsided results because of how voters cluster geographically.
Another proposal focuses on clear, enforceable criteria: compactness, contiguity, respect for existing political boundaries, and transparency in the mapping process. Advances in statistical analysis also make it possible to compare proposed maps against thousands of neutral alternatives to detect extreme outliers, a method increasingly discussed in academic and legal circles.
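To sketch how that outlier comparison works—heavily simplified, since real analyses sample contiguous, population-balanced plans with Markov-chain methods, and everything below (the precinct counts, the vote shares, the proposed map’s seat total) is hypothetical—here is the basic logic in Python: build a large ensemble of neutral plans, then ask where the proposed map’s seat count falls in the resulting distribution.

```python
# Simplified sketch of ensemble outlier analysis. Everything here is
# hypothetical: 30 imaginary precincts, randomly shuffled (non-geographic)
# plans, and a made-up proposed map.
import random

random.seed(1)

# Each precinct's Party A vote share (equal populations assumed).
precincts = [random.uniform(0.3, 0.7) for _ in range(30)]

def seats(plan):
    """Seats won by Party A: districts whose mean vote share exceeds 0.5."""
    return sum(1 for district in plan
               if sum(precincts[i] for i in district) / len(district) > 0.5)

def random_plan(n_districts=5):
    """One 'neutral' alternative: shuffle precincts into equal-size districts."""
    order = list(range(len(precincts)))
    random.shuffle(order)
    size = len(order) // n_districts
    return [order[k * size:(k + 1) * size] for k in range(n_districts)]

# Build an ensemble of 10,000 neutral plans and record each one's seat count.
ensemble = [seats(random_plan()) for _ in range(10_000)]

proposed_seats = 5  # seat count under the (hypothetical) proposed map
tail = sum(1 for s in ensemble if s >= proposed_seats) / len(ensemble)
print(f"Neutral plans yielding >= {proposed_seats} seats: {tail:.2%}")
# A vanishingly small tail marks the proposed map as an extreme outlier:
# the district lines, not the voters, are doing the work.
```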
Federal legislation has been proposed repeatedly. The Redistricting Reform Act of 2025 would prohibit mid-decade redistricting and require every state to adopt a nonpartisan, independent redistricting commission. Similar provisions were included in the “For the People Act,” which House Democrats passed in 2021 but which died in the Senate. Getting such legislation through Congress would require bipartisan cooperation, which seems unlikely as long as both parties see gerrymandering as a political weapon they can’t afford to unilaterally surrender.
Some reformers advocate for more radical solutions. Proportional representation systems, where the share of votes equals the share of seats, would end boundary-drawing battles altogether and make democracy more representative. Under such a system, if Democrats win 60% of the vote in a state, they’d get roughly 60% of that state’s congressional seats. It’s intuitive and fair, but it would require a fundamental restructuring of American electoral systems—something that’s probably not politically feasible in the near term.
State courts have emerged as a potential backstop. At least 10 state supreme courts have found that state courts can decide cases involving allegations of partisan gerrymandering, even though federal courts won’t touch them. This means that state constitutional provisions against gerrymandering could provide meaningful protection—though as North Carolina demonstrated, state court compositions can change, and with them, their willingness to police gerrymandering.
The Bottom Line
Gerrymandering represents a fundamental tension in American democracy: How do we draw districts fairly when the people drawing them have every incentive to rig the game in their favor? The problem has ancient roots but ultra-modern manifestations, powered by big data and sophisticated computer modeling that would make Elbridge Gerry’s head spin.
The current moment feels particularly precarious. With Republicans holding a razor-thin majority in the House and midterm elections traditionally going against the party in power, the Republican redistricting push amounts to a preemptive move to retain control of Congress. The Democrats’ threatened response could trigger an escalating cycle of partisan map manipulation that further entrenches our political divisions and makes elections less responsive to actual voter preferences.
Independent commissions offer a promising path forward, but they’re not a silver bullet. They work better than partisan control, but they can’t eliminate all the inherent challenges of translating votes into seats through geographic districts. More ambitious reforms like proportional representation could solve the problem more completely, but they face enormous political and practical obstacles.
For now, gerrymandering remains what that newspaper editor saw in 1812: a monstrous distortion of democratic principles, hiding in plain sight on our electoral maps. The question is whether we have the political will to slay the beast, or whether we’ll keep feeding it for another two centuries.
 
Sources:
Brennan Center for Justice – Gerrymandering Explained
https://www.brennancenter.org/our-work/research-reports/gerrymandering-explained
 
Brennan Center for Justice – How Gerrymandering Tilts the 2024 Race for the House
https://www.brennancenter.org/our-work/research-reports/how-gerrymandering-tilts-2024-race-house
 
American Constitution Society – America’s Gerrymandering Crisis
https://www.acslaw.org/expertforum/americas-gerrymandering-crisis-time-for-a-constructive-redistricting-framework/
 
ACLU – Court Cases on Gerrymandering
https://www.aclu.org/court-cases?issue=gerrymandering
 
Stateline – State Courts and Gerrymandering
https://stateline.org/2025/12/22/as-supreme-court-pulls-back-on-gerrymandering-state-courts-may-decide-fate-of-maps/
 
Campaign Legal Center – Do Independent Redistricting Commissions Work?
https://campaignlegal.org/update/do-independent-redistricting-commissions-really-prevent-gerrymandering-yes-they-do
 
RepresentUs – End Partisan Gerrymandering
https://represent.us/policy-platform/ending-partisan-gerrymandering/
 
Senator Alex Padilla – Redistricting Reform Act of 2025
https://www.padilla.senate.gov/newsroom/press-releases/watch-padilla-lofgren-introduce-legislation-to-establish-independent-redistricting-commissions-end-mid-decade-redistricting-nationwide/
 
Protect Democracy – How to End Gerrymandering
https://protectdemocracy.org/work/how-to-end-gerrymandering/

The Evolution of the English Language: From Anglo-Saxon Roots to a Global Tongue

English is a beautifully messy language—shameless in its borrowing and relentless in its evolution. It resists the tidy logic that might make a grammarian’s life easier, and that resistance is part of what makes its history so compelling. The English we speak today is the product of centuries of invasion, migration, cultural collision, and literary ambition—a language built in layers, like geological strata laid down over time.

To see how English grew from an obscure Germanic dialect into a global lingua franca, it helps to trace three broad phases: Old English, Middle English, and Modern English. Each stage was shaped by different historical forces, from Germanic migration and Viking settlement to the Norman Conquest, the Renaissance, the printing press, and ultimately the worldwide reach of the British Empire and the United States.

Anglo-Saxon Foundations

The story begins on the European mainland. When Roman authority collapsed in Britain in the early fifth century, Germanic-speaking peoples from what is now northern Germany, Denmark, and the Netherlands moved into the island. The Angles, Saxons, and Jutes arrived in waves, bringing closely related West Germanic dialects that gradually developed into Old English, often called Anglo-Saxon.

Old English was thoroughly Germanic in both grammar and vocabulary. It was a highly inflected language: case endings marked whether a noun was subject, object, or possessive, and nouns had grammatical gender. Verbs were conjugated with a complexity that feels foreign to most modern English speakers. Much of the core vocabulary of modern English—words such as water, house, bread, child, earth, life, and death—dates back to this early period and still carries that Germanic stamp.

The language of Beowulf, composed between the eighth and early eleventh centuries, is virtually unreadable today without specialized training. Its famous opening line, “Hwæt! We Gardena in geardagum,” is technically English, but it feels closer to a foreign language. Old English used letters such as þ (thorn) and ð (eth) and relied on grammatical structures that later disappeared.

Nor was Old English a single uniform tongue. It existed as a cluster of regional dialects including Northumbrian, Mercian, Kentish, and West Saxon. Under King Alfred the Great in the late ninth century, Wessex became the leading political power in England and a center of learning. Alfred sponsored translations of important Latin works into Old English, most often in the West Saxon dialect. As a result, most surviving Old English texts come from that dialect, giving us only a partial view of the linguistic diversity of early England.

Latin and Celtic Influences

Even before the Anglo-Saxons arrived in Britain, Latin had begun to influence their speech through contact with the Roman world. Early Latin loanwords include street (from strata), wall (from vallum), and wine (from vinum).

A second wave of Latin influence arrived with the Christianization of England beginning in 597, when Augustine of Canterbury established a mission in Kent. Christianity introduced vocabulary connected with religion, learning, and administration—words such as church, bishop, monk, school, altar, and verse.

By contrast, the Celtic languages spoken by the native Britons left a surprisingly small mark on English vocabulary. Their influence survives most clearly in place names—for example Thames, Avon, and Dover—and in landscape terms such as combe (valley) and tor (rocky hill). Why Celtic languages left relatively few everyday words in English remains one of the lingering puzzles of linguistic history.

Vikings and the Norse Contribution

Beginning in the late eighth century, Scandinavian raiders and settlers—collectively known as Vikings—began attacking and eventually settling parts of England. By the ninth century much of northern and eastern England had become part of the Danelaw, where Old English speakers lived alongside speakers of Old Norse.

Because Old Norse and Old English were closely related Germanic languages, speakers could often roughly understand each other. Over time, however, sustained contact produced deep linguistic blending. English absorbed many Norse-derived words that now feel completely native, including sky, skin, skill, skirt, egg, leg, window, husband, call, take, give, get, want, and die.

Perhaps the most striking Norse contribution lies in the pronouns they, them, and their, which replaced earlier Old English forms. When a language adopts core pronouns from another language, it signals unusually intense and prolonged contact.

Many linguists also believe that contact with Norse speakers helped accelerate the simplification of English grammar. In bilingual communities, speakers often reduce complex inflectional endings that make communication difficult. As a result, English gradually moved away from the elaborate grammatical endings of Old English and toward a system that relied more heavily on word order.

The Norman Transformation

The Norman Conquest of 1066 transformed English more dramatically than any other single event in its history. When William of Normandy defeated King Harold at the Battle of Hastings and became king of England, he brought with him a French-speaking aristocracy.

For several centuries after the conquest, French dominated the language of power—the court, the law, the church hierarchy, and much of government administration. English remained the everyday language of the population but lost prestige in elite circles.

French vocabulary poured into English in areas associated with authority and culture. Law gained terms such as justice, court, judge, jury, prison, crime, and verdict. Government absorbed parliament, sovereign, minister, authority, tax, and treasury. Military language adopted army, navy, soldier, captain, defense, and siege.

Even the language of food reflects this social divide. The animals in the field kept their Old English names—cow, sheep, pig, and deer—while the meat served at noble tables took French names: beef, mutton, pork, and venison.

The Rise of Middle English

Over time, French dominance gradually weakened. The loss of Normandy in 1204 encouraged English nobles to identify more strongly with England itself. Later, the Black Death (1348–1350) reshaped English society by elevating the economic importance of English-speaking laborers and craftsmen.

During the fourteenth century, English returned as the language of all social classes. The language that emerged—Middle English—looked very different from Old English. Most grammatical endings disappeared, grammatical gender vanished, and sentence structure shifted toward the familiar subject-verb-object order.

At the same time, English vocabulary became a rich mixture of Germanic and Romance elements. This layering produced sets of near-synonyms with different levels of formality: ask (Germanic), question (French), and interrogate (Latin).

The most famous literary figure of this period was Geoffrey Chaucer, whose Canterbury Tales demonstrated that English could rival French and Latin as a vehicle for sophisticated literature. Chaucer wrote in the London dialect, which was gaining prominence due to the city’s political and commercial importance. Though not yet standardized, London English gradually became the foundation of later written English.

Printing and the Great Vowel Shift

William Caxton established England’s first printing press in 1476, and this technological revolution had far-reaching consequences for the language. Printing created a need for standardized spelling and grammar, since texts would now be distributed widely rather than copied by hand in local scriptoria. Caxton himself struggled with the problem of dialect variation, complaining about the difficulty of choosing forms that all English readers could understand. Over time, the conventions adopted by London printers became the de facto standard. The demands of typesetting also hastened the disappearance of the letters þ (thorn) and ð (eth), which were difficult to reproduce in printers’ type.

At the same time, English pronunciation underwent a dramatic change known as the Great Vowel Shift, which occurred roughly between 1400 and 1700. Long vowel sounds moved upward in the mouth, transforming the pronunciation of many common words. For example, “name” once sounded closer to nah-muh, while “mouse” sounded more like moose. 

The causes of the Great Vowel Shift remain debated—theories range from the social upheaval following the Black Death to the influence of French-accented English—but its effects were enormous. Because spelling had been largely fixed by printing before the shift was complete, written words such as knife and through preserve pronunciations that no longer exist.

Renaissance Expansion

The English Renaissance of the sixteenth and seventeenth centuries unleashed another flood of new vocabulary, much of it borrowed from Latin and Greek. Scholars and writers introduced thousands of words connected to science, philosophy, and literature, including democracy, encyclopedia, atmosphere, thermometer, criticism, and educate.

Critics derided the new coinages as “inkhorn terms”—pretentious, unnecessary words invented by scholars dipping their quills in inkhorns. Some of the words under attack, like “perpetrate” and “contemplate,” survived; others, like “ingent” (enormous), did not.

Two towering cultural works further shaped English during this era: Shakespeare’s plays and the King James Bible (1611). Shakespeare popularized countless words and expressions—among them assassination, lonely, eventful, and phrases like “break the ice” and “wild goose chase.” The King James Bible, widely read for centuries, left deep marks on English rhythm and idiom.

Dictionaries and Standardization

By the eighteenth century, many writers wanted to standardize and regulate English. The most influential effort was Samuel Johnson’s Dictionary of the English Language (1755), which became the dominant reference work of its era.

In the United States, Noah Webster’s American Dictionary of the English Language (1828) promoted simplified spellings such as color instead of colour and center instead of centre. Webster viewed spelling reform as part of America’s broader cultural independence from Britain.

English Goes Global

From the seventeenth through the early twentieth centuries, the British Empire spread English across the globe. Along the way, the language absorbed vocabulary from many other languages. Hindi contributed words such as jungle and shampoo, Arabic added algebra and alcohol, and Malay gave English bamboo and ketchup.

As English took root in different regions, new varieties emerged—American, Australian, Canadian, Indian, Nigerian, Singaporean, and many others. Linguists today increasingly recognize these as legitimate forms of English rather than deviations from a single standard.

English in the Digital Age

In the twentieth and twenty-first centuries, mass media and digital communication have accelerated linguistic change. Radio, film, television, and the internet spread slang, accents, and new expressions around the world with unprecedented speed.

English continues to absorb new words from science, technology, business, and online culture. Brand names become verbs; internet slang becomes everyday speech. Today more than a billion people speak English as a first or second language, making it the most widely used language in human history.

A Language Still Evolving

The history of English reminds us that language is not a fixed monument but a living system shaped by human interaction. Its vocabulary is like an archaeological site, where almost every common word carries traces of earlier eras.

English has never been “pure,” and attempts to purify it have always failed. Its strength lies in its openness—its ability to borrow, adapt, and reinvent itself. From the heroic poetry of Beowulf to Shakespeare’s theater, from the King James Bible to the language of the internet, English continues to grow through the voices of those who use it.

And if history is any guide, the English spoken a few centuries from now will sound just as surprising to us as Chaucer’s language once did.

Illustration generated by author using ChatGPT.

Sources

Baugh, Albert C. and Thomas Cable. A History of the English Language (6th edition). Routledge, 2012. This remains the standard academic textbook on the subject and covers every period and influence discussed above.

Crystal, David. The Cambridge Encyclopedia of the English Language (3rd edition). Cambridge University Press, 2019. An accessible and richly illustrated reference covering the structure and history of English.

McCrum, Robert, Robert MacNeil, and William Cran. The Story of English (3rd revised edition). Penguin, 2003. A popular history that accompanied the PBS television series, excellent for general readers.

Mugglestone, Lynda (ed.). The Oxford History of English (2nd edition). Oxford University Press, 2012. A collection of essays by specialists covering English from its earliest origins to the present day.

Bede, The Venerable. Ecclesiastical History of the English People. Penguin Classics, 1990 (translated by Leo Sherley-Price). The primary early source on the Anglo-Saxon migrations.

Townend, Matthew. “Contacts and Conflicts: Latin, Norse, and French.” In The Oxford History of English, edited by Lynda Mugglestone, 2012. A detailed treatment of the major external influences on English.

Online resource: The British Library’s “Evolving English” exhibit materials are available at https://www.bl.uk/learning/langlit/evolvingenglish/

Online resource: Durkin, Philip. “Borrowed Words: A History of Loanwords in English.” Oxford University Press, 2014. Summary and excerpts available at https://global.oup.com/academic/product/borrowed-words-9780199574995

The Easter Bunny: A Surprisingly Serious History

How a German hare hopped its way into American Easter tradition

Every Easter morning, children across America hunt for eggs left by a rabbit. It’s a charming ritual—and a deeply strange one, when you stop to think about it. Rabbits don’t lay eggs. They don’t carry baskets. Yet here we are, every spring, maintaining the fiction with great enthusiasm. Where did this tradition come from? The answer turns out to be a lot more interesting than you might expect.

The story starts in Germany. The earliest documented reference to an Easter Hare—called the “Osterhase” in German—appears in 1678, in a medical text by the physician Georg Franck von Franckenau. In the German tradition, the Osterhase was specifically a hare, not a rabbit, and its job was straightforward: deliver colored eggs to well-behaved children. Naughty children got nothing. This moral dimension—gift delivery tied to good behavior—should sound familiar. The Easter Bunny was, in a sense, an early version of Santa Claus.

The tradition crossed the Atlantic in the 1700s, carried by German Protestant immigrants who settled in Pennsylvania. Their children knew the Osterhase (sometimes rendered as “Oschter Haws” in Pennsylvania Dutch dialect) and kept up the custom of leaving out nests—made from caps and bonnets—for the hare to fill with eggs. Over time, the nests became baskets, the simple colored eggs became candy and chocolate, and the moral judgment quietly dropped away. By the 20th century, the Easter Bunny had transformed from selective gift-giver into universal children’s benefactor.

But why eggs at all? Eggs entered the Easter story long before Germany. For ancient Romans, they symbolized new life and fertility, and the custom of giving dyed eggs as spring gifts predates Christianity. The Christian tradition added another layer: during the Lenten fasting period, eggs were a forbidden food. By Easter Sunday, people had a stockpile of eggs and every reason to celebrate—they cooked, decorated, and shared them. The emergence from the shell became a visual metaphor for resurrection, and the symbolism stuck.

Rabbits and hares had their own long history as symbols of fertility and springtime. Some writers have linked the Easter Bunny to an ancient Anglo-Saxon goddess named Eostre—from whose name we may get the word “Easter”—and they claim the hare was her sacred animal. It’s a compelling story. It’s also largely unsupported by evidence. The Oxford Dictionary of English Folklore notes that the only historical source mentioning Eostre is the medieval scholar Bede, and Bede says nothing about hares. The goddess-and-hare connection appears to be modern folklore dressed up as ancient tradition.

What is better documented is that hares held symbolic significance across many early cultures. Neolithic burial sites in Europe include hares interred alongside humans, suggesting ritual importance. Hares are conspicuous breeders—they produce multiple litters each year and nest above ground, making their reproductive activity visible in a way that rabbits’ underground burrows do not. For pre-modern peoples marking the return of spring, the hare was a living advertisement for new life.

The combination of egg symbolism and hare symbolism wasn’t a deliberate design decision by any single culture or institution. It was a gradual collision—two powerful images of renewal fusing together over centuries of seasonal celebration. The church absorbed local spring customs rather than eliminating them, allowing pagan associations with fertility and rebirth to persist beneath a Christian overlay. The result is the hybrid tradition we have today.

Today’s Easter Bunny is genuinely a global figure, though not always a rabbit. In Australia, the role is played by the Easter Bilby, an endangered marsupial that conservationists have promoted as a local alternative since the 1990s. Switzerland has an Easter Cuckoo. Parts of Germany have an Easter Fox. Each region adapted the basic concept of a spring gift-bringer to fit its own wildlife and folklore.

The commercial Easter Bunny we know—the chocolate molded figure, the pastel basket, the branded plush toy—is largely a product of the late 19th and 20th centuries, shaped by the same forces that turned Saint Nicholas into Santa Claus. Candy manufacturers, greeting card companies, and department stores found in Easter a spring counterpart to the Christmas retail season, and the Easter Bunny was the obvious mascot.

None of that diminishes what the tradition actually does. The Easter Bunny survived precisely because its meaning kept evolving. It began as a moral enforcer in 17th-century Germany, became a community ritual for immigrant families in Pennsylvania, and eventually became a child’s-eye-view celebration of spring available to secular and religious families alike. The rabbit never needed to make logical sense. It only needed to mark the moment the world turns green again—and every civilization, it seems, finds a way to celebrate that.

Illustration generated by author using ChatGPT.

Sources:

  • Bede, De Temporum Ratione (8th century)
    https://sourcebooks.fordham.edu/basis/bede-reckoning.asp
  • Encyclopaedia Britannica — Easter holiday origins
    https://www.britannica.com/topic/Easter-holiday
  • Catholic Encyclopedia — Lent and fasting traditions
    https://www.newadvent.org/cathen/09152a.htm
  • Smithsonian Magazine — History of Easter Eggs
    https://www.smithsonianmag.com/arts-culture/the-history-of-the-easter-egg-180971982/
  • History.com — Easter Symbols and Traditions
    https://www.history.com/topics/holidays/easter-symbols
  • Library of Congress — Easter traditions in early America
    https://blogs.loc.gov/folklife/2016/03/easter-on-the-farm/
  • National Geographic — Where Did the Easter Bunny Come From?
    https://www.nationalgeographic.com/history/article/easter-bunny-origins
  • American Folklife Center, Library of Congress
    https://www.loc.gov/folklife/
  • National Confectioners Association — Easter candy statistics
    https://www.nationalconfectioners.org/blog/seasonal-easter-candy-data/
  • Smithsonian — How holidays became commercial traditions
    https://www.smithsonianmag.com/history/the-surprising-history-of-holiday-shopping-180964949/
  • Oxford Companion to the Year — Ronald Hutton
    https://global.oup.com/academic/product/the-stations-of-the-sun-9780192854483
  • University of Pennsylvania Religious Studies overview of seasonal festivals
    https://www.penn.museum/sites/expedition/easter/

The Marble Statue Problem: Why Half the Story Is No Story at All

A Commentary on Selective American History

There is a version of American history that looks spectacular. Founding Fathers on horseback, industrialists building steel empires from nothing, pioneers pushing west into open lands. It is the kind of history that gets carved into marble, hoisted onto pedestals, and taught as national mythology. Clean. Inspiring. Incomplete. And right now, there is a visible push by some politicians, curriculum reformers, and commentators to make that marble-statue version the only version — to scrub away what one American Historical Association report called the “inconvenient” truths that complicate the picture. What we lose in that scrubbing is not just accuracy. We lose the full human story of this country, and with it, the lessons that might be useful today.

The selective telling is not new, but its current form has new energy. In recent years, legislation has been introduced across multiple states to restrict how teachers discuss slavery, Indigenous displacement, immigration history, and the treatment of women and the poor. The argument is usually dressed up as national unity and pride. But the practical effect is something else: a history curriculum where triumph and innovation are permissible but suffering and exploitation are edited out.

Historians surveying American teachers in 2024 found this impulse reflected in the classroom as well—students arriving with what teachers described as a “marble statues” version of history absorbed from earlier grades, one that freezes the Founders and other heroes in idealized civic memory, stripped of contradiction. The pitch is usually framed as morale: kids need pride and self-esteem, not “division.” But the practical effect is a kind of historical editing that turns real people—enslaved Americans, Native communities, women, immigrants, and the poor—into background scenery rather than participants with agency, suffering, and claims on the national memory.

You can see the argument playing out in education policy and curriculum fights. The “patriotic education” push associated with the federal 1776 Commission is a clear example: it cast some approaches to teaching slavery and racism as inherently “anti-American,” and it encouraged a narrative that stresses national ideals while softening the lived realities that contradicted those ideals. 

Historians’ organizations have answered back that this kind of narrowing doesn’t create unity so much as it creates amnesia. At the state level, controversies over how to describe or contextualize slavery—down to euphemisms and selective framing—keep resurfacing, because controlling the vocabulary controls the moral takeaway. Florida’s education standards went so far as to compare slavery to job training.

The tension between celebratory and critical history also appears in how we interpret national symbols. The Statue of Liberty, now widely read as a welcoming beacon for immigrants, was originally conceived in significant part as a commemoration of the end of slavery in the United States and of the nation’s centennial. Over time, its antislavery meaning was overshadowed by a more comfortable story about voluntary immigration and opportunity as official imagery and public campaigns recast the statue to fit new national needs. This shift did not merely “add” an interpretation; it obscured the connection between American liberty and Black emancipation, pushing aside the reality that millions arrived in chains rather than by choice.

The deeper problem isn’t that Americans disagree about the past—healthy societies argue about meaning all the time. The problem is when disagreement becomes a one-way ratchet: complexity gets labeled “bias,” and only a feel-good storyline qualifies as “neutral.” That’s not neutral. That’s a choice to privilege certain experiences as representative and treat others as “inconvenient.”

Nowhere does this distortion show up more clearly than in how Americans tend to celebrate the industrialists of the late 19th and early 20th centuries — the Gilded Age titans who built railroads, steel mills, and oil empires. Andrew Carnegie, John D. Rockefeller, J.P. Morgan, Cornelius Vanderbilt: these men are frequently held up as models of American ambition and ingenuity, visionaries who transformed a post-Civil War nation into the world’s dominant industrial power. And they did do that. But the marble-statue version stops there, and stopping there is where the dishonesty begins.

Look at what powered that industrial machine: coal. And look at who powered coal. The men — and children — who went underground every day to dig it out of the earth under conditions that were, by any modern standard, a form of institutionalized violence. Between 1880 and 1923, more than 70,000 coal miners died on the job in the United States. That is not a rounding error; it is a small city’s worth of human lives, consumed by an industry that knew the dangers and chose profits over protection. Cave-ins, gas explosions, machinery accidents, and the slow suffocation of black lung took miners in ones and twos on ordinary days, and in mass casualties during what miners grimly called “explosion season” — when dry winter air made methane and coal dust especially volatile. Three major mine disasters in the first decade of the 1900s killed 201, 362, and 239 miners respectively, the latter two occurring within two weeks of each other.

And those were the adults. In the anthracite coal fields of Pennsylvania alone, an estimated 20,000 boys were working as “breaker boys” in 1880 — children as young as eight years old, perched above chutes and conveyor belts for ten hours a day, six days a week, picking slate and impurities out of rushing coal with bare hands. The coal dust was so thick at times it obscured their view. Photographer Lewis Hine documented these children in the early 1900s specifically because he understood that seeing them — their coal-blackened faces, their missing fingers, their flat eyes — was the only way to make comfortable Americans confront the total cost of the industrial miracle. Pennsylvania passed a law in 1885 banning children under twelve from working in coal breakers. The law was routinely ignored; employers forged age documents and desperate families went along with it because the wages, however meager, kept families from starving.

Coal mining is a representative case study because the work was both essential and punishing, and because the labor conflicts were not metaphorical—they were sometimes literally armed. In the coalfields, many miners lived in company towns where the company controlled the housing and the local economy. Some workers were paid in “scrip” redeemable only at the company store, a system that locked families into dependency and debt. When union organizing surged, the backlash could be violent. West Virginia’s Mine Wars culminated in the Battle of Blair Mountain in 1921—widely described as the largest labor uprising in U.S. history—where thousands of miners confronted company-aligned forces and state power. The mine owners deployed heavy machine guns and hired private pilots to drop aerial bombs on the miners.

If you zoom out, this pattern wasn’t limited to coal. The Triangle Shirtwaist Factory fire in 1911 became infamous partly because locked doors and poor safety practices trapped workers—mostly young immigrant women—leading to 146 deaths in minutes. 

When workers tried to organize for better pay and safer conditions, the response from the industrialists and their allies was not negotiation. It was force. Henry Clay Frick, chairman at Carnegie Steel, cut worker wages in half while increasing shifts to twelve hours, then hired the Pinkerton Detective Agency — effectively a private army — to break the strike that followed at Homestead, PA in 1892. During the Great Railroad Strike of 1877, when workers walked off the job across the country, state militias were called in. In Maryland, militia fired into a crowd of strikers, killing eleven. In Pittsburgh, twenty more were killed with bayonets and rifle fire. A railroad executive of the era, asked about hungry striking workers, reportedly suggested they be given “a rifle diet for a few days” to see how they liked it. Throughout this period the federal government largely sided with capital against labor.

This is the part of the story that the marble-statue version leaves out — and not because it is marginal. The labor movement that emerged from these battles shaped virtually every protection American workers have today: the eight-hour workday, child labor laws, workplace safety regulations, the right to organize. These were not gifts handed down by generous industrialists. They were won through strikes, suffering, and in some cases, death. Ignoring that history does not honor the industrialists. It dishonors the workers.

The same pattern runs through every thread of American history that is currently under pressure. The story of westward expansion is incomplete without the story of Native displacement and the deliberate destruction of Indigenous cultures. The story of American agriculture is incomplete without the story of enslaved labor and the systems of racial control that followed emancipation. The story of American prosperity is incomplete without the story of immigrant communities channeled into the most dangerous, lowest-paid work and then told to be grateful for the opportunity. Women’s history, for most of American history, was not considered history at all. In each case, leaving out the difficult chapter does not produce a cleaner story. It produces a false one.

The argument for the marble-statue version is usually that complexity is demoralizing—that children need heroes, that citizens need pride, that a nation cannot function if it is constantly relitigating its worst moments. There is something in that concern worth taking seriously. History taught purely as a catalog of grievances is not good history either. But the answer to that problem is not to swap one distortion for another. Good history holds both: the genuine achievement and the genuine cost. Mark Twain understood this when he coined “The Gilded Age”—a title that literally means a thin layer of gold covering something much cheaper underneath. That phrase has stayed in the American vocabulary for 150 years because it captures something true about how surfaces can deceive.

A country that cannot look honestly at its own history is a country that will keep repeating the parts it refuses to examine. The enslaved deserve to be in the story. Indigenous people deserve to be in the story. Women deserve to be in the story. The breaker boys deserve to be in the story. The miners killed by the thousands deserve to be in the story. The workers shot by militias while asking for a living wage deserve to be in the story. Not because the story should only be about suffering, but because they were there — and because understanding what they faced, and what they fought for, and what they eventually changed, is how the story makes sense.

Illustration generated by author using ChatGPT.

Sources

American Historical Association. “American Lesson Plan: Curricular Content.” 2024.
https://www.historians.org/teaching-learning/k-12-education/american-lesson-plan/curricular-content/

Brewminate. “Replaceable Lives and Labor Abuse in the Gilded Age: Labor Exploitation and the Human Cost in America’s Gilded Age.” 2026.
https://brewminate.com/replaceable-lives-and-labor-abuse-in-the-gilded-age/

Bureau of Labor Statistics. “History of Child Labor in the United States, Part 1.” 2017.
https://www.bls.gov/opub/mlr/2017/article/history-of-child-labor-in-the-united-states-part-1.htm

Energy History Project, Yale University. “Coal Mining and Labor Conflict.”
https://energyhistory.yale.edu/coal-mining-and-labor-conflict/

Hannah-Jones, Nikole, et al. “A Brief History of Slavery That You Didn’t Learn in School.” New York Times Magazine. 2019.
https://www.nytimes.com/interactive/2019/08/14/magazine/slavery-capitalism.html

Investopedia. “The Gilded Age Explained: An Era of Wealth and Inequality.” 2025.
https://www.investopedia.com/terms/g/gilded-age.asp

MLPP Pressbooks. “Gilded Age Labor Conflict.”
https://mlpp.pressbooks.pub/ushistory2/chapter/chapter-1/

Princeton School of Public and International Affairs. “Princeton SPIA Faculty Reflect on America’s Past as 250th Anniversary Approaches.” 2026.
https://spia.princeton.edu/

USA Today. “Millions of Native People Were Enslaved in the Americas. Their Story Is Rarely Told.” 2025.
https://www.usatoday.com/

Wikipedia. “Breaker Boy.”
https://en.wikipedia.org/wiki/Breaker_boy

Wikipedia. “Robber Baron (Industrialist).”
https://en.wikipedia.org/wiki/Robber_baron_(industrialist)

America250 (U.S. Semiquincentennial Commission). “America250: The United States Semiquincentennial.”
https://www.america250.org/

Bunk History (citing Washington Post reporting). “The Statue of Liberty Was Created to Celebrate Freed Slaves, Not Immigrants.”
https://www.bunkhistory.org/

Upworthy. “The Statue of Liberty Is a Symbol of Welcoming Immigrants. That’s Not What She Was Originally Meant to Be.” 2026.
https://www.upworthy.com/

Henry Knox vs Joseph Plumb Martin: A Case Study in Officer Privilege After the Revolution

Last week I looked at how poorly Revolutionary War veterans were treated in general. This week I’d like to take a look at a specific example—the contrast between how generals like Henry Knox and common soldiers like Joseph Plumb Martin fared after the Revolutionary War. It perfectly illustrates the class divide I discussed in my previous post. These two men served in the same army, helped win the same independence, and endured similar hardships—although Martin endured far greater hardship. Their post-war experiences couldn’t have been more different—and in a bitter twist, Knox’s prosperity came partly at Martin’s expense.
Knox’s Golden Parachute
Henry Knox entered the war as a Boston bookseller of modest means whose military knowledge was gained from reading rather than formal training. He rose to become Washington’s chief of artillery and a major general. When the war ended, Knox received benefits that set him up for life—or should have.
As an officer who served until the war’s end, Knox received the 1783 commutation payment: five years’ full pay in the form of government securities bearing six percent annual interest. This came after Knox himself helped lead the officer corps in pressuring Congress for payment during the near-mutiny known as the Newburgh Conspiracy in early 1783. In total, 2,480 officers received these commutation certificates.
But Knox’s real windfall came from his marriage and his government connections. His wife Lucy came from a wealthy Loyalist family—her grandfather was Brigadier General Samuel Waldo, who’d gained control of a massive land patent in Maine in the 1730s. When Lucy’s family fled to England, she became the sole heir to approximately 576,000 acres known as the Waldo Patent.
Knox used his position as the first Secretary of War (earning $3,000 annually in 1793) and his wartime connections to expand his land holdings and business ventures. He was able to ensure that his wife’s family lands passed to her rather than being seized by the government, as the holdings of many Loyalists were. Knox was firmly positioned on the creditor side of the equation, and his political connections helped shield him from the harsh economic reality faced by common soldiers.
He also acquired additional property in the Ohio Valley and engaged in extensive land speculation. He ran multiple businesses: timber operations, shipbuilding, brick-making, quarrying, and extensive real estate development.
After retiring from government in 1795, he built Montpelier, a magnificent three-story mansion in Thomaston, Maine, described as having “beauty, symmetry and magnificence” unequaled in Massachusetts. (My wife and I visited a reconstruction of his mansion this past summer and I can personally testify as to how elaborate a home it was.)
Martin’s Broken Promises
Joseph Plumb Martin’s story reflects the experience of the roughly 80,000–90,000 common soldiers who did most of the fighting. Martin enlisted at age 15 in 1776 and served seven years—fighting at Brooklyn, White Plains, and Monmouth, surviving Valley Forge, and digging trenches at Yorktown. He rose from private to sergeant.
When Martin mustered out, he received certificates of indebtedness instead of actual pay—IOUs that depreciated rapidly. Unlike Knox, enlisted men received no pension, no commutation payment, nothing beyond those nearly worthless certificates. Martin, like many veterans, sold his certificates to speculators at a fraction of their face value just to survive.
After teaching briefly in New York, Martin settled in Maine in the early 1790s. Based on the promise of a land bounty from Massachusetts, Martin and other “Liberty Men” each claimed 100 acres in Maine, assuming that Loyalist lands would be confiscated and sold cheaply to the current occupants or, perhaps, even treated as vacant lands they could secure by clearing and improving.
Martin married Lucy Clewley in 1794 and started farming. He’d fought for independence and now just wanted to build a modest life in the belief that the country he had fought for would stand by its promises.
When Former Comrades Became Adversaries
Here’s where the story takes a dark turn. In 1794, Henry Knox—Martin’s former commanding general—asserted legal ownership of Martin’s 100-acre farm. Knox claimed the land was part of the Waldo Patent. Martin and other settlers argued they had the right to farm the land they’d improved, especially as it should be payment for their Revolutionary service.
The dispute dragged on for years, with some veterans even forming a guerrilla group called the “White Indians” who attacked Knox’s surveyors. But Knox had wealth, lawyers, and political connections. In 1797, the legal system upheld Knox’s claim. Martin’s farm was appraised at $170—payable over six years in installments.
To put that in perspective, when Martin finally received a pension in 1818—twenty-one years later—it paid only $96 per year. And to get even that meager pension, Martin had to prove he was destitute. The $170 Knox demanded represented nearly two years of the pension Martin wouldn’t receive for another two decades.
Martin begged Knox to let him keep the land. There’s no evidence Knox even acknowledged his letters. By 1811, Martin had lost more than half his farm. By 1818, when he appeared before the Massachusetts General Court with other veterans seeking their long-promised pensions, he owned nothing.
The Irony of “Fair Treatment”
Knox claimed he treated settlers on his Maine lands fairly, though he used intermediaries to evict those who couldn’t pay rent or whom he considered squatters. The settlers disagreed so strenuously that they once threatened to burn Montpelier to the ground.
The situation’s bitter irony is hard to overstate. Knox had been one of the officers who organized the Society of the Cincinnati in 1783, ostensibly to support widows and orphans of Revolutionary War officers. He’d helped lead the push for officer commutation payments by threatening Congress during the Newburgh affair. Yet when common soldiers like Martin—men who’d literally dug the trenches that won the siege at Yorktown—needed help, Knox showed no mercy.
The Numbers Tell the Story
Let’s compare their situations side by side:
Henry Knox:
  • Officer commutation: five years’ full pay in securities bearing 6% interest
  • Secretary of War salary: $3,000 per year (1793)
  • Land holdings: 576,000+ acres in Maine, plus Ohio Valley properties
  • Housing: three-story mansion with extensive outbuildings
  • Businesses: multiple ventures in timber, ships, bricks, quarrying, and real estate
  • Death: 1806, in debt from failed business ventures but having lived in luxury
Joseph Plumb Martin:
  • Enlisted pay: mostly unpaid certificates sold at a loss to speculators
  • Pension: none until 1818, then $96 per year (had to prove destitution to qualify)
  • Land holdings: started with 100 acres; lost almost all of it to Knox by 1818
  • Housing: small farmhouse, struggling to farm 8 of his original 100 acres
  • Income: subsistence farming, plus modest pay as town clerk
  • Death: 1850, at age 89, having struggled financially his entire post-war life
A Memoir Born of Frustration
In 1830, at age 70, Martin published his memoir anonymously. The full title captured his experience: “A Narrative of Some of the Adventures, Dangers, and Sufferings of a Revolutionary Soldier.” He published it partly to support other veterans fighting for their promised benefits and possibly hoping to earn some money from sales.
The book didn’t sell. It essentially disappeared until a first edition was rediscovered in the 1950s and republished in 1962. Today it’s considered one of the most valuable primary sources we have for understanding what common soldiers experienced during the Revolution. Historians praise it precisely because it’s not written by someone like Washington, Knox, or Greene—it’s the voice of a regular soldier.
When Martin died in 1850, a passing platoon of U.S. Light Infantry stopped at his house and fired a salute to honor the Revolutionary War hero. But that gesture of respect came long after the country should have helped Martin when he needed it.
The Broader Pattern
Knox wasn’t unusual among officers, nor was Martin unusual among enlisted men. This was the pattern: officers with education, connections, and capital leveraged their wartime service into political positions, land grants, and business opportunities. Common soldiers received promises, waited decades for minimal pensions, and often lost what little property they had to the very elites who’d commanded them.
It’s worth noting that Knox’s business ventures eventually failed. He died in debt in 1806, having borrowed extensively to fund his speculations. His widow Lucy had to gradually sell off land to survive. But Knox still lived eleven years in a mansion, engaged in enterprises of his choosing, and died surrounded by family on his comfortable estate. Martin outlived him by forty-four years, spending most of them in poverty.
The story of Knox and Martin isn’t one of villainy versus heroism. Knox was a capable general who genuinely contributed to winning independence. Martin was a dedicated soldier who did the same. But the system they operated within distributed the benefits of that shared victory in profoundly unequal ways, and Knox—whether intentionally or not—used that system to take what little they had from soldiers who’d fought under his command. This was not corruption in the modern sense; it was the predictable outcome of a system that rewarded status, education, and proximity to power. Knox’s experience illustrates a broader truth of the post-Revolutionary period: independence redistributed political sovereignty, but economic security flowed upward, not downward.
When we talk about how Continental Army veterans were treated, this is what it looked like on the ground: the officer who led the charge for officer pensions living in a mansion on 600,000 acres, while the sergeant who dug the trenches at Yorktown lost his 100-acre farm and had to prove he was destitute to get $96 a year, decades too late to matter. This will always be a black mark on American history.
 
Illustrations generated by author using ChatGPT.

Personal note: I spent 12 years on active duty, both as an officer and an enlisted man. I’m proud of my service and I’m proud of the people who have served our country. I do not write this in order to condemn our history. I write it in order to make us aware that we need to always support the common people who contribute vitally to our national success and are seldom recognized.

Sources
Martin, Joseph Plumb. “A Narrative of a Revolutionary Soldier: Some of the Adventures, Dangers and Sufferings of Joseph Plumb Martin”
Originally published anonymously in 1830 at Hallowell, Maine as “A narrative of some of the adventures, dangers, and sufferings of a Revolutionary soldier, interspersed with anecdotes of incidents that occurred within his own observation.” The memoir fell into obscurity until a first edition copy was discovered in the 1950s and donated to Morristown National Historical Park. Republished by Little, Brown in 1962 under the title “Private Yankee Doodle” (edited by George F. Scheer). Current edition published 2001. This firsthand account by a Continental Army private who served seven years provides invaluable insight into the common soldier’s experience during the war and the struggles veterans faced afterward, including Martin’s own land dispute with Henry Knox.  I highly recommend this book to anyone with an interest in ordinary people and their role in history.
 
American Battlefield Trust – The Newburgh Conspiracy
https://www.battlefields.org/learn/articles/newburgh-conspiracy
 
Maine Memory Network – Henry Knox: Land Dealings
https://thomaston.mainememory.net/page/735/display.html
 
World History Encyclopedia – Henry Knox
https://www.worldhistory.org/Henry_Knox/
 
Maine: An Encyclopedia – Knox, Henry
https://maineanencyclopedia.com/knox-henry/
 
American Battlefield Trust – Joseph Plumb Martin: Voice of the Common American Soldier
https://www.battlefields.org/learn/articles/joseph-plumb-martin
 
Wikipedia – Joseph Plumb Martin
https://en.wikipedia.org/wiki/Joseph_Plumb_Martin
 
Note on Additional Context: While these were the primary sources directly used in this article, the discussion also drew on information from my earlier Revolutionary War veterans article about the general treatment of enlisted soldiers, pension systems, and the class disparities in how benefits were distributed after the war.

Black Soldiers on Both Sides: The Complex Story of African Americans in the Revolutionary War

When we picture the American Revolution, we often imagine Continental soldiers in blue coats facing off against British redcoats—but this image leaves out thousands of crucial participants. Between 5,000 and 8,000 Black men fought for the Patriot cause, while an estimated 20,000 joined the British forces. Their stories reveal the complex choices Black Americans faced amid the Revolution’s central paradox: a war waged in the name of “liberty” by a society that held hundreds of thousands of people in bondage.

The irony wasn’t lost on anyone at the time. As Abigail Adams wrote in 1774, “it always appeared a most iniquitous scheme to me to fight ourselves for what we are daily robbing and plundering from those who have as good a right to freedom as we have”.

For most Black participants, the key question was not abstract allegiance to King or Congress but which side offered the clearest path out of bondage. The tension between revolutionary rhetoric and the reality of slavery shaped every decision Black Americans made about which side to support. In practice, enslaved people frequently escaped to British forces, while free Blacks, especially in New England, were more likely (though not exclusively so) to enlist with the Patriots, where they already had tenuous civic footholds.

The British Offer: “Liberty to Slaves”

In November 1775, Virginia’s royal governor Lord Dunmore made a move that sent shockwaves through the colonies. With his military position deteriorating and his forces dwindling, Dunmore issued a proclamation offering freedom to any enslaved person who abandoned a Patriot master and joined British forces. The proclamation declared “all indented servants, Negroes, or others (appertaining to rebels) free, that are able and willing to bear arms”.

The response was immediate. Within a month, an estimated 300 Black men had enlisted in what Dunmore called the “Royal Ethiopian Regiment,” a unit that eventually grew to about 800 men. Their uniforms were emblazoned with the provocative words “Liberty to Slaves.” The name “Ethiopian” wasn’t random—it referenced ancient associations of Ethiopia with wisdom and nobility. These soldiers saw action at the Battle of Kemp’s Landing, where—in a moment rich with symbolic meaning—one previously enslaved soldier captured his former master, militia colonel Joseph Hutchings.

Dunmore’s promise came with devastating costs. The regiment’s only other major battle was the disastrous British defeat at Great Bridge in December 1775. Far worse was the disease that ravaged the Black soldiers’ ranks. As the Virginia Gazette reported in March 1776, “the jail distemper rages with great violence on board Lord Dunmore’s fleet, particularly among the negro forces”. Disease ultimately killed more of Dunmore’s recruits than combat, as was common among all armies of the time. By 1776, Dunmore was forced to flee Virginia, taking only about 300 survivors with him.

The Patriot Response: Reluctant Acceptance

The Continental Army’s relationship with Black soldiers was complicated from the start. Black men fought at Lexington and Concord, and they distinguished themselves at Bunker Hill, where the Black Patriot Salem Poor performed so heroically that fourteen officers petitioned the Massachusetts legislature to recognize his “brave and gallant” service.

But in November 1775, just days after Dunmore’s Proclamation, George Washington—himself a Virginia slaveholder—banned the recruitment of all Black men. The ban couldn’t hold. The British continued recruiting Black soldiers, and Washington faced a simple reality: he desperately needed troops. By early 1778, after the brutal winter at Valley Forge had decimated his forces, Washington grudgingly allowed states to enlist Black soldiers. Rhode Island led the way with legislation that promised immediate freedom to any “able-bodied negro, mulatto, or Indian man slave” who enlisted, with the state compensating slaveholders for their “property”.

The result was the 1st Rhode Island Regiment, which became known as the “Black Regiment.” Of its roughly 225 soldiers, about 140 were Black or Native American men. The regiment fought at the Battle of Rhode Island in August 1778, where they held their position against repeated British and Hessian charges—a performance that earned them, according to Major General John Sullivan, “a proper share of the day’s honors”. They went on to fight at Yorktown, where they stood alongside southern militiamen whose peacetime job had been hunting runaway slaves.

Throughout the Continental Army, Black soldiers generally served in integrated units. One French officer estimated that a quarter of Washington’s army was Black—though historians believe 10 to 15 percent is more accurate. As one historian noted, “In the rest of the Army, the few blacks who served with each company were fully integrated: They fought, drilled, marched, ate and slept alongside their white counterparts.”

Naval service—on both sides—was often more racially integrated than the army. Black men served as sailors, gunners, and marines in the Royal Navy and the Continental Navy. Maritime labor traditions had long been more flexible on race, and skill mattered more than status.

Free Blacks in northern towns could enlist much as white citizens did, sometimes motivated by pay, local patriotism, and the hope that visible service would strengthen claims to equal rights after the war. Enslaved men rarely chose independently: Patriot masters often enlisted them as substitutes to avoid service, while Loyalist masters sometimes allowed or forced them to join British units. In both cases, emancipation promises were unevenly honored.

Some enslavers freed men in advance of service; others promised manumission afterward and reneged; still others simply collected bounties or commutation payments while trying to retain control over Black veterans. On the British side, imperial policy also vacillated: some officers fully supported freedom for Black refugees tied to rebel masters, while others quietly returned runaways to Loyalist owners or exploited them as unpaid labor.

The Promise and the Betrayal

As the war ended, the gulf between British and American treatment of their Black allies became stark. In 1783, as British forces prepared to evacuate New York, General George Washington demanded the return of all formerly enslaved people as “property” under the Treaty of Paris. British commander Sir Guy Carleton refused. Instead, he created the “Book of Negroes”—a ledger documenting about 3,000 Black Loyalists who were granted certificates of freedom and evacuated to Nova Scotia, England, Germany, and British territories.

The Book provides glimpses of individual journeys. Boston King, who had escaped slavery in South Carolina to join the British, was evacuated with his wife Violet to Nova Scotia. Their entry simply notes Violet as a “stout wench”—a reminder that even their liberators viewed them through racist lenses. Harry Washington, who had escaped from George Washington’s Mount Vernon plantation, also reached Nova Scotia and later became a leader in the resettlement to Sierra Leone.

Nova Scotia proved no paradise. Black Loyalists received inferior land—rocky and infertile compared to what white Loyalists received. They faced discrimination, exploitation, and broken promises about land grants. By 1792, nearly 1,200 Black Loyalists—about half of those in Nova Scotia—accepted an offer to resettle in Sierra Leone, where they founded Freetown.

For Black Patriots, the outcome was often worse. While some white soldiers received up to 100 acres of land and military pensions from Congress, Black soldiers who had been promised freedom often received nothing beyond freedom—and some didn’t even get that. As one historian put it, they were “dumped back into civilian society”. In June 1784, thirteen veterans of the Rhode Island Regiment had to hire a lawyer just to petition for their back pay. The state responded with an act that classified them as “paupers, who heretofore were slaves” and ordered towns to provide charity.

Lieutenant Colonel Jeremiah Olney, who commanded the Rhode Island Regiment after Christopher Greene’s death, spent years advocating for his former soldiers—fighting attempts to re-enslave them and supporting their pension claims. Some soldiers, like Jack Sisson, finally received pensions decades later in 1818—forty years after they’d enlisted, and often too late. Many died before seeing any recognition.

Even more cruelly, many Black soldiers who had been promised freedom by their masters were returned to slavery after the war. Some were held for years before their owners finally honored their promises; others remained enslaved permanently, having fought for a freedom they would never experience.

It is plausible that the widespread participation of Black soldiers subtly accelerated Northern emancipation by making slavery harder to justify ideologically, even as Southern resistance hardened.

The Larger Meaning

The American Revolution was the last time the U.S. military would be significantly integrated until President Truman’s Executive Order 9981 in 1948. In 1792, Congress passed legislation limiting military service to “free, able-bodied, white male citizens”—a restriction that would last for generations.

Yet the Revolutionary War period saw more enslaved people gain their freedom than any other time before the Civil War. Historian Gary Nash estimates that between 80,000 and 100,000 enslaved people escaped throughout the thirteen colonies during the war—not all joined the military, but the war created opportunities for flight that many seized.

As historian Edward Countryman notes, the Revolution forced Americans to confront a question that Black Americans had been raising all along: “What does the revolutionary promise of freedom and democracy mean for African Americans?” The white founders failed to answer that question satisfactorily, but the thousands of Black soldiers who fought—on both sides—had already answered it with their lives. They understood that liberty was worth fighting for, even when the people promising it had no intention of extending it to everyone.

Image generated by author using ChatGPT.

Sources

  • “African Americans in the Revolutionary War,” Wikipedia.
  • Museum of the American Revolution, “Black Patriots and Loyalists” and “Black Founders: Black Soldiers and Sailors in the Revolutionary War.”
  • Gilder Lehrman Institute, “African American Patriots in the Revolution.”
  • National Archives blog, “African Americans and the American War for Independence.”
  • Douglas R. Egerton, Death or Liberty: African Americans and Revolutionary America (individual stories on both Patriot and Loyalist sides).
  • Edward Countryman, The American Revolution.
  • Gary B. Nash, The Forgotten Fifth: African Americans in the Age of Revolution.
  • Alan Gilbert, Black Patriots and Loyalists: Fighting for Emancipation in the War for Independence.
  • DAR, Forgotten Patriots: African American and American Indian Patriots in the Revolutionary War: A Guide to Service, Sources, and Studies.
  • NYPL LibGuide, “Black Experience of the American Revolution.”
  • American Battlefield Trust, “10 Facts: Black Patriots in the American Revolution.”
  • Massachusetts Historical Society, “Revolutionary Participation: African Americans in the American Revolution.”
  • Fraunces Tavern Museum, “Enlistment of Freed and Enslaved Blacks in the Continental Army.”
  • American Independence Museum, “African-American Soldiers’ Service During the Revolutionary War.”
  • Encyclopedia Virginia, “Lord Dunmore’s Ethiopian Regiment.”
  • Mount Vernon, “Dunmore’s Proclamation and Black Loyalists” and “The Ethiopian Regiment.”
  • American Battlefield Trust, “Lord Dunmore’s Ethiopian Regiment.”
  • Lord Dunmore’s Proclamation (1775), transcription with context at Gilder Lehrman, Encyclopedia Virginia, and Mount Vernon.
  • “Book of Negroes” (1783 evacuation ledger of Black Loyalists to Nova Scotia; digital copies and discussions via BlackPast and Dictionary of Canadian Biography).
  • Boston King, “Memoirs of the Life of Boston King, a Black Preacher,” Methodist Magazine (1798).
  • World History Encyclopedia, “1st Rhode Island Regiment.”

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as it was a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request for a leave of absence from the presidency of Congress in 1777, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship continued to deteriorate to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. But Washington’s command was anything but universally admired, and as the war dragged on with mixed results, critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and whether his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm. Rather than lobbying Congress or forming a counter-faction, he leaned heavily on reputation and restraint. He continued to communicate respectfully with Congress, emphasizing the army’s needs rather than defending his own position, and he avoided denunciations and public accusations, handling the situation largely behind the scenes. When he learned that Conway had written Gates a letter disparaging his leadership, Washington calmly informed Conway that he was aware of the letter—quoting it verbatim.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court. That method infuriated Adams, who equated such nuance with evasiveness and preferred direct argument, formal memorandums, and hard-edged ultimatums.

Sharing lodgings outside Paris only intensified Adams’s resentment. He watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business, and he complained that nothing would ever get done unless he did it himself. Franklin, for his part, privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.” Their French ally, Foreign Minister Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin and effectively sidelining Adams in formal diplomacy. This deepened Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf.

Yet despite their mutual loss of respect, the two ultimately cooperated, often uneasily, in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris—a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom, at one dinner party calling Burr dangerous and untrustworthy. When Burr read accounts of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to throw away his shot, and he either fired intentionally into the air or simply missed. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South. Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations with France during the Quasi-War, Hamilton wanted open war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

Sources

1. National Archives – Founders Online
https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters
https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams
https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson
https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal
https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel
https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson
https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia
https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers
https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – Founding Brothers: The Revolutionary Generation
https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

The Sugar Act of 1764: The Tax Cut That Sparked a Revolution


Imagine a time when people rose up to protest a tax being lowered. Welcome to the world of the Sugar Act.

The Sugar Act of 1764 stands as one of the most ironic moments in the history of taxation. Britain was actually lowering a tax, yet colonists reacted with a fury that would help spark a revolution. To understand this paradox, we must recognize that the new act represented something far more threatening than any previous British attempt to regulate its American colonies.

The Old System: Benign Neglect

For decades before 1764, Britain had maintained what historians call “salutary neglect” toward its American colonies. The Molasses Act of 1733 had imposed a steep duty of six pence per gallon on foreign molasses imported into the colonies. On paper, this seemed like a significant burden for the rum-distilling industry, which depended heavily on cheap molasses from French and Spanish Caribbean islands. In practice, though, the tax was rarely collected. Colonial merchants either bribed customs officials or simply smuggled the molasses past them. The British government essentially looked the other way, and everyone profited.

This informal arrangement worked because Britain’s primary interest in the colonies was commercial, not fiscal. The Navigation Acts required colonists to ship certain goods only to Britain and to buy manufactured goods from British merchants, which enriched British traders and manufacturers without requiring aggressive tax collection in America. As long as this system funneled wealth toward London, Parliament didn’t care much about collecting relatively small customs duties across the Atlantic.

Everything Changed in 1763

The Seven Years’ War (which Americans call the French and Indian War) changed this comfortable arrangement entirely. Britain won decisively, driving France out of North America and gaining vast new territories. But victory came with a staggering price tag. Britain’s national debt had nearly doubled to £130 million, and annual interest payments alone consumed half the government’s budget. Meanwhile, Britain now needed to maintain 10,000 troops in North America to defend its expanded empire and manage relations with Native American tribes.

Prime Minister George Grenville faced a political problem. British taxpayers, already heavily burdened, were in no mood for additional taxes. The logic seemed obvious: since the colonies had benefited from the war’s outcome and still required military protection, they should help pay for their own defense. Americans paid far lower taxes than their counterparts in Britain—by some estimates, British residents paid 26 times more per capita in taxes than colonists did.

What the Act Actually Did

The Sugar Act (officially the American Revenue Act of 1764) approached colonial taxation differently than anything before it. First, it cut the duty on foreign molasses from six pence to three pence per gallon—a 50% reduction. Grenville calculated, reasonably, that merchants might actually pay a three-pence duty rather than risk getting caught smuggling, whereas the six-pence duty had been so high it encouraged universal evasion.

But the Act did far more than adjust molasses duties. It added or increased duties on foreign textiles, coffee, indigo, and wine imported into the colonies. It tightened regulations around the colonial lumber trade and banned the import of foreign rum entirely. Most significantly, the Act included elaborate provisions designed to strictly enforce these duties for the first time.

The enforcement mechanisms represented the real revolution in British policy. Ship captains now had to post bonds before loading cargo and had to maintain detailed written cargo lists. Naval patrols increased dramatically. Smugglers faced having their ships and cargo seized.

Significantly, the burden of proof was shifted to the accused, who were required to prove their innocence, a reversal of traditional British justice. Most controversially, accused smugglers would be tried in vice-admiralty courts, which had no juries and whose judges received a cut of any fines levied.

The Paradox of the Lower Tax

So why did colonists react so angrily to a tax cut? The answer reveals the fundamental shift in the British-American relationship that the Sugar Act represented.

First, the issue wasn’t the tax rate. It was the certainty of collection. A six-pence tax that no one paid was infinitely preferable to a three-pence tax rigorously enforced. New England’s rum distilling industry, which employed thousands of distillery workers and sailors, depended on cheap molasses from the French West Indies. Even at three pence per gallon, the tax significantly increased operating costs. Many merchants calculated they couldn’t remain profitable if they had to pay it.

Second, and more importantly, colonists recognized that the Act’s stated purpose marked a change in the relationship. Previous trade regulations, even if they involved taxes, were ostensibly about regulating commerce within the empire. The Sugar Act openly stated its purpose was raising revenue—the preamble declared it was “just and necessary that a revenue be raised” in America. This might seem like a technical distinction, but to colonists it mattered enormously. British constitutional theory held that subjects could only be taxed by their own elected representatives. Colonists elected representatives to their own assemblies but sent no representatives to Parliament. Trade regulations fell under Parliament’s legitimate authority to govern imperial commerce, but taxation for revenue was something else entirely.

Third, the enforcement mechanisms offended colonial sensibilities about justice and traditional British rights. The vice-admiralty courts denied jury trials, which colonists viewed as a fundamental right of British subjects. Having to prove your innocence rather than being presumed innocent violated another core principle. Customs officials and judges profiting from convictions created obvious incentives for abuse.

Implementation and Colonial Response

The Act took effect in September 1764, and Grenville paired it with an aggressive enforcement campaign. The Royal Navy assigned 27 ships to patrol American waters. Britain appointed new customs officials and instructed them to actually do their jobs rather than accept bribes. Admiralty courts in Halifax, Nova Scotia, became particularly notorious: colonists had to travel hundreds of miles to defend themselves in a court with no jury, before a judge whose income came from convictions.

Colonists responded immediately. Boston merchants drafted a protest arguing that the act would devastate their trade. They explained that New England’s economy depended on a complex triangular trade: lumber and food sold to the Caribbean in exchange for molasses, molasses distilled into rum, rum shipped to Africa and traded for slaves, and slaves sold to Caribbean plantations for more molasses, repeating the cycle. Taxing molasses would break this chain and impoverish the region.

But the economic arguments quickly evolved into constitutional ones. Lawyer James Otis argued that “taxation without representation is tyranny”—a phrase that would echo through the coming decade. Colonial assemblies began passing resolutions asserting their exclusive right to tax their own constituents. They didn’t deny Parliament’s authority to regulate trade, but they drew a clear line: revenue taxation required representation.

The protests went beyond rhetoric. Colonial merchants organized boycotts of British manufactured goods. Women’s groups pledged to wear homespun cloth rather than buy British textiles. These boycotts caused enough economic pain in Britain that London merchants began lobbying Parliament for relief.

The Road to Revolution

The Sugar Act’s significance extends far beyond its immediate economic impact. It established precedents and patterns that would define the next decade of imperial crisis.

Most fundamentally, it shattered the comfortable arrangement of salutary neglect. Once Britain demonstrated it intended to actively govern and tax the colonies, the relationship could never return to its previous informality. The colonists’ constitutional objections—no taxation without representation, right to jury trials, presumption of innocence—would be repeated with increasing urgency as Parliament passed the Stamp Act (1765), Townshend Acts (1767), and Tea Act (1773).

The Sugar Act also revealed the practical difficulties of governing an empire across 3,000 miles of ocean. The vice-admiralty courts became symbols of distant, unaccountable power. When colonists couldn’t get satisfaction through established legal channels, they increasingly turned to extralegal methods, including committees of correspondence, non-importation agreements, and eventually armed resistance.

Perhaps most importantly, the Sugar Act forced colonists to articulate a political theory that ultimately proved incompatible with continued membership in the British Empire. Once they agreed to the principle that they could only be taxed by their own elected representatives, and that Parliament’s authority over them was limited to trade regulation, the logic led inexorably toward independence. Britain couldn’t accept colonial assemblies as co-equal governing bodies since Parliament claimed supreme authority over all British subjects. The colonists couldn’t accept taxation without representation since they claimed the rights of freeborn Englishmen. These positions couldn’t be reconciled.

The Sugar Act of 1764 represents the point where the British Empire’s century-long success in North America began to unravel. By trying to make the colonies pay a modest share of imperial costs through what seemed like reasonable means, Britain inadvertently set in motion forces that would break the empire apart just twelve years later.

Sources

Mount Vernon Digital Encyclopedia – Sugar Act https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/sugar-act/ Provides overview of the Act’s provisions, economic context, and relationship to British debt from the Seven Years’ War. Includes information on tax burden comparisons between Britain and the colonies.

Britannica – Sugar Act https://www.britannica.com/event/Sugar-Act Covers the specific provisions of the Act, enforcement mechanisms, vice-admiralty courts, and the shift from the Molasses Act of 1733. Useful for technical details of the legislation.

History.com – Sugar Act https://www.history.com/topics/american-revolution/sugar-act Discusses colonial constitutional objections, the “taxation without representation” argument, and the enforcement provisions including burden of proof reversal and jury trial denial.

American Battlefield Trust – Sugar Act https://www.battlefields.org/learn/articles/sugar-act Details colonial response including boycotts, James Otis’s arguments, and the triangular trade system that the Act disrupted.

Additional Recommended Sources

Library of Congress – The Sugar Act https://www.loc.gov/collections/continental-congress-and-constitutional-convention-broadsides/articles-and-essays/continental-congress-broadsides/broadsides-related-to-the-sugar-act/ Primary source collection including contemporary colonial broadsides and protests against the Act.

National Archives – The Sugar Act (Primary Source Text) https://founders.archives.gov/about/Sugar-Act The actual text of the American Revenue Act of 1764, useful for verifying specific provisions and language.

Yale Law School – Avalon Project: Resolutions of the Continental Congress (October 19, 1765) https://avalon.law.yale.edu/18th_century/resolu65.asp Colonial responses to the Sugar and Stamp Acts, showing how the arguments evolved.

Massachusetts Historical Society – James Otis’s Rights of the British Colonies Asserted and Proved (1764) https://www.masshist.org/digitalhistory/revolution/taxation-without-representation Primary source for the “taxation without representation” argument that emerged from Sugar Act opposition.

Colonial Williamsburg Foundation – Sugar Act of 1764 https://www.colonialwilliamsburg.org/learn/deep-dives/sugar-act-1764/ Discusses economic impact on colonial merchants and the rum distilling industry.

Slavery and the Constitutional Convention: The Compromise That Shaped a Nation

When fifty-five delegates gathered in Philadelphia during the sweltering summer of 1787, they faced a challenge that would haunt American politics for the next eight decades. The question wasn’t whether slavery was morally right—many delegates privately acknowledged its evil—but whether a unified nation could exist with slavery as a part of it. That summer, the institution of slavery nearly killed the Constitution before it was born.

The Battle Lines

The convention revealed a stark divide. On one side stood delegates who spoke forcefully against slavery, though they represented a minority voice. Gouverneur Morris of Pennsylvania delivered some of the most scathing condemnations, calling slavery a “nefarious institution” and “the curse of heaven on the states where it prevailed.” According to James Madison’s notes, Morris argued passionately that counting enslaved people for representation would mean that someone “who goes to the Coast of Africa, and in defiance of the most sacred laws of humanity tears away his fellow creatures from their dearest connections & damns them to the most cruel bondages, shall have more votes in a Government instituted for protection of the rights of mankind.”

Luther Martin of Maryland, himself a slaveholder, joined Morris in opposition. He declared the slave trade “inconsistent with the principles of the revolution and dishonorable to the American character.” Even George Mason of Virginia, who owned over 200 enslaved people, denounced slavery at the convention, warning that “every master of slaves is born a petty tyrant” and that it would bring “the judgment of heaven on a country.”

The Southern Coalition

Facing these critics stood delegates from the Deep South—primarily South Carolina and Georgia—who made it abundantly clear that protecting slavery was non-negotiable. The South Carolina delegation was particularly unified and aggressive in defending the institution. All four of their delegates—John Rutledge, Charles Pinckney, Charles Cotesworth Pinckney, and Pierce Butler—owned slaves, and they spoke with one voice.

Charles Cotesworth Pinckney stated bluntly: “South Carolina and Georgia cannot do without slaves.” John Rutledge framed it even more starkly: “The true question at present is, whether the Southern States shall or shall not be parties to the Union.” The message was unmistakable—attempt to restrict slavery, and there would be no Constitution and perhaps no United States.

The Southern states didn’t just defend slavery; they threatened to walk out repeatedly. When debates over the slave trade heated up on August 22, delegates from North Carolina, South Carolina, and Georgia stated they would “never be such fools as to give up” their right to import enslaved Africans.  These weren’t idle threats—they were credible enough to force compromise.

The Three-Fifths Compromise

The central flashpoint came over representation in Congress. The new Constitution would base representation on population, but should enslaved people count? Southern states wanted every enslaved person counted fully, which would dramatically increase their congressional power. Northern states argued that enslaved people—who had no rights and couldn’t vote—shouldn’t count at all.

The three-fifths ratio had actually been debated before. Back in 1783, Congress had considered using it to calculate state tax obligations under the Articles of Confederation, though that proposal failed. James Wilson of Pennsylvania resurrected the idea at the Constitutional Convention, suggesting that representation be based on the free population plus three-fifths of “all other persons”—the euphemism they used to avoid writing the word “slave” in the Constitution.

The compromise passed eight states to two. New Jersey and Delaware are generally identified as the states voting against it; New Hampshire is not listed as taking part in the vote. Rhode Island did not send a delegation to the convention, and by the time of the vote New York no longer had a functioning delegation.

Though the South ultimately accepted the compromise, it wasn’t what they wanted. Southern delegates had pushed to count enslaved people equally with free persons for purposes of representation, while continuing to deny them every other human right. The three-fifths ratio was a reduction from their demands—a limitation on slave-state power, though it still gave them a substantial advantage. With about 93% of the nation’s enslaved population concentrated in just five southern states, this compromise increased the South’s congressional delegation by 42%.
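To make the clause’s arithmetic concrete, here is a hedged illustration; the populations below are round hypothetical numbers chosen for easy math, not census figures. Under the clause, a state’s apportionment population was

\[
P_{\text{apportionment}} = P_{\text{free}} + \tfrac{3}{5} \, P_{\text{enslaved}}
\]

so a hypothetical state with 400,000 free residents and 300,000 enslaved people counted as 400,000 + (3/5 × 300,000) = 580,000 for representation, outweighing a hypothetical free state of 550,000 even though the latter had 150,000 more residents who could actually vote.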

James Madison later recognized the compromise’s significance. He wrote after the convention: “It seems now to be pretty well understood that the real difference of interests lies not between the large and small but between the northern and southern states. The institution of slavery and its consequences form the line of discrimination.”

Could the Constitution Have Happened Without It?

Here’s where I need to speculate, but I’m fairly confident in this assessment: no, the Constitution would not have been ratified without the three-fifths compromise and related concessions on slavery.

The evidence is overwhelming. South Carolina and Georgia delegates stated explicitly and repeatedly that they would not join any union that restricted slavery. Alexander Hamilton himself later acknowledged that “no union could possibly have been formed” without the three-fifths compromise. Even delegates who despised slavery, like Roger Sherman of Connecticut, argued it was “better to let the Southern States import slaves than to part with them.”

The convention negotiated three major slavery compromises, all linked. Beyond the three-fifths clause, they agreed Congress couldn’t ban the international slave trade until 1808, and they included the Fugitive Slave Clause requiring the return of escaped enslaved people even from free states. These deals were struck together on August 29, 1787, in what Madison’s notes reveal was a package negotiation between northern and southern delegates.

Without these compromises, the convention would likely have collapsed. The alternative wouldn’t have been a better Constitution—it would have been no Constitution at all, potentially leaving the thirteen states as separate nations or weak confederations. Whether that would have been preferable is a profound counterfactual question that historians still debate.

The Impact on Early American Politics

The three-fifths compromise didn’t just affect one document—it shaped American politics for decades. Its effects were immediate and substantial.

The most famous early example came in the presidential election of 1800. Thomas Jefferson defeated John Adams in what’s often called the “Revolution of 1800”—the first peaceful transfer of power between opposing political parties. But Jefferson’s victory owed directly to the three-fifths compromise. Virginia’s enslaved population gave the state extra electoral votes that proved decisive. Historian Garry Wills has speculated that without these additional slave-state votes, Jefferson would have lost. Pennsylvania had a free population 10% larger than Virginia’s, yet received 20% fewer electoral votes because Virginia’s numbers were inflated by the compromise.

The impact extended far beyond that single election. Research shows the three-fifths clause changed the outcome of over 55% of legislative votes in the Sixth Congress (1799-1801). The additional southern representatives—about 18 more than their free population warranted—gave the South what became known as the “Slave Power” in Congress.

This power influenced major legislation throughout the antebellum period. The Indian Removal Act of 1830, which forcibly relocated Native Americans to open land for plantation agriculture, passed because of margins provided by these extra southern representatives. The Missouri Compromise, the Kansas-Nebraska Act, and numerous other slavery-related measures bore the fingerprints of this constitutional imbalance.

The compromise also affected Supreme Court appointments and federal patronage. Southern-dominated Congresses ensured pro-slavery justices and policies that protected the institution. The sectional tensions it created led directly to later compromises—the Missouri Compromise of 1820, the Compromise of 1850—each one a temporary bandage on a wound that wouldn’t heal.

By the 1850s, the artificial political power granted to slave states had become intolerable to many northerners. When Abraham Lincoln won the presidency in 1860 without carrying a single southern state, southern political leaders recognized they had lost control of the federal government. Senator Louis Wigfall of Texas complained that non-slaveholding states now controlled Congress and the Electoral College. Eleven southern states ultimately seceded, in large part because they believed the three-fifths compromise no longer protected their interests.

The Bitter Legacy

The framers consciously avoided using the words “slave” or “slavery” in the Constitution, recognizing it would “sully the document.” But the euphemisms fooled no one. They had built slavery into the structure of American government, trading moral principles for political union.

The Civil War finally resolved what the Constitutional Convention had delayed. The Thirteenth Amendment abolished slavery in 1865, but not until 1868 did the Fourteenth Amendment finally strike the three-fifths clause from the Constitution, requiring that representation be based on counting the “whole number of persons” in each state.

Was it worth it? That’s ultimately a question of values. The Constitution created a stronger national government that eventually abolished slavery, but it took 78 years and a war that killed over 600,000 Americans. As Thurgood Marshall noted on the Constitution’s bicentennial, the framers “consented to a document which laid a foundation for the tragic events which were to follow.”

The convention delegates knew what they were doing. They chose union over justice, pragmatism over principle. Whether that choice was necessary, wise, or moral remains one of the most contested questions in American history.

____________________________________________________

Sources

  1. https://www.battlefields.org/learn/articles/slavery-and-constitution
  2. https://en.wikipedia.org/wiki/Luther_Martin
  3. https://schistorynewsletter.substack.com/p/7-october-2024
  4. https://www.americanacorner.com/blog/constitutional-convention-slavery
  5. https://www.nps.gov/articles/000/constitutionalconvention-august22.htm
  6. https://en.wikipedia.org/wiki/Three-fifths_Compromise
  7. https://www.brennancenter.org/our-work/analysis-opinion/electoral-colleges-racist-origins
  8. https://www.gilderlehrman.org/history-resources/teaching-resource/historical-context-constitution-and-slavery
  9. https://www.nps.gov/articles/000/constitutionalconvention-august29.htm
  10. https://www.lwv.org/blog/three-fifths-compromise-and-electoral-college
  11. https://www.aaihs.org/a-compact-for-the-good-of-america-slavery-and-the-three-fifths-compromise-part-ii/

Life Below Deck: Enlisted Sailors in America’s Continental Navy

When the Continental Congress established America’s first navy in October 1775, they faced a daunting challenge: how do you build a fleet from scratch when you’re fighting the world’s most powerful naval force? The Continental Navy peaked at around 3,000 men serving on approximately 30 ships, a tiny force compared to Britain’s massive Royal Navy. But who were these sailors who were willing to risk their lives for a fledgling republic?

Where They Came From

The colonial maritime community had extensive seafaring experience, as much of British trade was carried in American vessels, and North Americans made up a significant portion of the Royal Navy’s seamen. Continental Navy sailors came primarily from port cities along the Atlantic coast, particularly New England communities where maritime trades were a way of life. Many had worked as merchant sailors, fishermen, or privateers before joining.

The naval service was notably diverse for its time, including native-born Americans, British deserters, free and enslaved Black sailors, and European immigrants. Unlike the Continental Army, which had periods of banning Black soldiers or sometimes placing them in segregated regiments, the Continental Navy was mostly integrated. At sea, there was less distinction between free and enslaved sailors, and those held in bondage had opportunities to work toward freedom. This maritime tradition of relative equality distinguished naval service from other Revolutionary War experiences.

Getting Into the Service

Recruiting sailors proved to be one of the Continental Navy’s biggest headaches. Navy boards supervised the appointment of petty officers and the enlistment of seamen, though these duties were chiefly performed by ship commanders or recruiting agents. The first Marine recruiting station was located at Tun Tavern, a bar in Philadelphia.

Enlistment was generally voluntary, though the line between volunteering and impressment—forced service—was sometimes blurred. Recruiting parties would scour port towns seeking able-bodied men, advertising not only pay but also the possibility of capturing British prizes for sale, with proceeds shared among the crew—a powerful incentive.

The problem was competition. Privateering—service aboard private ships licensed by Congress to seize enemy vessels—was far more attractive to sailors because cruises were shorter and pay could be better. With over 2,000 privateers operating during the war, the Continental Navy struggled constantly to maintain adequate crew sizes. Continental captains often found themselves unable to man their ships due to privateers’ superior inducements.

Landsmen, Seamen, and Petty Officers

At the bottom rung of a Navy crew stood the landsman—a recruit with little or no sea experience. Many were farm boys or tradesmen who had never set foot on a ship. Their days were filled with the hardest labor: hauling ropes, scrubbing decks, and learning basic seamanship.

Above them were ordinary seamen, who had some experience afloat, and the more skilled able seamen who knew their way around sails, rigging, and naval gunnery. These sailors formed the backbone of the Continental Navy. Sailors skilled in managing the ship’s rigging were said to “know the ropes.” Without their knowledge of wind, tide, and timber, ships would have been little more than floating platforms.

The most experienced enlisted men were promoted to petty officers. These weren’t commissioned officers but rather specialists and leaders—boatswain’s mates directing rigging crews, gunner’s mates overseeing cannon fire, and carpenters’ mates keeping the wooden hulls afloat. They were the Navy’s “non-commissioned officers,” long before the U.S. Navy had a formal NCO corps.

Most Continental Navy ships also carried detachments of Continental Marines. These enlisted men were soldiers at sea, tasked with keeping order on deck, manning small arms in combat, and leading boarding parties.

What They Wore

Unlike officers, who had prescribed uniforms, enlisted sailors received no standard clothing from the Continental Navy. Due to meager funds and a lack of manufacturing capacity, sailors generally provided their own clothing: pantaloons (often tied at the knee) or knee breeches, a jumper or shirt, a neckerchief, a short-waisted jacket, and a low-crowned hat. Most sailors went barefoot, and a kerchief was worn either as a sweatband or as a simple collar closure. The short trousers served a practical purpose—they didn’t interfere with climbing the ship’s rigging. This lack of uniforms reflected the Continental Navy’s financial struggles, where everything from ships to ammunition took priority over standardized clothing.

Daily Life at Sea

Shipboard duties for enlisted sailors were grueling and dangerous. Landsmen cleaned the deck, helped raise or lower the anchor, worked in the galley, and assisted other crew members. More experienced sailors handled the complex work of managing sails, operating guns during combat, standing watch, and maintaining the vessel. Specialized roles were filled by experienced hands, and most sailors worked long shifts in harsh conditions, often enduring crowded, wet, and unsanitary quarters below deck.

Living conditions were cramped. Sailors lived in close quarters with limited privacy, shared hammocks on the lower decks, and endured monotonous food rations. Meals were simple, based on salted meat, ship’s biscuit, and whatever could be supplemented from local ports or captured prizes. Leisure was rare, and recreation was often limited to singing, storytelling, or gambling. The work was physically demanding and accidents were common—falling from rigging, being crushed by shifting cargo, or drowning were constant risks.

Discipline and Relations with Officers

Discipline in the Continental Navy was deeply influenced by the British Royal Navy and the “ancient common law of the sea.” The Continental Congress issued articles governing naval discipline, empowering officers to maintain strict order and punish infractions including drunkenness, blasphemy, theft, or disobedience. Punishments included wearing a wooden collar, spending time in irons, receiving pay deductions, confinement on bread and water, or, for serious offenses, flogging.

Flogging was often done with a multi-thonged whip known as the cat o’ nine tails. The most common flogging consisted of between 12 and 24 lashes, though mutineers might receive sentences in the hundreds of lashes—often becoming a death sentence.

Even though officers held absolute authority aboard their vessels, the Continental Navy sometimes suffered from severe discipline problems. Some commanders found it impossible to maintain control over squadrons made up of crews recruited from one area and commanded by officers from another. The relationship between officers and enlisted men reflected the social hierarchies of the time, with a clear divide between the educated officer class and working-class sailors. However, the shared dangers of combat and the sea could create bonds that transcended these divisions.

A Brief but Important Legacy

Enlisted sailors of the Continental Navy came from diverse and often hardscrabble backgrounds, shaped by the hard labor and hazards of maritime life. These men, whose names are mostly lost to history, formed the foundation of America’s first navy and contributed profoundly—through sacrifice and service—to the establishment of American independence.

Of approximately 65 vessels that served in the Continental Navy, only 11 survived the war, and by 1785 Congress had disbanded the Navy and sold the remaining ships. Despite its short existence and limited impact on the war’s outcome, the sailors of the Continental Navy created a foundation for American naval tradition and provided trained seamen who would serve in future conflicts.


Personal note: The Grumpy Doc proudly served as an enlisted sailor in the U.S. Navy from 1967 to 1974.
