Grumpy opinions about everything.


Grumpy opinions about American history

Henry Knox vs Joseph Plumb Martin: A Case Study in Officer Privilege After the Revolution

Last week I looked at how poorly Revolutionary War veterans were treated in general. This week I’d like to take a look at a specific example—the contrast between how generals like Henry Knox and common soldiers like Joseph Plumb Martin fared after the Revolutionary War. It perfectly illustrates the class divide I discussed in my previous post. These two men served in the same army, helped win the same independence, and endured similar hardships—although Martin endured far greater hardship. Their post-war experiences couldn’t have been more different—and in a bitter twist, Knox’s prosperity came partly at Martin’s expense.
Knox’s Golden Parachute
Henry Knox entered the war as a Boston bookseller of modest means whose military knowledge was gained from reading rather than formal training. He rose to become Washington’s chief of artillery and a major general. When the war ended, Knox received benefits that set him up for life—or should have.
As an officer who served until the war’s end, Knox received the 1783 commutation payment: five years’ full pay in the form of government securities bearing six percent annual interest. This came after Knox himself helped lead the officer corps in pressuring Congress for payment during the near-mutiny known as the Newburgh Conspiracy in early 1783. In total, 2,480 officers received these commutation certificates.
But Knox’s real windfall came from his marriage and his government connections. His wife Lucy came from a wealthy Loyalist family—her grandfather was Brigadier General Samuel Waldo, who’d gained control of a massive land patent in Maine in the 1730s. When Lucy’s family fled to England, she became the sole heir to approximately 576,000 acres known as the Waldo Patent.
Knox used his position as the first Secretary of War (earning $3,000 annually in 1793) and his wartime connections to expand his land holdings and business ventures. He was able to ensure that his wife’s family lands passed to her rather than being seized by the government, as the holdings of many Loyalists were. Knox was firmly positioned on the creditor side of the equation, and his political connections helped shield him from the harsh economic reality faced by common soldiers.
He also acquired additional property in the Ohio Valley and engaged in extensive land speculation. He ran multiple businesses: timber operations, shipbuilding, brick-making, quarrying, and extensive real estate development.
After retiring from government in 1795, he built Montpelier, a magnificent three-story mansion in Thomaston, Maine, described as having “beauty, symmetry and magnificence” unequaled in Massachusetts. (My wife and I visited a reconstruction of his mansion this past summer and I can personally testify as to how elaborate a home it was.)
Martin’s Broken Promises
Joseph Plumb Martin’s story reflects the experience of the roughly 80,000-90,000 common soldiers who did most of the fighting. Martin enlisted at age 15 in 1776 and served seven years—fighting at Brooklyn, White Plains, Monmouth, surviving Valley Forge, and digging trenches at Yorktown. He rose from private to sergeant.
When Martin mustered out, he received certificates of indebtedness instead of actual pay—IOUs that depreciated rapidly. Unlike Knox, enlisted men received no pension, no commutation payment, nothing beyond those nearly worthless certificates. Martin, like many veterans, sold his certificates to speculators at a fraction of their face value just to survive.
After teaching briefly in New York, Martin settled in Maine in the early 1790s.  Based on the promise of a land bounty from Massachusetts, Martin and other “Liberty Men” each claimed 100 acres in Maine, assuming that Loyalist lands would be confiscated and sold cheaply to the current occupants or, perhaps, even treated as vacant lands they could secure by clearing and improving.
Martin married Lucy Clewley in 1794 and started farming. He’d fought for independence and now just wanted to build a modest life in the belief that the country he had fought for would stand by its promises.
When Former Comrades Became Adversaries
Here’s where the story takes a dark turn. In 1794, Henry Knox—Martin’s former commanding general—asserted legal ownership of Martin’s 100-acre farm. Knox claimed the land was part of the Waldo Patent. Martin and other settlers argued they had the right to farm the land they’d improved, which they saw as fair payment for their Revolutionary service.
The dispute dragged on for years, with some veterans even forming a guerrilla group called the “White Indians” who attacked Knox’s surveyors. But Knox had wealth, lawyers, and political connections. In 1797, the legal system upheld Knox’s claim. Martin’s farm was appraised at $170—payable over six years in installments.
To put that in perspective, when Martin finally received a pension in 1818—twenty-one years later—it paid only $96 per year. And to get even that meager pension, Martin had to prove he was destitute. The $170 Knox demanded represented nearly two years of the pension Martin wouldn’t receive for another two decades.
Martin begged Knox to let him keep the land. There’s no evidence Knox even acknowledged his letters. By 1811, Martin had lost more than half his farm. By 1818, when he appeared before the Massachusetts General Court with other veterans seeking their long-promised pensions, he owned nothing.
The Irony of “Fair Treatment”
Knox claimed he treated settlers on his Maine lands fairly, though he used intermediaries to evict those who couldn’t pay rent or whom he considered to be squatters. The settlers disagreed so strenuously that they once threatened to burn Montpelier to the ground.
The situation’s bitter irony is hard to overstate. Knox had been one of the officers who organized the Society of the Cincinnati in 1783, ostensibly to support widows and orphans of Revolutionary War officers. He’d helped lead the push for officer commutation payments by threatening Congress during the Newburgh affair. Yet when common soldiers like Martin—men who’d literally dug the trenches that won the siege at Yorktown—needed help, Knox showed no mercy.
The Numbers Tell the Story
Let’s compare their situations side by side:
Henry Knox:
∙ Officer commutation: Five years’ full pay in securities with 6% interest
∙ Secretary of War salary: $3,000 per year (1793)
∙ Land holdings: 576,000+ acres in Maine, plus Ohio Valley properties
∙ Housing: Three-story mansion with extensive outbuildings
∙ Businesses: Multiple ventures in timber, ships, bricks, quarrying, real estate
∙ Death: 1806, in debt from failed business ventures but having lived in luxury
Joseph Plumb Martin:
∙ Enlisted pay: Mostly unpaid certificates sold at a loss to speculators
∙ Pension: None until 1818, then $96 per year (had to be destitute to qualify)
∙ Land holdings: Started with 100 acres, lost almost all of it to Knox by 1818
∙ Housing: Small farmhouse, struggling to farm 8 of his original 100 acres
∙ Income: Subsistence farming, served as town clerk for modest pay
∙ Death: 1850 at age 89, having struggled financially his entire post-war life
A Memoir Born of Frustration
In 1830, at age 70, Martin published his memoir anonymously. The full title captured his experience: “A Narrative of Some of the Adventures, Dangers, and Sufferings of a Revolutionary Soldier.” He published it partly to support other veterans fighting for their promised benefits, and perhaps partly in the hope of earning some money from sales.
The book didn’t sell. It essentially disappeared until a first edition was rediscovered in the 1950s and republished in 1962. Today it’s considered one of the most valuable primary sources we have for understanding what common soldiers experienced during the Revolution. Historians praise it precisely because it’s not written by someone like Washington, Knox, or Greene—it’s the voice of a regular soldier.
When Martin died in 1850, a passing platoon of U.S. Light Infantry stopped at his house and fired a salute to honor the Revolutionary War hero. But that gesture of respect came long after the country should have helped Martin when he needed it.
The Broader Pattern
Knox wasn’t unusual among officers, nor was Martin unusual among enlisted men. This was the pattern: officers with education, connections, and capital leveraged their wartime service into political positions, land grants, and business opportunities. Common soldiers received promises, waited decades for minimal pensions, and often lost what little property they had to the very elites who’d commanded them.
It’s worth noting that Knox’s business ventures eventually failed. He died in debt in 1806, having borrowed extensively to fund his speculations. His widow Lucy had to gradually sell off land to survive. But Knox still lived eleven years in a mansion, engaged in enterprises of his choosing, and died surrounded by family on his comfortable estate. Martin outlived him by forty-four years, spending most of them in poverty.
The story of Knox and Martin isn’t one of villainy versus heroism. Knox was a capable general who genuinely contributed to winning independence. Martin was a dedicated soldier who did the same. But the system they operated within distributed the benefits of that shared victory in profoundly unequal ways, and Knox—whether intentionally or not—used that system to take what little the soldiers who’d fought under his command still had. This was not corruption in the modern sense; it was the predictable outcome of a system that rewarded status, education, and proximity to power. Knox’s experience illustrates a broader truth of the post-Revolutionary period: independence redistributed political sovereignty, but economic security flowed upward, not downward.
When we talk about how Continental Army veterans were treated, this is what it looked like on the ground: the officer who led the charge for officer pensions living in a mansion on 600,000 acres, while the sergeant who dug the trenches at Yorktown lost his 100-acre farm and had to prove he was destitute to get $96 a year, decades too late to matter. This will always be a black mark on American history.
 
Illustrations generated by author using ChatGPT.

Personal note: I spent 12 years on active duty, both as an officer and an enlisted man. I’m proud of my service and I’m proud of the people who have served our country. I do not write this in order to condemn our history. I write it in order to make us aware that we need to always support the common people who contribute vitally to our national success and are seldom recognized.

Sources
Martin, Joseph Plumb. “A Narrative of a Revolutionary Soldier: Some of the Adventures, Dangers and Sufferings of Joseph Plumb Martin”
Originally published anonymously in 1830 at Hallowell, Maine as “A narrative of some of the adventures, dangers, and sufferings of a Revolutionary soldier, interspersed with anecdotes of incidents that occurred within his own observation.” The memoir fell into obscurity until a first edition copy was discovered in the 1950s and donated to Morristown National Historical Park. Republished by Little, Brown in 1962 under the title “Private Yankee Doodle” (edited by George F. Scheer). Current edition published 2001. This firsthand account by a Continental Army private who served seven years provides invaluable insight into the common soldier’s experience during the war and the struggles veterans faced afterward, including Martin’s own land dispute with Henry Knox.  I highly recommend this book to anyone with an interest in ordinary people and their role in history.
 
American Battlefield Trust – The Newburgh Conspiracy
https://www.battlefields.org/learn/articles/newburgh-conspiracy
 
Maine Memory Network – Henry Knox: Land Dealings
https://thomaston.mainememory.net/page/735/display.html
 
World History Encyclopedia – Henry Knox
https://www.worldhistory.org/Henry_Knox/
 
Maine: An Encyclopedia – Knox, Henry
https://maineanencyclopedia.com/knox-henry/
 
American Battlefield Trust – Joseph Plumb Martin: Voice of the Common American Soldier
https://www.battlefields.org/learn/articles/joseph-plumb-martin
 
Wikipedia – Joseph Plumb Martin
https://en.wikipedia.org/wiki/Joseph_Plumb_Martin
 
Note on Additional Context: While these were the primary sources directly used in this article, the discussion also drew on information from my earlier Revolutionary War veterans article about the general treatment of enlisted soldiers, pension systems, and the class disparities in how benefits were distributed after the war.

Understanding Critical Race Theory: What It Is—and Why It Divides America

When I first started hearing debates about Critical Race Theory, I thought: these people can’t possibly be talking about the same thing. There seemed to be no common ground—even the words they were using seemed to have different meanings.

Critical Race Theory (CRT) has become one of the most contested intellectual concepts in contemporary American culture. Originally developed in law schools during the 1970s and 1980s, CRT has evolved into a broad analytical method of examining how race and racism operate in society. Understanding its origins, core principles, and the political debates surrounding it requires examining both its academic foundations and its journey into public consciousness.

Origins and Early Development

Legal scholars who were dissatisfied with the slow pace of racial progress following the Civil Rights Movement laid the groundwork for CRT. The early figures included Derrick Bell, often considered the father of CRT, along with Alan Freeman, Richard Delgado, Kimberlé Crenshaw, and Cheryl Harris. These scholars were frustrated that despite landmark legislation like the Civil Rights Act of 1964 and the Voting Rights Act of 1965, racial inequality persisted across American institutions.

The intellectual roots of CRT can be traced to Critical Legal Studies, a movement that challenged traditional legal scholarship’s claims of objectivity and neutrality. However, CRT scholars felt that Critical Legal Studies failed to adequately address race and racism. They drew inspiration from various sources, including the work of civil rights lawyers like Charles Hamilton Houston, sociological insights about institutional racism, and postmodern critiques of knowledge and power.

Derrick Bell’s groundbreaking work in the 1970s laid a crucial foundation. His “interest convergence” theory, presented in his analysis of Brown v. Board of Education, argued that advances in civil rights occur only when they align with white interests. This insight became central to CRT’s understanding of how racial progress unfolds in American society.

Core Elements and Principles

Critical Race Theory encompasses several key tenets that distinguish it from other approaches to studying race and racism.

First, CRT posits that race is not biologically real; it is a human invention used to justify unequal treatment. It also holds that racism is not merely individual prejudice, but a systemic feature of American society embedded in legal, political, and social institutions. This “structural racism” perspective emphasizes how seemingly neutral policies and practices can perpetuate racial inequality.

Second, CRT challenges the traditional civil rights approach that emphasizes color-blindness and incremental reform. Instead, CRT scholars argue that color-blind approaches often mask and perpetuate racial inequities. They advocate for race-conscious policies and a more aggressive approach to dismantling systemic racism.

Third, CRT emphasizes the importance of lived experience in the form of storytelling and narrative. Scholars use personal narratives, historical accounts, and counter-stories to challenge dominant narratives about race and racism. This methodological approach reflects CRT’s belief that experiential knowledge from communities of color provides crucial insights often overlooked by traditional scholarship.

Fourth, CRT introduces the concept of intersectionality, a term coined by legal scholar Kimberlé Crenshaw. This framework examines how multiple forms of identity and oppression—including race, gender, class, and sexuality—intersect and compound each other’s effects.

Finally, CRT is explicitly activist-oriented with a goal of creating new norms of interracial interaction. Unlike purely descriptive academic theories, CRT aims to understand racism in order to eliminate it. This commitment to social transformation distinguishes CRT from more traditional academic approaches.

Evolution and Expansion

Since its origins in legal studies, CRT has expanded into numerous disciplines including education, sociology, political science, and ethnic studies. In education, scholars like Gloria Ladson-Billings and William Tate applied CRT frameworks to understand racial disparities in schooling. This educational application of CRT examines how school policies, curriculum, and practices contribute to achievement gaps and educational inequality.

Conservative Perspectives

Conservative critics of CRT raise several concerns about the theory and its applications. They argue that CRT’s emphasis on systemic racism is overly deterministic and fails to account for individual differences and the significant progress made in racial equality since the Civil Rights era. Many conservatives contend that CRT promotes a victim mentality that undermines personal responsibility and achievement.

From this perspective, CRT’s race-conscious approach is seen as divisive and potentially counterproductive. Critics argue that emphasizing racial differences rather than common humanity perpetuates division and resentment. They often prefer color-blind approaches that treat all individuals equally regardless of race.

Conservative critics also express concern about CRT’s application in educational settings, arguing that it introduces inappropriate political content into classrooms and may cause students to feel guilt or shame based on their racial identity. Some argue that CRT-influenced curricula amount to indoctrination rather than education.

Additionally, some conservatives view CRT as fundamentally un-American, arguing that its critique of American institutions and emphasis on systemic oppression undermines national unity and patriotism. They contend that CRT presents an overly negative view of American history and society.

Some conservatives go further, calling CRT a form of “anti-American radicalism.” They believe it rejects Enlightenment values—reason, objectivity, and universal rights—in favor of ideology and emotion. Others criticize CRT’s reliance on narrative and lived experience, arguing that it substitutes storytelling for empirical evidence.

Liberal Perspectives

Supporters of CRT argue that it provides essential tools for understanding persistent racial inequalities that other approaches fail to explain adequately. They contend that CRT’s focus on systemic racism accurately describes how racial disparities continue despite formal legal equality.

To them, CRT isn’t about blaming individuals; it’s about recognizing how systems work. Advocates say that color-blind policies often perpetuate inequality because they ignore how race has historically shaped opportunity. They see CRT as empowering marginalized communities to tell their stories and as pushing America closer to its own ideals of justice and equality.

Liberal and progressive thinkers see CRT as a reality check—a necessary tool for understanding and dismantling systemic racism. They argue that laws and policies that seem neutral can still produce racially unequal outcomes—for example, disparities in school funding or redlining in housing (denying loans or insurance based on neighborhood rather than individual qualifications).

From this perspective, CRT’s race-conscious approach is necessary because color-blind policies have proven insufficient to address entrenched racial inequities. Supporters argue that acknowledging and directly confronting racism is more effective than pretending race doesn’t matter.

Liberal defenders of CRT emphasize its scholarly rigor and empirical grounding, arguing that criticism often mischaracterizes or oversimplifies the theory. They point out that CRT is primarily an analytical framework used by scholars and graduate students, not a curriculum taught to elementary school children, as some critics suggest. Progressive educators also note that much of what critics call “CRT in schools” is really teaching about historical facts—slavery, segregation, civil-rights struggles—not law-school theory. They argue that banning CRT is less about protecting students and more about suppressing uncomfortable conversations about race and history.

Supporters also argue that CRT’s emphasis on storytelling and lived experience provides valuable perspectives that have been historically marginalized in academic discourse. They see this as democratizing knowledge production rather than abandoning scholarly standards.

Furthermore, many on the left argue that attacks on CRT represent attempts to silence discussions of racism and maintain the status quo. They view criticism of CRT as part of a broader backlash against racial justice efforts.

Why It Matters

You don’t have to buy every part of CRT to see why it struck a nerve. It forces us to ask uncomfortable but important questions: Why do some inequalities persist even after laws change? How do institutions carry the weight of history?

Whether you agree or disagree with CRT, it’s hard to deny that it has shaped how Americans talk about race. The theory challenges us to look beyond personal prejudice and ask how systems distribute power and privilege. Its critics, in turn, remind us that any theory of justice must preserve individual rights and shared civic values.

The real challenge may be learning to hold both ideas at once: that racism can be systemic, and that individuals should still be treated as individuals. CRT’s greatest value—and its greatest controversy—comes from forcing that tension into the open.

Sources:

JSTOR Daily. “What Is Critical Race Theory?” https://daily.jstor.org/what-is-critical-race-theory/ (Accessed December 3, 2025)

Harvard Law Review Blog. “Derrick Bell’s Interest Convergence and the Permanence of Racism: A Reflection on Resistance.” https://harvardlawreview.org/blog/2020/08/derrick-bells-interest-convergence-and-the-permanence-of-racism-a-reflection-on-resistance/ (March 24, 2023)

Bell, Derrick A., Jr. “Brown v. Board of Education and the Interest-Convergence Dilemma.” Harvard Law Review, Vol. 93, No. 3 (January 1980), pp. 518-533.

Columbia Law School. “Kimberlé Crenshaw on Intersectionality, More than Two Decades Later.” https://www.law.columbia.edu/news/archive/kimberle-crenshaw-intersectionality-more-two-decades-later

Crenshaw, Kimberlé. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics.” 1989.

Britannica. “Richard Delgado | American legal scholar.” https://www.britannica.com/biography/Richard-Delgado

Wikipedia. “Critical Race Theory.” https://en.wikipedia.org/wiki/Critical_race_theory (Updated December 31, 2025)

MTSU First Amendment Encyclopedia. “Critical Race Theory.” https://www.mtsu.edu/first-amendment/article/1254/critical-race-theory (July 10, 2024)

Delgado, Richard and Jean Stefancic. “Critical Race Theory: An Introduction.” New York University Press, 2001 (2nd edition 2012, 3rd edition 2018).

Teachers College Press. “Critical Race Theory in Education.” https://www.tcpress.com/critical-race-theory-in-education-9780807765838

American Bar Association. “A Lesson on Critical Race Theory.” https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/civil-rights-reimagining-policing/a-lesson-on-critical-race-theory/

NAACP Legal Defense and Educational Fund. “What is Critical Race Theory, Anyway? | FAQs.” https://www.naacpldf.org/critical-race-theory-faq/ (May 6, 2025)

The illustration was generated by the author using Midjourney.

Black Soldiers on Both Sides: The Complex Story of African Americans in the Revolutionary War

When we picture the American Revolution, we often imagine Continental soldiers in blue coats facing off against British redcoats—but this image leaves out thousands of crucial participants. Between 5,000 and 8,000 Black men fought for the Patriot cause, while an estimated 20,000 joined the British forces. Their stories reveal the war’s profound contradictions and the complex choices Black Americans faced when white colonists fought for “liberty” while holding hundreds of thousands of people in bondage. Their participation reflected the Revolution’s central paradox: a war waged in the name of liberty within a society deeply dependent on slavery.

The irony wasn’t lost on anyone at the time. As Abigail Adams wrote in 1774, “it always appeared a most iniquitous scheme to me to fight ourselves for what we are daily robbing and plundering from those who have as good a right to freedom as we have.”

For most Black participants, the key question was which side offered the clearest path out of bondage rather than abstract allegiance to King or Congress. The tension between revolutionary rhetoric and the reality of slavery shaped every decision Black Americans made about which side to support. This dynamic meant that enslaved people frequently escaped to British forces, while free Blacks (especially in New England) were more likely, though not exclusively, to enlist with the Patriots, where they already had tenuous civic footholds.

The British Offer: “Liberty to Slaves”

In November 1775, Virginia’s royal governor Lord Dunmore made a move that sent shockwaves through the colonies. With his military position deteriorating and his forces dwindling, Dunmore issued a proclamation offering freedom to any enslaved person who abandoned their Patriot masters and joined British forces. The proclamation declared “all indented servants, Negroes, or others (appertaining to rebels) free, that are able and willing to bear arms.”

The response was immediate. Within a month, an estimated 300 Black men had enlisted in what Dunmore called the “Royal Ethiopian Regiment,” which eventually grew to about 800 men. Their uniforms were emblazoned with the provocative words “Liberty to Slaves.” The name “Ethiopian” wasn’t random—it referenced ancient associations of Ethiopia with wisdom and nobility. These soldiers saw action at the Battle of Kemp’s Landing, where—in a moment rich with symbolic meaning—one previously enslaved soldier captured his former master, militia colonel Joseph Hutchings.

Dunmore’s promise came with devastating costs. The regiment’s only other major battle was the disastrous British defeat at Great Bridge in December 1775. Far worse was the disease that ravaged the Black soldiers’ ranks. As the Virginia Gazette reported in March 1776, “the jail distemper rages with great violence on board Lord Dunmore’s fleet, particularly among the negro forces.” Disease ultimately killed more of Dunmore’s recruits than combat, as was common among all armies of the time. By 1776, Dunmore was forced to flee Virginia, taking only about 300 survivors with him.

The Patriot Response: Reluctant Acceptance

The Continental Army’s relationship with Black soldiers was complicated from the start. Black men fought at Lexington and Concord, and they distinguished themselves at Bunker Hill, where Black patriot Salem Poor performed so heroically that fourteen officers petitioned the Massachusetts legislature to recognize his “brave and gallant” service.

But in November 1775, just days after Dunmore’s Proclamation, George Washington—himself a Virginia slaveholder—banned the recruitment of all Black men. The ban didn’t last long. The British continued recruiting Black soldiers, and Washington faced a simple reality: he desperately needed troops. By early 1778, after the brutal winter at Valley Forge had decimated his forces, Washington grudgingly allowed states to enlist Black soldiers. Rhode Island led the way with legislation that promised immediate freedom to any “able-bodied negro, mulatto, or Indian man slave” who enlisted, with the state compensating slaveholders for their “property.”

The result was the 1st Rhode Island Regiment, which became known as the “Black Regiment.” Of its roughly 225 soldiers, about 140 were Black or Native American men. The regiment fought at the Battle of Rhode Island in August 1778, where they held their position against repeated British and Hessian charges—a performance that earned them, according to Major General John Sullivan, “a proper share of the day’s honors.” They went on to fight at Yorktown, where they stood alongside southern militiamen whose peacetime job had been hunting runaway slaves.

Throughout the Continental Army, Black soldiers generally served in integrated units. One French officer estimated that a quarter of Washington’s army was Black—though historians believe 10 to 15 percent is more accurate. As one historian noted, “In the rest of the Army, the few blacks who served with each company were fully integrated: They fought, drilled, marched, ate and slept alongside their white counterparts.”

Naval service—on both sides—was often more racially integrated than the army. Black men served as sailors, gunners, and marines in the Royal Navy and the Continental Navy. Maritime labor traditions had long been more flexible on race, and skill mattered more than status.

Free Blacks in northern towns could enlist much as white citizens did, sometimes motivated by pay, local patriotism, and the hope that visible service would strengthen claims to equal rights after the war. Enslaved men rarely chose independently; Patriot masters often enlisted them as substitutes to avoid service, while Loyalist masters sometimes allowed or forced them to join British units. In both cases emancipation promises were unevenly honored.

Some enslavers freed men in advance of service, others promised manumission afterward and reneged, while still others simply collected bounties or commutation while trying to retain control over Black veterans. On the British side, imperial policy also vacillated, with some officers fully supporting freedom for Black refugees tied to rebel masters, and others quietly returning runaways to Loyalist owners or exploiting them as unpaid labor.

The Promise and the Betrayal

As the war ended, the gulf between British and American treatment of their Black allies became stark. In 1783, as British forces prepared to evacuate New York, General George Washington demanded the return of all formerly enslaved people as “property” under the Treaty of Paris. British commander Sir Guy Carleton refused. Instead, he created the “Book of Negroes”—a ledger documenting about 3,000 Black Loyalists who were granted certificates of freedom and evacuated to Nova Scotia, England, Germany, and British territories.

The Book provides glimpses of individual journeys. Boston King, who had escaped slavery in South Carolina to join the British, was evacuated with his wife Violet to Nova Scotia. Their entry simply notes Violet as a “stout wench”—a reminder that even their liberators viewed them through racist lenses. Harry Washington, who had escaped from George Washington’s Mount Vernon plantation, also reached Nova Scotia and later became a leader in the resettlement to Sierra Leone.

Nova Scotia proved no paradise. Black Loyalists received inferior land—rocky and infertile compared to what white Loyalists received. They faced discrimination, exploitation, and broken promises about land grants. By 1792, nearly 1,200 Black Loyalists—about half of those in Nova Scotia—accepted an offer to resettle in Sierra Leone, where they founded Freetown.

For Black Patriots, the outcome was often worse. While some white soldiers received up to 100 acres of land and military pensions from Congress, Black soldiers who had been promised freedom often received nothing beyond freedom—and some didn’t even get that. As one historian put it, they were “dumped back into civilian society”. In June 1784, thirteen veterans of the Rhode Island Regiment had to hire a lawyer just to petition for their back pay. The state responded with an act that classified them as “paupers, who heretofore were slaves” and ordered towns to provide charity.

Lieutenant Colonel Jeremiah Olney, who commanded the Rhode Island Regiment after Christopher Greene’s death, spent years advocating for his former soldiers—fighting attempts to re-enslave them and supporting their pension claims. Some soldiers, like Jack Sisson, finally received pensions decades later in 1818—forty years after they’d enlisted, and often too late. Many died before seeing any recognition.

Even more cruelly, many Black soldiers who had been promised freedom by their masters were returned to slavery after the war. Some remained enslaved for a few years until their owners honored their promises; others remained enslaved permanently, having fought for a freedom they would never experience.

It is plausible that the widespread participation of Black soldiers subtly accelerated Northern emancipation by making slavery harder to justify ideologically, even as Southern resistance hardened.

The Larger Meaning

The American Revolution was the last time the U.S. military would be significantly integrated until President Truman’s Executive Order 9981 in 1948. In 1792, Congress passed legislation limiting military service to “free, able-bodied, white male citizens”—a restriction that would last for generations.

Yet the Revolutionary War period saw more enslaved people gain their freedom than any other time before the Civil War. Historian Gary Nash estimates that between 80,000 and 100,000 enslaved people escaped throughout the thirteen colonies during the war—not all joined the military, but the war created opportunities for flight that many seized.

As historian Edward Countryman notes, the Revolution forced Americans to confront a question that Black Americans had been raising all along: “What does the revolutionary promise of freedom and democracy mean for African Americans?” The white founders failed to answer that question satisfactorily, but the thousands of Black soldiers who fought—on both sides—had already answered it with their lives. They understood that liberty was worth fighting for, even when the people promising it had no intention of extending it to everyone.

Image generated by author using ChatGPT.

Sources

  • “African Americans in the Revolutionary War,” Wikipedia.
  • Museum of the American Revolution, “Black Patriots and Loyalists” and “Black Founders: Black Soldiers and Sailors in the Revolutionary War.”
  • Gilder Lehrman Institute, “African American Patriots in the Revolution.”
  • National Archives blog, “African Americans and the American War for Independence.”
  • Douglas R. Egerton, Death or Liberty: African Americans and Revolutionary America (individual stories on both Patriot and Loyalist sides).
  • Edward Countryman, The American Revolution.
  • Gary B. Nash, The Forgotten Fifth: African Americans in the Age of Revolution.
  • Alan Gilbert, Black Patriots and Loyalists: Fighting for Emancipation in the War for Independence.
  • DAR, Forgotten Patriots – African American and American Indian Patriots in the Revolutionary War: A Guide to Service, Sources, and Studies.
  • NYPL LibGuide, “Black Experience of the American Revolution.”
  • American Battlefield Trust, “10 Facts: Black Patriots in the American Revolution.”
  • Massachusetts Historical Society, “Revolutionary Participation: African Americans in the American Revolution.”
  • Fraunces Tavern Museum, “Enlistment of Freed and Enslaved Blacks in the Continental Army.”
  • American Independence Museum, “African-American Soldiers’ Service During the Revolutionary War.”
  • Encyclopedia Virginia, “Lord Dunmore’s Ethiopian Regiment.”
  • Mount Vernon, “Dunmore’s Proclamation and Black Loyalists” and “The Ethiopian Regiment.”
  • American Battlefield Trust, “Lord Dunmore’s Ethiopian Regiment.”
  • Lord Dunmore’s Proclamation (1775), in transcription with context at Gilder Lehrman, Encyclopedia Virginia, and Mount Vernon.
  • “Book of Negroes” (1783 evacuation ledger of Black Loyalists to Nova Scotia; digital copies and discussions via BlackPast and Dictionary of Canadian Biography).
  • Boston King, “Memoirs of the Life of Boston King, a Black Preacher,” Methodist Magazine (1798).
  • “1st Rhode Island Regiment,” World History Encyclopedia.

Who Will Cover City Hall Now? Democracy in the Age of News Deserts

Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter. But I should mean that every man should receive those papers and be capable of reading them. —Thomas Jefferson


I originally posted this article about a year and a half ago. I was concerned about the future of newspapers then and I’m even more concerned now. I’ve updated my original post to reflect recent losses of newspapers.
When I was growing up in Charleston, WV, in the 1950s and early 1960s, we had two daily newspapers. The Gazette was delivered in the morning and the Daily Mail in the afternoon. One of my first jobs as a boy was delivering The Gazette. It worked out to about 50 cents an hour, but I was glad to have the job. (It was good money at the time.)
Ostensibly, the Gazette was a Democratic newspaper, and the Daily Mail was a Republican one. However, given the politics of the day there was not a significant difference between the two, and most people subscribed to both.
There weren’t a lot of options for news at the time. There were, of course, no 24-hour news channels. National news on the three networks ran about 30 minutes an evening, with local news at about 15 minutes. By the late 1960s national news had increased to 60 minutes and most local news to about 30. Even then, given the time limitations at local stations, most of the broadcast was taken up with weather, sports, and human interest stories, leaving little time to expand on hard news.
We depended on our newspapers for news of our cities, counties, and states. And the newspapers delivered the news we needed. Almost everyone subscribed to and read the local papers. They kept us informed about our local politicians and government and provided local insight on national events. They were also our source for information about births, deaths, marriages, high school graduations and everything we wanted to know about our community.
In the 21st century there are many more supposed news options. There are 24-hour news networks as I’ve talked about in a previous post.  And of course, there are Instagram, Facebook, X and the other online entities that claim to provide news.
There has been one positive development in television news. Local news, at least in Charleston, has expanded to two hours most evenings. There is some repetition between the first and second hours, and the coverage is still heavily weighted toward sports, weather, and human interest, but there is some increased coverage of local hard news. Still, this is somewhat akin to reading only the headline and first paragraph of a newspaper story. It doesn’t provide in-depth coverage, but it is an improvement over what is otherwise available to those who don’t watch a dedicated news show. Hopefully, it motivates people to find out more about events that concern them.
The situation has become dire in recent months. The crisis that was building when I first wrote about newspapers has now reached catastrophic proportions. On December 31, 2025, the Atlanta Journal-Constitution published its last print edition after 157 years, making Atlanta the largest U.S. metro area without a printed daily newspaper. Think about that—a major American city, home to over six million people in its metro area, now has no physical newspaper you can hold in your hands.
In February 2025, the Newark Star-Ledger, New Jersey’s largest newspaper, stopped printing after nearly 200 years. The Jersey Journal, which had served Hudson County for 157 years, closed entirely. These weren’t small-town weeklies—these were major metropolitan dailies that once served millions of readers. The Pittsburgh Post-Gazette, founded in 1786, has announced that it will cease publication effective May 3, 2026.
Even more alarming is what just happened at the Washington Post. Just days ago, in early February 2026, owner Jeff Bezos ordered the elimination of roughly one-third of the newspaper’s workforce—approximately 300 journalists. The Post closed its entire sports section, shuttered its books department, gutted its foreign bureaus and metro desk, and canceled its flagship daily podcast. This is the same newspaper that brought down a presidency with its Watergate coverage and has won dozens of Pulitzer Prizes. The Post’s metro desk, which once had 40 reporters covering the nation’s capital, now has just a dozen. All the paper’s photojournalists were laid off. The entire Middle East team was eliminated.
Former Washington Post executive editor Martin Baron, who led the paper from 2013 to 2021, called the cuts devastating and blamed poor management decisions, including Bezos’s decision to spike the newspaper’s presidential endorsement in 2024, which led to the cancellation of hundreds of thousands of subscriptions. The Post lost an estimated $100 million in 2024.
The numbers tell a grim story. Since 2005, more than 3,200 newspapers have closed in the United States—that’s over one-third of all the newspapers that existed just twenty years ago. Newspapers continue to disappear at a rate of more than two per week. In the past year alone, 136 newspapers shut their doors.
Fewer than 5,600 newspapers now remain in America, and fewer than 1,000 of those are dailies. Even among those “dailies,” more than 80 percent print fewer than seven days a week. We now have 213 counties that are complete “news deserts”—places with no local news source at all. Another 1,524 counties have only one remaining news source, usually a struggling weekly newspaper. Taken together, about 50 million Americans now have limited or no access to local news.
Will TV news be able to provide the details about our community? The format of the newspaper allows for more detailed presentations and for a larger variety of stories. The reader can pick which stories to read, when to read them and how much of each to read. The very nature of broadcast news doesn’t allow these options.
I beg everyone to please subscribe to your local newspapers if you still have one. Though I still prefer the hands-on, physical newspaper, I understand many people want to keep up with the digital age. If you do, please subscribe to the digital editions of your local newspaper and don’t pretend that the other online sources, such as social media, will provide you with local news. More likely, you’ll just get gossip, or worse.
If we lose our local news, we are in danger of losing our freedom of information, and if we lose that, we’re in danger of losing our country. For those of you who think I’m fearmongering, consider that countries that have succumbed to dictatorship lost their free press first.
I believe that broadcast news will never be the free press that print journalism is. The broadcast is an ethereal thing. You hear it and it’s gone. Of course, it is always possible to record it and play it back, but most people don’t. If you have a newspaper, you can read it, think about it, and read it again. There are times when on my second or third reading of an editorial or an op-ed article, I’ve changed my opinion about either the subject or the writer of the piece. I don’t think a news broadcast lends itself to this type of reflection. In fact, when listening to the broadcast news I often find my mind wandering as something that the broadcaster said sends me in a different direction.
In my opinion, broadcast news is controlled by advertising dollars and viewer ratings. News seems to be treated like any entertainment program, catering to what generates ratings rather than facts. I recognize that this can be the case with newspapers as well, but it seems to me that it’s much easier to detect bias in the written word than in the spoken word. Too often we can get caught up in the emotions of the presenter or in the graphics that accompany the story.
With that in mind, I recommend that if you want unbiased journalism, please support your local newspapers before we lose them. Once they are gone, we will never get them back and we will all be much the poorer as a result.
I will leave you with one last quote.
A free press is the unsleeping guardian of every other right that free men prize; it is the most dangerous foe of tyranny. —Winston Churchill
The only way to preserve freedom is to preserve the free press. Do your part! Subscribe!
And you can quote The Grumpy Doc on that!!!!

Sources
Fortune (August 29, 2025): “Atlanta becomes largest U.S. metro without a printed daily newspaper as Journal-Constitution goes digital”
https://fortune.com/2025/08/29/atlanta-largest-metro-without-printed-newpsaper-digital-journal-constitution/
 
Northwestern University Medill School (2025): “News deserts hit new high and 50 million have limited access to local news, study finds”
https://www.medill.northwestern.edu/news/2025/news-deserts-hit-new-high-and-50-million-have-limited-access-to-local-news-study-finds.html
 
NBC News (February 2026): “Washington Post lays off one-third of its newsroom”
https://www.nbcnews.com/business/media/washington-post-layoffs-sports-rcna257354
 
CNN Business (February 4, 2026): “Jeff Bezos-owned Washington Post conducts widespread layoffs, gutting a third of its staff”
https://www.cnn.com/2026/02/04/media/washington-post-layoffs
 
Northwestern University Medill Local News Initiative (2024): “The State of Local News Report 2024”
https://localnewsinitiative.northwestern.edu/projects/state-of-local-news/2024/report/
 

Russell Vought and the War on the Environment

Recently, there’s been a lot of attention given to RFK Jr. and his war on vaccines. Potentially even more devastating is Russell Vought and his war on environmental science.
Russell Vought hasn’t exactly been working in the shadows. As the director of the Office of Management and Budget since February 2025, he’s been methodically implementing what he outlined years earlier in Project 2025—a blueprint that treats climate science not as settled fact, but as what he calls “climate fanaticism.” The result is undeniably the most aggressive dismantling of environmental protections in American history.
The Man Behind the Plan
Vought’s resume tells you everything you need to know about his approach. He served as OMB director during Trump’s first term, wrote a key chapter of Project 2025 focusing on consolidating presidential power, and has openly stated his goal is to make federal bureaucrats feel “traumatized” when they come to work. His philosophy on climate policy specifically? He’s called climate change a side effect of building the modern world—something to manage through deregulation rather than prevention.
Attacking the Foundation: The Endangerment Finding
The centerpiece of Vought’s climate strategy targets what EPA Administrator Lee Zeldin has called “the holy grail of the climate change religion”—the 2009 Endangerment Finding. This Obama-era scientific determination concluded that six greenhouse gases (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride) endanger public health and welfare. It sounds technical, but it’s the legal foundation for virtually every federal climate regulation enacted over the past fifteen years.
Just last week, EPA Administrator Zeldin announced that the Trump administration has repealed this finding. This action strips EPA of its authority to regulate greenhouse gas emissions under the Clean Air Act—meaning no more federal limits on power plant emissions, no vehicle fuel economy standards tied to climate concerns, and no requirement for industries to measure or report their emissions. White House press secretary Karoline Leavitt said this action “will be the largest deregulatory action in American history.”
More than 1,000 scientists warned Zeldin not to take this step, and the Environmental Protection Network cautioned last year that repealing the finding would cause “tens of thousands of additional premature deaths due to pollution exposure” and would spark “accelerated climate destabilization.” Abigail Dillen, president of the nonprofit law firm Earthjustice, said, “there is no way to reconcile EPA’s decision with the law, the science and the reality of the disasters that are hitting us harder every year.” She added that they expect to see the Trump administration in court. Obviously, the science matters less to Trump, Zeldin, and Vought than the politics.
The Thirty-One Targets
In March 2025, Zeldin announced what he proudly called “the greatest day of deregulation in American history”—a plan to roll back or reconsider 31 key environmental rules covering everything from clean air to water quality. The list reads like a regulatory hit parade, including vehicle emission standards (designed to encourage electric vehicles), power plant pollution limits, methane regulations for oil and gas operations, and even particulate matter standards that protect against respiratory disease.
The vehicle standards are particularly revealing. The transportation sector is America’s largest source of greenhouse gas emissions, and the Biden-era rules were crafted to nudge automakers toward producing more electric vehicles. At Vought’s direction, the EPA is now reconsidering these, with Zeldin arguing they “regulate out of existence” segments of the economy and cost Americans “a lot of money.”
Gutting the Science Infrastructure
Vought’s agenda extends beyond specific regulations to the institutions that produce climate science itself. In Project 2025, he proposed abolishing the Office of Domestic Climate Policy and suggested the president should refuse to accept federal scientific research like the U.S. National Climate Assessment (NCA). The NCA, published every few years, involves hundreds of scientists examining how climate change is transforming the United States—research that informs everything from building codes to insurance policies.
According to reporting from E&E News in January, Vought wants the White House to exert tighter control over the next NCA, potentially elevating perspectives from climate deniers and industry representatives while excluding contributions made during the Biden administration.  This is a plan that has been in the works for years. Vought reportedly participated in a White House meeting during Trump’s first term where officials discussed firing the scientists working on the assessment.
The National Oceanic and Atmospheric Administration (NOAA) has also been targeted. In February 2025, about 800 NOAA employees, responsible for weather forecasting, climate monitoring, fisheries management, and marine research, were fired. Project 2025 had proposed breaking up NOAA entirely, and concerned staff members have already begun scrambling to preserve massive amounts of climate data in case the agency is dismantled.
Budget Cuts as Policy
Vought’s Center for Renewing America has proposed eliminating the Department of Energy’s Office of Energy Efficiency and Renewable Energy, the EPA’s environmental justice fund, and the Low Income Home Energy Assistance Program. During the first Trump administration, Vought oversaw budgets proposing EPA cuts as steep as 31%—reducing the agency to funding levels not seen in decades. In a 2023 speech, he explained the logic bluntly: “We want their funding to be shut down so that the EPA can’t do all of the rules against our energy industry because they have no bandwidth financially to do so.”
This isn’t just about climate; it’s also about fairness, since pollution and environmental hazards have had a predominantly negative effect on low-income areas, exactly what environmental justice programs were created to address. Yet EPA has cancelled 400 environmental justice grants, closed environmental justice offices at all 10 regional offices, and put the director of the $27 billion Greenhouse Gas Reduction Fund on administrative leave. The fund had been financing local economic development projects aimed at lowering energy prices and reducing emissions.
Eliminating Climate Considerations from Government
Perhaps more insidious than the high-profile rollbacks are the procedural changes that make climate considerations disappear from federal decision-making. In February, Jeffrey Clark—acting administrator of the Office of Information and Regulatory Affairs (OIRA) under Vought’s OMB—directed federal agencies to stop using the “social cost of carbon” in their analyses. This metric puts a dollar value on the damage caused by one ton of carbon pollution, allowing agencies to assess accurately whether regulations produce net benefits or net costs for society.
Vought has also directed agencies to establish sunset dates for environmental regulations—essentially automatic expiration dates after which rules stop being enforced unless renewed. For existing regulations, the sunset comes after one year; for new ones, within five years. The stated goal is forcing agencies to continuously justify their rules, but the practical effect is creating a perpetual cycle of regulatory uncertainty.
The Real-World Stakes
The timing of these rollbacks offers a grim irony. As Vought was pushing to weaken the National Climate Assessment in January 2025, the Eaton and Palisades fires were devastating Los Angeles—exactly the type of climate-intensified disaster the assessment is designed to help communities prepare for. The administration’s response? Energy Secretary Chris Wright described climate change as “a side effect of building the modern world” at an industry conference.
An analysis by Energy Innovation, a nonpartisan think tank, found that Project 2025’s proposals to gut federal policies encouraging renewable electricity and electric vehicles would increase U.S. household spending on fuel and utilities by about $240 per year over the next five years. That’s before accounting for the health costs of increased air pollution or the economic damage from unmitigated climate change.
Environmental groups have vowed to challenge these changes in court, and the legal battles will likely stretch on for years. The D.C. Circuit Court of Appeals will hear many cases initially, though the Supreme Court will probably issue the final decisions. Legal experts note that while Trump’s EPA moved with unprecedented speed on proposals in 2025, finalizing these rules through the required regulatory process will take much longer. As of December, none of the major climate rule repeals had been submitted to OMB for final review, partly due to the 43-day government shutdown (which EPA blamed on Democrats, a characterization that is widely disputed).
What Makes This Different
Previous administrations have certainly rolled back environmental regulations, but Vought’s approach differs in both scope and philosophy. Rather than tweaking specific rules or relaxing enforcement, he’s systematically attacking the scientific and legal foundations that make climate regulation possible. It’s the difference between turning down the thermostat and ripping out the entire heating system.
The Environmental Defense Fund, which rarely comments on political appointees, strongly opposed Vought’s confirmation, with Executive Director Amanda Leland stating: “Russ Vought has made clear his contempt for the people working every day to ensure their fellow Americans have clean air, clean water and a safer climate.”
Looking Forward
Whether Vought’s vision becomes permanent depends largely on how courts rule on these changes. The 2007 Supreme Court decision in Massachusetts v. EPA established that the agency has authority to regulate greenhouse gases as air pollutants under the Clean Air Act—the very authority Vought is now trying to eliminate. Overturning established precedent is difficult, though the current Supreme Court’s composition makes the outcome possible, if not likely.
What we’re witnessing is essentially a test of whether one administration can permanently disable the federal government’s capacity to address climate change, or if these changes represent a temporary setback that future administrations can reverse. The stakes couldn’t be higher: atmospheric CO2 concentrations continue rising, global temperatures are breaking records, and climate-related disasters are becoming more frequent and severe. Nothing less than the future of our way of life is at stake. We must take action now.
 
Full disclosure: my undergraduate degree is in meteorology, but I would never call myself a meteorologist since I have never worked in the field. I still maintain an interest, though, from both a meteorological and a medical perspective. The Grumpy Doc is never lacking in opinions.
 
Illustration generated by author using Midjourney.
 
Sources:
Lisa Friedman and Maxine Joselow, “Trump Allies Near ‘Total Victory’ in Wiping Out U.S. Climate Regulation,” New York Times, Feb. 9, 2026.
Lisa Friedman, “The Conservative Activists Behind One of Trump’s Biggest Climate Moves,” New York Times, Feb. 10, 2026.
Bob Sussman, “The Anti-Climate Fanaticism of the Second Trump Term (Part 1: The Purge of Climate from All Federal Programs),” Environmental Law Institute, May 7, 2025.
U.S. Environmental Protection Agency, “Trump EPA Kicks Off Formal Reconsideration of Endangerment Finding,” EPA News Release, Mar. 13, 2025.
Trump’s Climate and Clean Energy Rollback Tracker, Act On Climate/NRDC coalition, updated Jan. 11, 2026.
“Trump to Repeal Landmark Climate Finding in Huge Regulatory Rollback,” Wall Street Journal, Feb. 9, 2026.
Valerie Volcovici, “Trump Set to Repeal Landmark Climate Finding in Huge Regulatory Rollback,” Reuters, Feb. 9, 2026.
Alex Guillén, “Trump EPA to Take Its Biggest Swing Yet Against Climate Change Rules,” Politico, Feb. 10, 2026.
“EPA Urges White House to Strike Down Landmark Climate Finding,” Washington Post, Feb. 26, 2025.
“Trump Allies Near ‘Total Victory’ in Wiping Out U.S. Climate Regulation,” Seattle Times reprint, Feb. 10, 2026.
“Trump Wants to Dismantle Key Climate Research Hub in Colorado,” Earth.org, Dec. 17, 2025.
“Vought Says National Science Foundation to Break Up Federal Climate Research Center,” The Hill, Dec. 17, 2025.
Rachel Cleetus, “One Year of the Trump Administration’s All-Out Assault on Climate and Clean Energy,” Union of Concerned Scientists, Jan. 13, 2026.
Environmental Protection Network, “Environmental Protection Network Speaks Out Against Vought Cabinet Consideration,” Nov. 20, 2024.
“From Disavowal to Delivery: The Trump Administration’s Rapid Implementation of Project 2025 on Public Lands,” Center for Western Priorities, Jan. 28, 2026.
“Russ Vought Nominated for Office of Management and Budget Director,” Environmental Defense Fund statement, Mar. 6, 2025.
“Project 2025,” Heritage Foundation/Project 2025 backgrounder (as summarized in the Project 2025 Wikipedia entry).
Matthew Daly, “EPA to repeal finding that serves as basis for climate change,” The Associated Press.
https://vitalsigns.edf.org/story/trump-nominee-and-project-2025-architect-russell-vought-has-drastic-plans-reshape-america
https://en.wikipedia.org/wiki/Russell_Vought
https://www.commondreams.org/news/warnings-of-permanent-damage-to-people-and-planet-as-trump-epa-set-to-repeal-key-climate-rule
https://www.eenews.net/articles/trump-team-takes-aim-at-crown-jewel-of-us-climate-research/
https://www.epa.gov/newsreleases/epa-launches-biggest-deregulatory-action-us-history
https://www.pbs.org/newshour/nation/trump-administration-moves-to-repeal-epa-rule-that-allows-climate-regulation
https://www.scientificamerican.com/article/trump-epa-unveils-aggressive-plans-to-dismantle-climate-regulation/
https://www.bloomberg.com/news/articles/2026-02-10/trump-s-epa-to-scrap-landmark-emissions-policy-in-major-rollback

The Fatal Meeting: When Hamilton and Burr Settled Fifteen Years of Rivalry with Pistols

The story of the Hamilton-Burr duel has all the elements of a Greek tragedy: brilliant men, political ambition, an unforgiving honor culture, and an ending that destroyed victor and vanquished alike. When Aaron Burr shot Alexander Hamilton on the morning of July 11, 1804, he didn’t just kill one of America’s founding architects—he also ended his own political career and helped doom the entire Federalist Party to irrelevance. Let’s rewind the clock more than a decade to try to understand how these two gifted lawyers and Revolutionary War veterans ended up facing each other with loaded pistols.

The Long Road to Weehawken

Hamilton and Burr moved in the same elite New York political circles from the 1790s onward, but they had remarkably different temperaments and political beliefs. Hamilton was ideological, prolific, and combative—often too much so for his own good. Burr was pragmatic, opaque, self-serving, and famously hard to pin down on principle. They distrusted each other deeply.

Their rivalry stretched back to 1791, when Burr defeated Philip Schuyler for a U.S. Senate seat representing New York. This wasn’t just any political defeat for Hamilton—Schuyler was his father-in-law and a crucial Federalist ally on whom Hamilton had counted to support his ambitious financial programs. Hamilton, who was serving in George Washington’s cabinet as Treasury Secretary, never forgave Burr for this loss. In correspondence from June 1804, Hamilton himself referenced “a course of fifteen years competition” between the two men.  

Their philosophical differences ran deep. Hamilton was an ideological Federalist who dreamed of transforming the United States into a modern economic power rivaling the European empires through strong central government, industrial development, and military strength. Burr, by contrast, approached politics more pragmatically—he saw it as a vehicle for advancing his own interests and those of his allies rather than as a way to implement sweeping political visions. As Burr himself allegedly said, politics were nothing more than “fun and honor and profit.” Hamilton viewed Burr as fundamentally dangerous because he lacked fixed ideological principles, writing in 1792 that he considered it his “religious duty to keep this man from office.”

The election of 1800 brought their animosity to a boiling point. Due to a quirk in the original Constitution’s electoral system, Thomas Jefferson and his running mate Aaron Burr tied in the Electoral College with 73 votes each, allowing the Federalists to briefly consider elevating Burr to the presidency.  The decision went to the House of Representatives, and Hamilton—despite despising Jefferson’s Democratic-Republican politics—campaigned hard to ensure Jefferson won the presidency rather than Burr. Hamilton argued that Jefferson, however wrong in policy, had convictions, whereas Burr had none.  In the end, Jefferson gained the presidency and Burr became Vice President, but their relationship was never collegial and Burr was excluded from any meaningful participation in Jefferson’s administration.

By 1804, it was clear Jefferson would not consider Burr for a second term as Vice President. Desperate to salvage his political career, Burr made a surprising move: he sought the Federalist nomination for governor of New York, switching from his Democratic-Republican affiliation. It was a strange gambit—essentially betting that his political enemies might support him if it served their interests. Hamilton, predictably, worked vigorously to block Burr’s ambitions yet again. Although Hamilton’s opposition wasn’t the only factor, Burr lost badly to Morgan Lewis, the Democratic-Republican candidate, in April 1804.

The Cooper Letter and the Challenge

The immediate trigger for the duel came from a relatively minor slight in the context of their long feud. In February 1804, Dr. Charles Cooper attended a dinner party where Hamilton spoke forcefully against Burr’s candidacy. Cooper later wrote to Philip Schuyler describing Hamilton’s comments, noting that Hamilton had called Burr “a dangerous man” and referenced an even “more despicable opinion” of him. This letter was published in the Albany Register in April, after Burr’s electoral defeat.

When the newspaper reached Burr, he was already politically ruined—still Vice President of the United States, but with no prospects for future office. He demanded that Hamilton acknowledge or deny the statements attributed to him. What followed was a formal exchange of letters between the two men and their representatives that lasted through June. Hamilton refused to give Burr the straightforward denial he sought, explaining that he couldn’t reasonably be expected to account for everything he might have said about a political opponent during fifteen years of competition. Burr, seeing his honor impugned and his options exhausted, invoked the code of honor and issued a formal challenge to duel.

Hamilton found himself in an impossible position. If he admitted to the insults, which were substantially true, he would lose his honor. If he refused to duel, the result would be the same—his political career would effectively end. Hamilton had personal and moral objections to dueling. His eldest son Philip had died in a duel just three years earlier, at the same Weehawken location where Hamilton and Burr would meet. Hamilton calculated that his ability to maintain his political influence required him to conform to the codes of honor that governed gentlemen’s behavior in early America.

Dawn at Weehawken

At 5:00 AM on the morning of July 11, 1804, the men departed Manhattan from separate docks. They were each rowed across the Hudson River to the Heights of Weehawken, New Jersey—a popular dueling ground where at least 18 known duels took place between 1700 and 1845. They chose New Jersey because while dueling had been outlawed in both New York and New Jersey, the New Jersey penalties were less severe.

Burr arrived first around 6:30 AM, with Hamilton landing about thirty minutes later. Each man was accompanied by his “second”—an assistant responsible for ensuring the duel followed proper protocols. Hamilton brought Nathaniel Pendleton, a Revolutionary War veteran and Georgia district court judge, while Burr’s second was William Van Ness, a New York federal judge. Hamilton also brought Dr. David Hosack, a Columbia College professor of medicine and botany, in case medical attention proved necessary.

Shortly after 7 a.m., the seconds measured out ten paces, loaded the .56‑caliber pistols, and explained the firing rules before Hamilton and Burr took their positions. What exactly happened next remains one of history’s enduring mysteries. The seconds gave conflicting accounts, and historians still debate the sequence and meaning of events.

In a written statement before the duel, Hamilton expressed religious and moral objections to dueling, worry for his family and creditors, and professed no personal hatred of Burr, yet concluded that honor and future public usefulness compelled him to accept. By some accounts, Hamilton had also written to confidants indicating his intention to “throw away my shot”—essentially to deliberately miss Burr, satisfying the requirements of honor without attempting to kill his opponent. Burr, by contrast, appears to have aimed directly at Hamilton.

Some accounts suggest Hamilton fired first, with his shot hitting a tree branch above and behind Burr’s head. Other versions claim Burr shot first. There’s even a theory that Hamilton’s pistol had a hair trigger that caused an accidental discharge after Burr wounded him.

What’s undisputed is the outcome: Burr’s shot struck Hamilton in the lower abdomen, with the bullet lodging near his spine. Hamilton fell, and Burr reportedly started toward his fallen opponent before Van Ness held him back, worried about the legal consequences of lingering at the scene. The two parties crossed back to Manhattan in their respective boats, with Hamilton taken to the home of William Bayard Jr. in what is now Greenwich Village.

Hamilton survived long enough to say goodbye to his wife Eliza and their children. He died at 2 PM on July 12, 1804, approximately 31 hours after being shot.

Political Aftershocks

The nation was outraged. While duels were relatively common in early America, they rarely resulted in death, and the killing of someone as prominent as Alexander Hamilton sparked widespread condemnation. The political consequences proved catastrophic for everyone involved—and reshaped American politics for the next two decades.

Hamilton’s death turned him into a Federalist martyr. Even many who had disliked his arrogance now praised his intellect, service, and sacrifice. His economic vision, already embedded in American institutions, gained a kind of posthumous authority.

For Aaron Burr, the duel destroyed him politically and socially. Murder charges were filed against him in both New York and New Jersey, though neither reached trial—a grand jury in Bergen County, New Jersey indicted him for murder in November 1804, but the New Jersey Supreme Court quashed the indictment. Nevertheless, Burr fled to St. Simons Island, Georgia, staying at the plantation of Pierce Butler, before returning to Washington to complete his term as Vice President.

Rather than restoring his reputation as he’d hoped, the duel made Burr a pariah. He would never hold elected office again. His subsequent attempt to regain power through what historians call the “Burr Conspiracy”—an alleged plan to create an independent nation along the Mississippi River by separating territories from the United States and Spain—led to a treason trial in 1807. Chief Justice John Marshall presided and Burr was ultimately acquitted, but the trial further cemented Burr’s reputation as a dangerous schemer. He spent his later years quietly practicing law in New York.

For the Federalist Party, Hamilton's death proved even more devastating than Burr's personal ruin. Hamilton had been the party's intellectual architect and most effective leader. At the time of his death, the Federalists were attempting a comeback after their national defeat in the 1800 election. Without Hamilton's energy, strategic thinking, and ability to articulate a compelling vision for the country, the Federalists lost direction. As one historian put it, "The Federalists would be unable to find another leader as forceful and energetic as Hamilton had been, and their movement would slowly suffocate before finally petering out in the early 1820s." The party's decline ended what historians consider the first round of partisan struggles in American history.

An interesting footnote: while many Federalists wanted to portray Hamilton as a political martyr, Federalist clergy broke with the party line to condemn dueling itself as a violation of the sixth commandment. These ministers used Hamilton’s death as an opportunity to wage a moral crusade against the practice of dueling, helping to accelerate its decline in American culture—particularly in the northern states where it was already losing favor.

The duel produced a triple tragedy: Hamilton dead at age 47 (or 49—his birth year remains disputed), Burr politically destroyed despite being acquitted of murder charges, and the Federalist Party fatally weakened at a critical moment in American political development.

The Hamilton–Burr duel sits at the intersection of politics, personality, and culture. It reminds us that the early republic was not a calm, rational experiment run by marble statues but a volatile environment shaped by ego, fear, and ambition. Institutions were young, norms were fragile, and reputations were all-important. What began as fifteen years of professional rivalry and personal enmity ended with two brilliant men eliminating each other from the political stage, neither achieving what he'd hoped for through their fatal meeting on the heights of Weehawken.

Sources

Encyclopedia Britannica “Burr-Hamilton duel | Summary, Background, & Facts” https://www.britannica.com/event/Burr-Hamilton-duel

History.com “Aaron Burr slays Alexander Hamilton in duel” https://www.history.com/this-day-in-history/july-11/burr-slays-hamilton-in-duel

Library of Congress “Today in History – July 11” https://www.loc.gov/item/today-in-history/july-11

National Constitution Center “The Burr vs. Hamilton duel happened on this day” https://constitutioncenter.org/blog/burr-vs-hamilton-behind-the-ultimate-political-feud

National Park Service “Hamilton-Burr Duel” https://www.nps.gov/articles/000/hamilton-burr-duel.htm

PBS American Experience “Alexander Hamilton and Aaron Burr’s Duel” https://www.pbs.org/wgbh/americanexperience/features/duel-alexander-hamilton-and-aaron-burrs-duel/

The Gospel Coalition “American Prophets: Federalist Clergy’s Response to the Hamilton–Burr Duel of 1804” https://www.thegospelcoalition.org/themelios/article/american-prophets-federalist-clergys-response-to-the-hamilton-burr-duel-of-1804/

Wikipedia “Burr–Hamilton duel” https://en.wikipedia.org/wiki/Burr–Hamilton_duel

World History Encyclopedia “Hamilton-Burr Duel” https://www.worldhistory.org/article/2548/hamilton-burr-duel/

For more information about the history of dueling in early America see my earlier post: Pistols at Dawn, The Rise and Fall of the Code Duello.

Images generated by author using ChatGPT.

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as it was a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request for a leave of absence from the presidency of Congress in 1777, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship continued to deteriorate to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. Washington’s command was anything but universally admired, and as the war dragged on with mixed results, many critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and if his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm.  Rather than lobbying Congress or forming a counter-faction, Washington leaned heavily on reputation and restraint. He continued to communicate respectfully with Congress, emphasizing the army’s needs rather than defending his own position.  Washington did not respond with denunciations or public accusations. Instead, he handled the situation largely behind the scenes. When he learned that Conway had written a critical letter praising Gates, Washington calmly informed him that he was aware of the letter—quoting it verbatim.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court—a method that infuriated Adams, who equated such “nuances” with evasiveness and preferred direct argument, formal memorandums, and hard‑edged ultimatums. Sharing lodgings outside Paris only intensified Adams’s resentment as he watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business. Adams complained that nothing would ever get done unless he did it himself, while Franklin privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.” Their French ally, Foreign Minister Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin and effectively sidelining Adams in formal diplomacy, deepening Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf. Yet despite their mutual loss of respect, the two ultimately cooperated—often uneasily—in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris, a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom, calling Burr dangerous and untrustworthy at a dinner party. When Burr read accounts of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to withhold his fire; whether he deliberately shot into the air or simply missed remains unclear. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South. Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations to end the Quasi-War with France, Hamilton wanted war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

SOURCES

1. National Archives – Founders Online
https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters
https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams
https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson
https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal
https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel
https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson
https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia
https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers
https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – “Founding Brothers: The Revolutionary Generation”
https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

13 Stars, Betsy Ross and the Story of the American Flag

On a steamy June day in 1777, the Continental Congress took a brief break from the monumental task of running a revolution to deal with something that seems surprisingly simple in retrospect: what should the American flag look like? The resolution they passed on June 14th was refreshingly concise, stating that “the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.”

That poetic phrase about a “new constellation” turned out to be both inspiring and maddeningly vague. Congress didn’t specify how the stars should be arranged, how many points they should have, or even whether the flag should start with a red or white stripe at the top. This ambiguity led to one of the interesting aspects of early American flag history—for decades, no two flags looked exactly alike.

The 1777 resolution came out of Congress’s Marine Committee business, and at least some historians caution that it may have been understood initially as a naval ensign, not a fully standardized “national flag for all uses.”

A Constellation of Designs

The lack of official guidance meant that flag makers exercised considerable artistic freedom. Smithsonian researcher Grace Rogers Cooper found at least 17 different examples of 13-star flags dating from 1779 to around 1796, and flag expert Jeff Bridgman has documented 32 different star arrangements from the era. Some makers arranged the stars in neat rows, others formed them into a single large star, and still others created elaborate patterns that spelled out “U.S.” or formed other symbolic shapes. An official star pattern would not be specified until 1912, and versions of the 13-star flag remained in ceremonial use until the mid-1800s.

The most famous arrangement, of course, is the Betsy Ross design with its circle of 13 stars. What many people don’t realize is that experts date the earliest known example of this circular pattern to 1792—in a painting by John Trumbull, not on an actual flag from 1776.

Did the Continental Army Actually Use This Flag?

Here’s where things get interesting and a bit murky. The short answer is: not much, and not right away. The Continental Army had been fighting for over two years before Congress even adopted the Stars and Stripes, and by that point, individual regiments had already developed their own distinctive colors and banners. These regimental flags served practical military purposes—they helped units identify each other in the chaos of battle and gave soldiers something to rally around.  Additionally, the Continental Army frequently used the Grand Union Flag (13 stripes with a British Union in the canton), which predates the 13-star design.

What’s more revealing is a series of letters from 1779—two full years after the Flag Resolution—between George Washington and Richard Peters, Secretary of the Board of War. In these letters, Peters is essentially asking Washington what flag he wants the army to use. This correspondence raises an obvious question: if Congress had settled the flag issue in 1777, why was Washington still trying to figure it out in 1779? The evidence suggests that variations of the 13-star flag were primarily used by the Navy in those early years, while the Army continued to use various regimental standards.

Navy Captain John Manley expressed this confusion perfectly when he wrote in 1779 that the United States “had no national colors” and that each ship simply flew whatever flag the captain preferred. Even as late as 1779, the War Board hadn’t settled on a standard design for the Army. When they finally wrote to Washington for his input, they proposed a flag that included a serpent and numbers representing different states—a design that never caught on.

National “stars and stripes” banners did exist during the late war years and appear in some period art and descriptions, but clear, securely dated 13‑star Army battle flags are rare and often disputed. 13‑star flags are better documented in early federal service, such as maritime and lighthouse use in the 1790s, than in Continental Army field service before 1783.

The Betsy Ross Question

Now we come to one of America’s most enduring flag legends. The story is familiar to most Americans: in 1776, George Washington, Robert Morris, and George Ross visited Philadelphia upholsterer Betsy Ross and asked her to sew the first American flag. She suggested changing the six-pointed stars to five-pointed ones, demonstrated her one-snip technique for making a perfect five-pointed star, and then produced the first Stars and Stripes.

It’s a great story. There’s just one problem: historians have found virtually no documentary evidence to support it. The tale didn’t surface publicly until 1870—nearly a century after the supposed event—when Betsy Ross’s grandson, William Canby, presented it in a speech to the Historical Society of Pennsylvania. Canby relied entirely on family oral history, including affidavits from Ross’s daughter, granddaughter, and other relatives who claimed they had heard Betsy tell the story herself. But Canby himself admitted that his search through official records revealed nothing to corroborate the account.

Historians don’t dispute that Betsy Ross was a real person who did real work. Documentary evidence shows that on May 29, 1777, the Pennsylvania State Navy Board paid her a substantial sum for “making ships colours.” She ran a successful upholstery business and continued making flags for the government for more than 50 years. But as historian Marla Miller puts it, “The flag, like the Revolution it represents, was the work of many hands.” Modern scholars generally view the question not as whether Ross designed the flag—she almost certainly didn’t—but whether she may have been among the many people who produced early flags.

Who Really Designed It?

If not Betsy Ross, then who? The strongest candidate is Francis Hopkinson, the New Jersey delegate to the Continental Congress who also helped design the Great Seal of the United States and early American currency. In 1780, Hopkinson sent Congress a bill requesting payment for his design work, specifically mentioning “the flag of the United States of America.” He likely designed a flag with the stars arranged in rows rather than circles, and his bills for payment submitted to Congress mentioned six-pointed stars rather than the five-pointed ones that became standard.

Unfortunately for Hopkinson, Congress refused to pay him, arguing that he wasn’t the only person on the Navy Committee and therefore shouldn’t receive singular credit or compensation.

The irony is rich: Hopkinson was asking for a quarter cask of wine or £2,700 for designing what would become one of the world’s most recognizable symbols. Congress essentially told him, “Thanks, but we’re not paying.” There’s a lesson about government contracts in there somewhere.

What Survived

Of the hundreds of flags made and carried during the Revolutionary War, only about 30 are known to survive today. These rare artifacts offer fascinating glimpses into how Americans visualized their new nation. The Museum of the American Revolution brought together 17 of these original flags in a 2025 exhibition—the largest gathering of such flags since 1783.

The most significant surviving 13-star flag is probably Washington’s Headquarters Standard, a small blue silk flag measuring about two feet by three feet. It features 13 white, six-pointed stars on a blue field and descended through George Washington’s family with the tradition that it marked the General’s presence on the battlefield throughout the war. Experts consider it the earliest surviving 13-star American flag. Due to light damage, it can only be displayed on special occasions.

Other surviving flags tell different stories. The Brandywine Flag, used at the September 1777 battle of the same name, is one of the earliest stars and stripes—the flag is red, with a red and white American flag image in the canton.

The Dansey Flag, captured from the Delaware militia by a British officer and taken to England as a war trophy, remained in his family until 1927. It features a green field with 13 alternating red and white stripes in the upper-left corner, signifying the 13 colonies.

These and other flags weren’t just military equipment—they were powerful symbols that people fought under and, sometimes, died defending.

The Bigger Picture

What makes the story of the 13-star flag so compelling isn’t really about who sewed it or exactly when it first flew. It’s about what the flag represented in an era when the very concept of the United States was still being invented. The June 1777 resolution called for stars forming “a new constellation”—a beautiful metaphor for a new nation finding its place among the powers of the world.

The fact that no two early flags looked exactly alike might seem like a problem from our standardized modern perspective, but it tells us something important about the Revolution itself. Just as the colonies were learning to act as united states while maintaining their individual identities, flag makers across the new nation were interpreting a simple congressional resolution in their own ways, creating variations on a shared theme.

As historian Laurel Thatcher Ulrich points out, there was no “first flag” worth arguing over. The American flag evolved organically, shaped by the practical needs of the Navy, the Army, militias, and civilian flag makers who each contributed to its development. Whether Betsy Ross made one of those early flags or not, her story endures because it captures something Americans want to believe about our origins: that ordinary citizens, working in small shops and homes, helped create the symbols of the new republic.

Sources:

History.com: https://www.history.com/this-day-in-history/june-14/congress-adopts-the-stars-and-stripes

Flags of the World: https://www.crwflags.com/fotw/flags/us-1777.html

Wikipedia Flag of the United States: https://en.wikipedia.org/wiki/Flag_of_the_United_States

Museum of the American Revolution: https://www.amrevmuseum.org/

American Battlefield Trust: https://www.battlefields.org/learn/articles/short-history-united-states-flag

US History (Betsy Ross): https://www.ushistory.org/betsy/

Library of Congress “Today in History”: https://www.loc.gov/item/today-in-history/june-14/

Flag images from Wikimedia Commons

The Strange Tale of Spontaneous Human Combustion

Did you ever run into an idea so strange that you can’t quite understand how anyone ever took it seriously? Recently, while reading about historical curiosities in Pseudoscience by Kang and Pedersen, I was reminded of one of the most enduring examples: spontaneous human combustion.

The classic image is always the same. Someone enters a room and finds a small pile of ashes where a person once sat. The body is nearly destroyed, yet the chair beneath it is barely scorched and the rest of the room looks strangely untouched. For centuries, this baffling scene was explained by a dramatic idea—that a person could suddenly burst into flames from the inside, without any external fire at all.

It sounds like something lifted straight from a gothic novel, but belief in spontaneous human combustion stretches back to at least the seventeenth century and reached its peak in the Victorian era. To understand why it gained such traction, it helps to look at the social attitudes of the time, the cases that convinced people it was real, and what modern forensic science eventually uncovered.

Much of the early belief rested on moral judgment rather than evidence. In the nineteenth century, spontaneous human combustion was widely accepted as a kind of divine punishment. Many of the alleged victims were described as heavy drinkers, often elderly, overweight, or socially isolated, and women were frequently overrepresented in the reports. To Victorian minds, this pattern felt meaningful. Alcohol was flammable, after all, and it seemed reasonable—at least then—to assume that a body saturated with spirits might somehow ignite. Sensational newspaper reporting amplified the mystery, presenting lurid details while glossing over inconvenient facts.

The idea gained intellectual credibility in 1746 when Paul Rolli, a Fellow of the Royal Society, formally used the term “spontaneous human combustion” while describing the death of Countess Cornelia Zangari Bandi. The involvement of a respected scientific figure gave the concept legitimacy that lingered for generations.

Several cases became canonical. Countess Bandi’s death in 1731 was described as leaving little more than ashes and partially intact legs, still clothed in stockings. In 1966, John Irving Bentley of Pennsylvania was found almost completely burned except for one leg, with his pipe discovered intact nearby. Mary Reeser, known as the “Cinder Woman,” died in Florida in 1951, leaving behind melted fat embedded in the rug near where she had been sitting. As recently as 2010, an Irish coroner ruled that spontaneous human combustion caused the death of Michael Faherty, whose body was found badly burned near a fireplace in a room that showed little fire damage. Over roughly three centuries, about two hundred such cases have been cited worldwide.

Believers proposed explanations that ranged from the scientific-sounding to the overtly theological. Alcoholism was the most popular theory, with some physicians genuinely arguing that chronic drinking made the human body combustible. Earlier medical thinking leaned on imbalances of bodily humors, while later writers speculated about unknown chemical reactions producing internal heat. Religious interpretations framed these deaths as punishment for sin. Even in modern times, a few proponents have suggested that acetone buildup in people with alcoholism, diabetes, or extreme diets could somehow trigger combustion.

The idea was so culturally embedded that Charles Dickens famously killed off the alcoholic character Mr. Krook by spontaneous combustion in Bleak House. When critics objected, Dickens defended the plot choice by citing what he believed were credible historical and medical sources.

The illusion of the supernatural persisted because the circumstances were almost perfectly misleading. Victims were typically alone, elderly, or physically impaired, unable to respond quickly to a smoldering fire. The localized damage looked impossible to the untrained eye. Potential ignition sources were often destroyed in the fire itself. And dramatic storytelling filled in the gaps left by incomplete investigations.

What actually happens in these cases is far less mystical and far more unsettling. Modern forensic science points to an explanation known as the “wick effect.” In this scenario, there is always an external ignition source—often a cigarette, candle, lamp, or fireplace ember. Once clothing catches fire, heat melts the person’s body fat. That liquefied fat soaks into the clothing, which then behaves like a candle wick. The fire burns slowly and steadily, sometimes for hours, consuming much of the body while leaving nearby objects relatively unscathed.

This effect has been demonstrated experimentally. In the 1960s, researchers at Leeds University showed that cloth soaked in human fat could sustain a slow burn for extended periods once ignited. In 1998, forensic scientist John DeHaan famously replicated the effect for the BBC by burning a pig carcass wrapped in a blanket. The result closely matched classic spontaneous combustion scenes: severe destruction of the body, with extremities left behind and limited damage to the surrounding room.

The reason these fires don’t usually engulf the entire space is simple physics. Flames rise more easily than they spread sideways, and the heat output of a wick-effect fire is relatively localized. It’s similar to standing near a campfire—you can be close without catching fire yourself.

Investigators Joe Nickell and John F. Fischer examined dozens of historical cases and found that every one involved a plausible ignition source, details that earlier accounts often ignored or downplayed. When these factors are restored to the narrative, the mystery largely disappears.

As science writer Benjamin Radford has pointed out, if spontaneous human combustion were truly spontaneous, we would expect it to occur randomly and frequently, in public places as well as private ones. Instead, it consistently appears in situations involving isolation and an external heat source.

The bottom line is straightforward. There is no credible scientific evidence that humans can burst into flames without an external ignition source. What has been labeled spontaneous human combustion is better understood as a tragic combination of accidental fire and the wick effect. The myth endured because it blended moral judgment, fear, and incomplete science into a compelling story. Today, forensic investigation has replaced superstition with explanation, even if the results remain unsettling.

Spontaneous human combustion survives as a reminder of how easily mystery fills the space where evidence is thin—and how patiently applied science eventually closes that gap.


Sources and Further Reading

Peer-reviewed forensic and medical analyses are available through the National Center for Biotechnology Information, including “So-called Spontaneous Human Combustion” in the Journal of Forensic Sciences (https://pubmed.ncbi.nlm.nih.gov/21392004/) and Koljonen and Kluger’s 2012 review, “Spontaneous human combustion in the light of the 21st century,” published in the Journal of Burn Care & Research (https://pubmed.ncbi.nlm.nih.gov/22269823/).

General scientific and historical overviews can be found in Encyclopædia Britannica’s article “Is Spontaneous Human Combustion Real?” (https://www.britannica.com/story/is-spontaneous-human-combustion-real), Scientific American’s discussion of the wick effect (https://www.scientificamerican.com/blog/cocktail-party-physics/burn-baby-burn-understanding-the-wick-effect/), and Live Science’s summary of facts and theories (https://www.livescience.com/42080-spontaneous-human-combustion.html).

Accessible explanatory pieces are also available from HowStuffWorks (https://science.howstuffworks.com/science-vs-myth/unexplained-phenomena/shc.htm), History.com (https://www.history.com/articles/is-spontaneous-human-combustion-real), Mental Floss (https://www.mentalfloss.com/article/22236/quick-7-seven-cases-spontaneous-human-combustion), and All That’s Interesting (https://allthatsinteresting.com/spontaneous-human-combustion). Wikipedia’s entries on spontaneous human combustion and the wick effect provide comprehensive background and references at https://en.wikipedia.org/wiki/Spontaneous_human_combustion and https://en.wikipedia.org/wiki/Wick_effect.

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.” To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” where status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term’s become a cultural flashpoint. Here’s the thing: real progress requires acknowledging both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.
