
Who Will Cover City Hall Now? Democracy in the Age of News Deserts

Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter. But I should mean that every man should receive those papers and be capable of reading them. —Thomas Jefferson


I originally posted this article about a year and a half ago. I was concerned about the future of newspapers then and I’m even more concerned now. I’ve updated my original post to reflect recent losses of newspapers.
When I was growing up in Charleston, WV, in the 1950s and early 1960s, we had two daily newspapers. The Gazette was delivered in the morning and the Daily Mail was delivered in the afternoon. One of my first jobs as a boy was delivering The Gazette. It worked out to be about 50 cents an hour, but I was glad to have the job. (It was good money at the time.)
Ostensibly, the Gazette was a Democratic newspaper, and the Daily Mail was a Republican one. However, given the politics of the day there was not a significant difference between the two, and most people subscribed to both.
There weren’t a lot of options for news at the time. Of course, there were no 24-hour news channels. National news on the three networks was about 30 minutes an evening, with local news at about 15 minutes. By the late 1960s national news had increased to 60 minutes and most local news to about 30 minutes. Even then, given the limited airtime on local stations, most of the broadcast was taken up with weather, sports, and human-interest stories, leaving little time to expand on hard news.
We depended on our newspapers for news of our cities, counties, and states. And the newspapers delivered the news we needed. Almost everyone subscribed to and read the local papers. They kept us informed about our local politicians and government and provided local insight on national events. They were also our source for information about births, deaths, marriages, high school graduations and everything we wanted to know about our community.
In the 21st century there are many more supposed news options. There are 24-hour news networks, as I’ve discussed in a previous post. And of course, there are Instagram, Facebook, X, and the other online entities that claim to provide news.
There has been one positive development in television news. Local news, at least in Charleston, has expanded to two hours most evenings. There is some repetition between the first and second hour, and it is still heavily weighted toward sports, weather, and human interest, but there is some increased coverage of local hard news. However, this is somewhat akin to reading the headlines and the first paragraph of a newspaper story. It doesn’t provide in-depth coverage, but it is an improvement over what is otherwise available to those who don’t watch a dedicated news show. Hopefully, it motivates people to find out more about events that concern them.
The situation has become dire in recent months. The crisis that was building when I first wrote about newspapers has now reached catastrophic proportions. On December 31, 2025, the Atlanta Journal-Constitution published its last print edition after 157 years, making Atlanta the largest U.S. metro area without a printed daily newspaper. Think about that—a major American city, home to over six million people in its metro area, now has no physical newspaper you can hold in your hands.
In February 2025, the Newark Star-Ledger, New Jersey’s largest newspaper, stopped printing after nearly 200 years. The Jersey Journal, which had served Hudson County for 157 years, closed entirely. These weren’t small-town weeklies—these were major metropolitan dailies that once served millions of readers. The Pittsburgh Post-Gazette, founded in 1786, has announced that it will cease publication effective May 3, 2026.
Even more alarming is what just happened at the Washington Post. Just days ago, in early February 2026, owner Jeff Bezos ordered the elimination of roughly one-third of the newspaper’s workforce—approximately 300 journalists. The Post closed its entire sports section, shuttered its books department, gutted its foreign bureaus and metro desk, and canceled its flagship daily podcast. This is the same newspaper that brought down a presidency with its Watergate coverage and has won dozens of Pulitzer Prizes. The Post’s metro desk, which once had 40 reporters covering the nation’s capital, now has just a dozen. All the paper’s photojournalists were laid off. The entire Middle East team was eliminated.
Former Washington Post executive editor Martin Baron, who led the paper from 2013 to 2021, called the cuts devastating and blamed poor management decisions, including Bezos’s decision to spike the newspaper’s presidential endorsement in 2024, which led to the cancellation of hundreds of thousands of subscriptions. The Post lost an estimated $100 million in 2024.
The numbers tell a grim story. Since 2005, more than 3,200 newspapers have closed in the United States—that’s over one-third of all the newspapers that existed just twenty years ago. Newspapers continue to disappear at a rate of more than two per week. In the past year alone, 136 newspapers shut their doors.
Fewer than 5,600 newspapers now remain in America, and fewer than 1,000 of those are dailies. Even among those “dailies,” more than 80 percent print fewer than seven days a week. We now have 213 counties that are complete “news deserts”—places with no local news source at all. Another 1,524 counties have only one remaining news source, usually a struggling weekly newspaper. Taken together, about 50 million Americans now have limited or no access to local news.
Will TV news be able to provide the details about our community? The format of the newspaper allows for more detailed presentations and for a larger variety of stories. The reader can pick which stories to read, when to read them and how much of each to read. The very nature of broadcast news doesn’t allow these options.
I beg everyone to subscribe to your local newspaper if you still have one. Though I still prefer the hands-on, physical newspaper, I understand many people want to keep up with the digital age. If you do, please subscribe to the digital edition of your local newspaper and don’t pretend that the other online sources, such as social media, will provide you with local news. More likely, you’ll just get gossip, or worse.
If we lose our local news, we are in danger of losing our freedom of information, and if we lose that, we’re in danger of losing our country. For those of you who think I’m fearmongering, countries that have succumbed to dictatorship have first lost their free press.
I believe that broadcast news will never be the free press that print journalism is. The broadcast is an ethereal thing. You hear it and it’s gone. Of course, it is always possible to record it and play it back, but most people don’t. If you have a newspaper, you can read it, think about it, and read it again. There are times when on my second or third reading of an editorial or an op-ed article, I’ve changed my opinion about either the subject or the writer of the piece. I don’t think a news broadcast lends itself to this type of reflection. In fact, when listening to the broadcast news I often find my mind wandering as something that the broadcaster said sends me in a different direction.
In my opinion, broadcast news is controlled by advertising dollars and viewer ratings. News seems to be treated like any entertainment program, catering to what generates ratings rather than facts. I recognize that this can be the case with newspapers as well, but it seems to me that it’s much easier to detect bias in the written word than in the spoken word. Too often we can get caught up in the emotions of the presenter or in the graphics that accompany the story.
With that in mind, I recommend that if you want unbiased journalism, please support your local newspapers before we lose them. Once they are gone, we will never get them back and we will all be much the poorer as a result.
I will leave you with one last quote.
A free press is the unsleeping guardian of every other right that free men prize; it is the most dangerous foe of tyranny. —Winston Churchill
The only way to preserve freedom is to preserve the free press. Do your part! Subscribe!
And you can quote The Grumpy Doc on that!!!!

Sources
Fortune (August 29, 2025): “Atlanta becomes largest U.S. metro without a printed daily newspaper as Journal-Constitution goes digital”
https://fortune.com/2025/08/29/atlanta-largest-metro-without-printed-newpsaper-digital-journal-constitution/
 
Northwestern University Medill School (2025): “News deserts hit new high and 50 million have limited access to local news, study finds”
https://www.medill.northwestern.edu/news/2025/news-deserts-hit-new-high-and-50-million-have-limited-access-to-local-news-study-finds.html
 
NBC News (February 2026): “Washington Post lays off one-third of its newsroom”
https://www.nbcnews.com/business/media/washington-post-layoffs-sports-rcna257354
 
CNN Business (February 4, 2026): “Jeff Bezos-owned Washington Post conducts widespread layoffs, gutting a third of its staff”
https://www.cnn.com/2026/02/04/media/washington-post-layoffs
 
Northwestern University Medill Local News Initiative (2024): “The State of Local News Report 2024”
https://localnewsinitiative.northwestern.edu/projects/state-of-local-news/2024/report/
 

Russell Vought and the War on the Environment

Recently, there’s been a lot of attention given to RFK Jr. and his war on vaccines. Potentially even more devastating is Russell Vought and his war on environmental science.
Russell Vought hasn’t exactly been working in the shadows. As the director of the Office of Management and Budget since February 2025, he’s been methodically implementing what he outlined years earlier in Project 2025—a blueprint that treats climate science not as settled fact, but as what he calls “climate fanaticism.” The result is undeniably the most aggressive dismantling of environmental protections in American history.
The Man Behind the Plan
Vought’s resume tells you everything you need to know about his approach. He served as OMB director during Trump’s first term, wrote a key chapter of Project 2025 focusing on consolidating presidential power, and has openly stated his goal is to make federal bureaucrats feel “traumatized” when they come to work. His philosophy on climate policy specifically? He’s called climate change a side effect of building the modern world—something to manage through deregulation rather than prevention.
Attacking the Foundation: The Endangerment Finding
The centerpiece of Vought’s climate strategy targets what EPA Administrator Lee Zeldin has called “the holy grail of the climate change religion”—the 2009 Endangerment Finding. This Obama-era scientific determination concluded that six greenhouse gases (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride) endanger public health and welfare. It sounds technical, but it’s the legal foundation for virtually every federal climate regulation enacted over the past fifteen years.
Just last week, EPA Administrator Zeldin announced that the Trump administration has repealed this finding. This action strips EPA’s authority to regulate greenhouse gas emissions under the Clean Air Act—meaning no more federal limits on power plant emissions, no vehicle fuel economy standards tied to climate concerns, and no requirement for industries to measure or report their emissions. White House press secretary Karoline Leavitt said this action “will be the largest deregulatory action in American history.”
More than 1,000 scientists warned Zeldin not to take this step, and the Environmental Protection Network cautioned last year that repealing the finding would cause “tens of thousands of additional premature deaths due to pollution exposure” and would spark “accelerated climate destabilization.” Abigail Dillen, president of the nonprofit law firm Earthjustice, said “there is no way to reconcile EPA’s decision with the law, the science and the reality of the disasters that are hitting us harder every year.” She added that they expect to see the Trump administration in court. Obviously, the science is less important to Trump, Zeldin, and Vought than the politics.
The Thirty-One Targets
In March 2025, Zeldin announced what he proudly called “the greatest day of deregulation in American history”—a plan to roll back or reconsider 31 key environmental rules covering everything from clean air to water quality. The list reads like a regulatory hit parade, including vehicle emission standards (designed to encourage electric vehicles), power plant pollution limits, methane regulations for oil and gas operations, and even particulate matter standards that protect against respiratory disease.
The vehicle standards are particularly revealing. The transportation sector is America’s largest source of greenhouse gas emissions, and the Biden-era rules were crafted to nudge automakers toward producing more electric vehicles. At Vought’s direction, the EPA is now reconsidering these, with Zeldin arguing they “regulate out of existence” segments of the economy and cost Americans “a lot of money.”
Gutting the Science Infrastructure
Vought’s agenda extends beyond specific regulations to the institutions that produce climate science itself. In Project 2025, he proposed abolishing the Office of Domestic Climate Policy and suggested the president should refuse to accept federal scientific research like the U.S. National Climate Assessment (NCA). The NCA, published every few years, involves hundreds of scientists examining how climate change is transforming the United States—research that informs everything from building codes to insurance policies.
According to reporting from E&E News in January, Vought wants the White House to exert tighter control over the next NCA, potentially elevating perspectives from climate deniers and industry representatives while excluding contributions made during the Biden administration.  This is a plan that has been in the works for years. Vought reportedly participated in a White House meeting during Trump’s first term where officials discussed firing the scientists working on the assessment.
The National Oceanic and Atmospheric Administration (NOAA) has also been targeted. In February 2025, about 800 NOAA employees were fired—staff responsible for weather forecasting, climate monitoring, fisheries management, and marine research. Project 2025 had proposed breaking up NOAA entirely, and concerned staff members have already begun a scramble to preserve massive amounts of climate data in case the agency is dismantled.
Budget Cuts as Policy
Vought’s Center for Renewing America has proposed eliminating the Department of Energy’s Office of Energy Efficiency and Renewable Energy, the EPA’s environmental justice fund, and the Low Income Home Energy Assistance Program. During the first Trump administration, Vought oversaw budgets proposing EPA cuts as steep as 31%—reducing the agency to funding levels not seen in decades. In a 2023 speech, he explained the logic bluntly: “We want their funding to be shut down so that the EPA can’t do all of the rules against our energy industry because they have no bandwidth financially to do so.”
This isn’t just about climate; it is also about fairness and the recognition that environmental harms have fallen predominantly on low-income areas. EPA has cancelled 400 environmental justice grants, closed environmental justice offices at all 10 regional offices, and put the director of the $27 billion Greenhouse Gas Reduction Fund on administrative leave. The fund had been financing local economic development projects aimed at lowering energy prices and reducing emissions.
Eliminating Climate Considerations from Government
Perhaps more insidious than the high-profile rollbacks are the procedural changes that make climate considerations disappear from federal decision-making. In February, Jeffrey Clark—acting administrator of the Office of Information and Regulatory Affairs (OIRA) under Vought’s OMB—directed federal agencies to stop using the “social cost of carbon” in their analyses. This metric calculates the dollar value of damage caused by one ton of carbon pollution, allowing agencies to accurately assess whether regulations produce net benefits or costs for society.
Vought has also directed agencies to establish sunset dates for environmental regulations—essentially automatic expiration dates after which rules stop being enforced unless renewed. For existing regulations, the sunset comes after one year; for new ones, within five years. The stated goal is forcing agencies to continuously justify their rules, but the practical effect is creating a perpetual cycle of regulatory uncertainty.
The Real-World Stakes
The timing of these rollbacks offers a grim irony. As Vought was pushing to weaken the National Climate Assessment in January 2025, the Eaton and Palisades fires were devastating Los Angeles—exactly the type of climate-intensified disaster the assessment is designed to help communities prepare for. The administration’s response? Energy Secretary Chris Wright described climate change as “a side effect of building the modern world” at an industry conference.
An analysis by Energy Innovation, a nonpartisan think tank, found that Project 2025’s proposals to gut federal policies encouraging renewable electricity and electric vehicles would increase U.S. household spending on fuel and utilities by about $240 per year over the next five years. That’s before accounting for the health costs of increased air pollution or the economic damage from unmitigated climate change.
Environmental groups have vowed to challenge these changes in court, and the legal battles will likely stretch on for years. The D.C. Circuit Court of Appeals will hear many cases initially, though the Supreme Court will probably issue final decisions. Legal experts note that while Trump’s EPA moved with unprecedented speed on proposals in 2025, finalizing these rules through the required regulatory process will take much longer. As of December, none of the major climate rule repeals had been submitted to OMB for final review, partly due to what EPA called a 43-day government shutdown (which EPA blamed on Democrats, though the characterization is widely disputed).
What Makes This Different
Previous administrations have certainly rolled back environmental regulations, but Vought’s approach differs in both scope and philosophy. Rather than tweaking specific rules or relaxing enforcement, he’s systematically attacking the scientific and legal foundations that make climate regulation possible. It’s the difference between turning down the thermostat and ripping out the entire heating system.
The Environmental Defense Fund, which rarely comments on political appointees, strongly opposed Vought’s confirmation, with Executive Director Amanda Leland stating: “Russ Vought has made clear his contempt for the people working every day to ensure their fellow Americans have clean air, clean water and a safer climate.”
Looking Forward
Whether Vought’s vision becomes permanent depends largely on how courts rule on these changes. The 2007 Supreme Court decision in Massachusetts v. EPA established that the agency has authority to regulate greenhouse gases as air pollutants under the Clean Air Act—the very authority Vought is now trying to eliminate. Overturning established precedent is difficult, though the current Supreme Court’s composition makes the outcome possible, if not likely.
What we’re witnessing is essentially a test of whether one administration can permanently disable the federal government’s capacity to address climate change, or if these changes represent a temporary setback that future administrations can reverse. The stakes couldn’t be higher: atmospheric CO2 concentrations continue rising, global temperatures are breaking records, and climate-related disasters are becoming more frequent and severe. Nothing less than the future of our way of life is at stake. We must take action now.
 
Full disclosure: my undergraduate degree is in meteorology, but I would never call myself a meteorologist since I have never worked in the field. I still maintain an interest, though, from both a meteorological and a medical perspective. The Grumpy Doc is never lacking in opinions.
 
Illustration generated by author using Midjourney.
 
Sources:
Lisa Friedman and Maxine Joselow, “Trump Allies Near ‘Total Victory’ in Wiping Out U.S. Climate Regulation,” New York Times, Feb. 9, 2026.
Lisa Friedman, “The Conservative Activists Behind One of Trump’s Biggest Climate Moves,” New York Times, Feb. 10, 2026.
Bob Sussman, “The Anti-Climate Fanaticism of the Second Trump Term (Part 1: The Purge of Climate from All Federal Programs),” Environmental Law Institute, May 7, 2025.
U.S. Environmental Protection Agency, “Trump EPA Kicks Off Formal Reconsideration of Endangerment Finding,” EPA News Release, Mar. 13, 2025.
Trump’s Climate and Clean Energy Rollback Tracker, Act On Climate/NRDC coalition, updated Jan. 11, 2026.
“Trump to Repeal Landmark Climate Finding in Huge Regulatory Rollback,” Wall Street Journal, Feb. 9, 2026.
Valerie Volcovici, “Trump Set to Repeal Landmark Climate Finding in Huge Regulatory Rollback,” Reuters, Feb. 9, 2026.
Alex Guillén, “Trump EPA to Take Its Biggest Swing Yet Against Climate Change Rules,” Politico, Feb. 10, 2026.
“EPA Urges White House to Strike Down Landmark Climate Finding,” Washington Post, Feb. 26, 2025.
“Trump Allies Near ‘Total Victory’ in Wiping Out U.S. Climate Regulation,” Seattle Times reprint, Feb. 10, 2026.
“Trump Wants to Dismantle Key Climate Research Hub in Colorado,” Earth.org, Dec. 17, 2025.
“Vought Says National Science Foundation to Break Up Federal Climate Research Center,” The Hill, Dec. 17, 2025.
Rachel Cleetus, “One Year of the Trump Administration’s All-Out Assault on Climate and Clean Energy,” Union of Concerned Scientists, Jan. 13, 2026.
Environmental Protection Network, “Environmental Protection Network Speaks Out Against Vought Cabinet Consideration,” Nov. 20, 2024.
“From Disavowal to Delivery: The Trump Administration’s Rapid Implementation of Project 2025 on Public Lands,” Center for Western Priorities, Jan. 28, 2026.
“Russ Vought Nominated for Office of Management and Budget Director,” Environmental Defense Fund statement, Mar. 6, 2025.
“Project 2025,” Heritage Foundation/Project 2025 backgrounder (as summarized in the Project 2025 Wikipedia entry).
Matthew Daly, “EPA to repeal finding that serves as basis for climate change,” The Associated Press.
https://vitalsigns.edf.org/story/trump-nominee-and-project-2025-architect-russell-vought-has-drastic-plans-reshape-america
https://en.wikipedia.org/wiki/Russell_Vought
https://www.commondreams.org/news/warnings-of-permanent-damage-to-people-and-planet-as-trump-epa-set-to-repeal-key-climate-rule
https://www.eenews.net/articles/trump-team-takes-aim-at-crown-jewel-of-us-climate-research/
https://www.epa.gov/newsreleases/epa-launches-biggest-deregulatory-action-us-history
https://www.pbs.org/newshour/nation/trump-administration-moves-to-repeal-epa-rule-that-allows-climate-regulation
https://www.scientificamerican.com/article/trump-epa-unveils-aggressive-plans-to-dismantle-climate-regulation/
https://www.bloomberg.com/news/articles/2026-02-10/trump-s-epa-to-scrap-landmark-emissions-policy-in-major-rollback
 
 
 
 

The Fatal Meeting: When Hamilton and Burr Settled Fifteen Years of Rivalry with Pistols

The story of the Hamilton-Burr duel has all the elements of a Greek tragedy: brilliant men, political ambition, an unforgiving honor culture, and an ending that destroyed victor and vanquished alike. When Aaron Burr shot Alexander Hamilton on the morning of July 11, 1804, he didn’t just kill one of America’s founding architects—he also ended his own political career and helped doom the entire Federalist Party to irrelevance. Let’s rewind the clock more than a decade to try to understand how these two gifted lawyers and Revolutionary War veterans ended up facing each other with loaded pistols.

The Long Road to Weehawken

Hamilton and Burr moved in the same elite New York political circles from the 1790s onward, but they had remarkably different temperaments and political beliefs. Hamilton was ideological, prolific, and combative—often too much so for his own good. Burr was pragmatic, opaque, self-serving, and famously hard to pin down on principle. They distrusted each other deeply.

Their rivalry stretched back to 1791, when Burr defeated Philip Schuyler for a U.S. Senate seat representing New York. This wasn’t just any political defeat for Hamilton—Schuyler was his father-in-law and a crucial Federalist ally on whom Hamilton had counted to support his ambitious financial programs. Hamilton, who was serving in George Washington’s cabinet as Treasury Secretary, never forgave Burr for this loss. In correspondence from June 1804, Hamilton himself referenced “a course of fifteen years competition” between the two men.  

Their philosophical differences ran deep. Hamilton was an ideological Federalist who dreamed of transforming the United States into a modern economic power rivaling European empires through strong central government, industrial development, and military strength. Burr, by contrast, approached politics more pragmatically—he saw it as a vehicle for advancing his own interests and those of his allies rather than as a way to implement sweeping political visions. As Burr himself allegedly said, politics were nothing more than “fun and honor and profit.” Hamilton viewed Burr as fundamentally dangerous due to his lack of fixed ideological principles, writing in 1792 that he considered it his “religious duty to keep this man from office.”

The election of 1800 brought their animosity to a boiling point. Due to a quirk in the original Constitution’s electoral system, Thomas Jefferson and his running mate Aaron Burr tied in the Electoral College with 73 votes each, allowing the Federalists to briefly consider elevating Burr to the presidency.  The decision went to the House of Representatives, and Hamilton—despite despising Jefferson’s Democratic-Republican politics—campaigned hard to ensure Jefferson won the presidency rather than Burr. Hamilton argued that Jefferson, however wrong in policy, had convictions, whereas Burr had none.  In the end, Jefferson gained the presidency and Burr became Vice President, but their relationship was never collegial and Burr was excluded from any meaningful participation in Jefferson’s administration.

By 1804, it was clear Jefferson would not consider Burr for a second term as Vice President. Desperate to salvage his political career, Burr made a surprising move: he sought the Federalist nomination for governor of New York, switching from his Democratic-Republican affiliation. It was a strange gambit—essentially betting that his political enemies might support him if it served their interests. Hamilton, predictably, worked vigorously to block Burr’s ambitions yet again. Although Hamilton’s opposition wasn’t the only factor, Burr lost badly to Morgan Lewis, the Democratic-Republican candidate, in April 1804.

The Cooper Letter and the Challenge

The immediate trigger for the duel came from a relatively minor slight in the context of their long feud. In February 1804, Dr. Charles Cooper attended a dinner party where Hamilton spoke forcefully against Burr’s candidacy. Cooper later wrote to Philip Schuyler describing Hamilton’s comments, noting that Hamilton had called Burr “a dangerous man” and referenced an even “more despicable opinion” of him. This letter was published in the Albany Register in April, after Burr’s electoral defeat.

When the newspaper reached Burr, he was already politically ruined—still Vice President of the United States, but with no prospects for future office. He demanded that Hamilton acknowledge or deny the statements attributed to him. What followed was a formal exchange of letters between the two men and their representatives that lasted through June. Hamilton refused to give Burr the straightforward denial he sought, explaining that he couldn’t reasonably be expected to account for everything he might have said about a political opponent during fifteen years of competition. Burr, seeing his honor impugned and his options exhausted, invoked the code of honor and issued a formal challenge to duel.

Hamilton found himself in an impossible position. If he admitted to the insults, which were substantially true, he would lose his honor. If he refused to duel, the result would be the same—his political career would effectively end. Hamilton had personal and moral objections to dueling. His eldest son Philip had died in a duel just three years earlier, at the same Weehawken location where Hamilton and Burr would meet. Hamilton calculated that his ability to maintain his political influence required him to conform to the codes of honor that governed gentlemen’s behavior in early America.

Dawn at Weehawken

At 5:00 on the morning of July 11, 1804, the men departed Manhattan from separate docks. They were each rowed across the Hudson River to the Heights of Weehawken, New Jersey—a popular dueling ground where at least 18 known duels took place between 1700 and 1845. They chose New Jersey because, while dueling had been outlawed in both New York and New Jersey, the New Jersey penalties were less severe.

Burr arrived first around 6:30 AM, with Hamilton landing about thirty minutes later. Each man was accompanied by his “second”—an assistant responsible for ensuring the duel followed proper protocols. Hamilton brought Nathaniel Pendleton, a Revolutionary War veteran and Georgia district court judge, while Burr’s second was William Van Ness, a New York federal judge. Hamilton also brought Dr. David Hosack, a Columbia College professor of medicine and botany, in case medical attention proved necessary.

Shortly after 7 a.m., the seconds measured out ten paces, loaded the .56‑caliber pistols, and explained the firing rules before Hamilton and Burr took their positions. What exactly happened next remains one of history’s enduring mysteries. The seconds gave conflicting accounts, and historians still debate the sequence and meaning of events.

In a written statement before the duel, Hamilton expressed religious and moral objections to dueling, worry for his family and creditors, and professed no personal hatred of Burr, yet concluded that honor and future public usefulness compelled him to accept. By some accounts, Hamilton had also written to confidants indicating his intention to “throw away my shot”—essentially to deliberately miss Burr, satisfying the requirements of honor without attempting to kill his opponent. Burr, by contrast, appears to have aimed directly at Hamilton.

Some accounts suggest Hamilton fired first, with his shot hitting a tree branch above and behind Burr’s head. Other versions claim Burr shot first. There’s even a theory that Hamilton’s pistol had a hair trigger that caused an accidental discharge after Burr wounded him.

What’s undisputed is the outcome: Burr’s shot struck Hamilton in the lower abdomen, with the bullet lodging near his spine. Hamilton fell, and Burr reportedly started toward his fallen opponent before Van Ness held him back, worried about the legal consequences of lingering at the scene. The two parties crossed back to Manhattan in their respective boats, with Hamilton taken to the home of William Bayard Jr. in what is now Greenwich Village.

Hamilton survived long enough to say goodbye to his wife Eliza and their children. He died at 2 PM on July 12, 1804, approximately 31 hours after being shot.

Political Aftershocks

The nation was outraged. While duels were relatively common in early America, they rarely resulted in death, and the killing of someone as prominent as Alexander Hamilton sparked widespread condemnation. The political consequences proved catastrophic for everyone involved—and reshaped American politics for the next two decades.

Hamilton’s death turned him into a Federalist martyr. Even many who had disliked his arrogance now praised his intellect, service, and sacrifice. His economic vision, already embedded in American institutions, gained a kind of posthumous authority.

For Aaron Burr, the duel destroyed him politically and socially. Murder charges were filed against him in both New York and New Jersey, though neither reached trial—a grand jury in Bergen County, New Jersey, indicted him for murder in November 1804, but the New Jersey Supreme Court quashed the indictment. Nevertheless, Burr fled to St. Simons Island, Georgia, staying at the plantation of Pierce Butler, before returning to Washington to complete his term as Vice President.

Rather than restoring his reputation as he’d hoped, the duel made Burr a pariah. He would never hold elected office again. His subsequent attempt to regain power through what historians call the “Burr Conspiracy”—an alleged plan to create an independent nation along the Mississippi River by separating territories from the United States and Spain—led to a treason trial in 1807. Chief Justice John Marshall presided and Burr was ultimately acquitted, but the trial further cemented Burr’s reputation as a dangerous schemer. He spent his later years quietly practicing law in New York.

For the Federalist Party, Hamilton’s death proved even more devastating than Burr’s personal ruin. Hamilton had been the party’s intellectual architect and most effective leader. At the time of his death, the Federalists were attempting a comeback after their national defeat in the 1800 election. Without Hamilton’s energy, strategic thinking, and ability to articulate a compelling vision for the country, the Federalists lost direction. As one historian put it, “The Federalists would be unable to find another leader as forceful and energetic as Hamilton had been, and their movement would slowly suffocate before finally petering out in the early 1820s”. The party’s decline ended what historians consider the first round of partisan struggles in American history.

An interesting footnote: while many Federalists wanted to portray Hamilton as a political martyr, Federalist clergy broke with the party line to condemn dueling itself as a violation of the sixth commandment. These ministers used Hamilton’s death as an opportunity to wage a moral crusade against the practice of dueling, helping to accelerate its decline in American culture—particularly in the northern states where it was already losing favor.

The duel produced a triple tragedy: Hamilton dead at age 47 (or 49—his birth year remains disputed), Burr politically destroyed despite being acquitted of murder charges, and the Federalist Party fatally weakened at a critical moment in American political development.

The Hamilton–Burr duel sits at the intersection of politics, personality, and culture. It reminds us that the early republic was not a calm, rational experiment run by marble statues but a volatile environment shaped by ego, fear, and ambition. Institutions were young, norms were fragile, and reputations were all-important. What began as fifteen years of professional rivalry and personal enmity ended with two brilliant men eliminating each other from the political stage, neither achieving what they’d hoped for through their fatal meeting on the heights of Weehawken.

Sources

Encyclopedia Britannica “Burr-Hamilton duel | Summary, Background, & Facts” https://www.britannica.com/event/Burr-Hamilton-duel

History.com “Aaron Burr slays Alexander Hamilton in duel” https://www.history.com/this-day-in-history/july-11/burr-slays-hamilton-in-duel

Library of Congress “Today in History – July 11” https://www.loc.gov/item/today-in-history/july-11

National Constitution Center “The Burr vs. Hamilton duel happened on this day” https://constitutioncenter.org/blog/burr-vs-hamilton-behind-the-ultimate-political-feud

National Park Service “Hamilton-Burr Duel” https://www.nps.gov/articles/000/hamilton-burr-duel.htm

PBS American Experience “Alexander Hamilton and Aaron Burr’s Duel” https://www.pbs.org/wgbh/americanexperience/features/duel-alexander-hamilton-and-aaron-burrs-duel/

The Gospel Coalition “American Prophets: Federalist Clergy’s Response to the Hamilton–Burr Duel of 1804” https://www.thegospelcoalition.org/themelios/article/american-prophets-federalist-clergys-response-to-the-hamilton-burr-duel-of-1804/

Wikipedia “Burr–Hamilton duel” https://en.wikipedia.org/wiki/Burr–Hamilton_duel

World History Encyclopedia “Hamilton-Burr Duel” https://www.worldhistory.org/article/2548/hamilton-burr-duel/

For more information about the history of dueling in early America see my earlier post: Pistols at Dawn, The Rise and Fall of the Code Duello.

Images generated by author using ChatGPT.

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as it was a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request for a leave of absence from the presidency of Congress in 1777, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship continued to deteriorate to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. Washington’s command was anything but universally admired and as the war dragged on with mixed results many critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and if his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm.  Rather than lobbying Congress or forming a counter-faction, Washington leaned heavily on reputation and restraint. He continued to communicate respectfully with Congress, emphasizing the army’s needs rather than defending his own position.  Washington did not respond with denunciations or public accusations. Instead, he handled the situation largely behind the scenes. When he learned that Conway had written a critical letter praising Gates, Washington calmly informed him that he was aware of the letter—quoting it verbatim.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court, a method that infuriated Adams, who equated such “nuances” with evasiveness and preferred direct argument, formal memorandums, and hard-edged ultimatums. Sharing lodgings outside Paris only intensified Adams’s resentment as he watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business. Adams complained that nothing would ever get done unless he did it himself, while Franklin privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.” Their French ally, Foreign Minister Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin and effectively sidelining Adams in formal diplomacy. This deepened Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf. Yet despite their mutual loss of respect, the two ultimately cooperated—often uneasily—in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris, a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom, calling Burr dangerous and untrustworthy at a dinner party. When Burr read accounts of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to withhold his fire, but he either intentionally shot into the air or missed. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South.  Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations with France (the “Quasi-War” with France was raging), Hamilton wanted war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

SOURCES

1. National Archives – Founders Online
https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters
https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams
https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson
https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal
https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel
https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson
https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia
https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers
https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – “Founding Brothers: The Revolutionary Generation”
https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

13 Stars, Betsy Ross and the Story of the American Flag

On a steamy June day in 1777, the Continental Congress took a brief break from the monumental task of running a revolution to deal with something that seems surprisingly simple in retrospect: what should the American flag look like? The resolution they passed on June 14th was refreshingly concise, stating that “the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.”

That poetic phrase about a “new constellation” turned out to be both inspiring and maddeningly vague. Congress didn’t specify how the stars should be arranged, how many points they should have, or even whether the flag should start with a red or white stripe at the top. This ambiguity led to one of the interesting aspects of early American flag history—for decades, no two flags looked exactly alike.

The 1777 resolution came out of Congress’s Marine Committee business, and at least some historians caution that it may have been understood initially as a naval ensign, not a fully standardized “national flag for all uses.”

A Constellation of Designs

The lack of official guidance meant that flag makers exercised considerable artistic freedom. Smithsonian researcher Grace Rogers Cooper found at least 17 different examples of 13-star flags dating from 1779 to around 1796, and flag expert Jeff Bridgman has documented 32 different star arrangements from the era. Some makers arranged the stars in neat rows, others formed them into a single large star, and still others created elaborate patterns that spelled out “U.S.” or formed other symbolic shapes. Versions of the 13-star flag remained in ceremonial use until the mid-1800s, and an official star pattern would not be specified until 1912.

The most famous arrangement, of course, is the Betsy Ross design with its circle of 13 stars. What many people don’t realize is that experts date the earliest known example of this circular pattern to 1792—in a painting by John Trumbull, not on an actual flag from 1776.

Did the Continental Army Actually Use This Flag?

Here’s where things get interesting and a bit murky. The short answer is: not much, and not right away. The Continental Army had been fighting for over two years before Congress even adopted the Stars and Stripes, and by that point, individual regiments had already developed their own distinctive colors and banners. These regimental flags served practical military purposes—they helped units identify each other in the chaos of battle and gave soldiers something to rally around.  Additionally, the Continental Army frequently used the Grand Union Flag (13 stripes with a British Union in the canton), which predates the 13-star design.

What’s more revealing is a series of letters from 1779—two full years after the Flag Resolution—between George Washington and Richard Peters, Secretary of the Board of War. In these letters, Peters is essentially asking Washington what flag he wants the army to use. This correspondence raises an obvious question: if Congress had settled the flag issue in 1777, why was Washington still trying to figure it out in 1779? The evidence suggests that variations of the 13-star flag were primarily used by the Navy in those early years, while the Army continued to use various regimental standards.

Navy Captain John Manley expressed this confusion perfectly when he wrote in 1779 that the United States “had no national colors” and that each ship simply flew whatever flag the captain preferred. Even as late as 1779, the War Board hadn’t settled on a standard design for the Army. When they finally wrote to Washington for his input, they proposed a flag that included a serpent and numbers representing different states—a design that never caught on.

National “stars and stripes” banners did exist during the late war years and appear in some period art and descriptions, but clear, securely dated 13-star Army battle flags are rare and often disputed. Thirteen-star flags are better documented in early federal service, such as maritime and lighthouse use in the 1790s, than in Continental Army field service before 1783.

The Betsy Ross Question

Now we come to one of America’s most enduring flag legends. The story is familiar to most Americans: in 1776, George Washington, Robert Morris, and George Ross visited Philadelphia upholsterer Betsy Ross and asked her to sew the first American flag. She suggested changing the six-pointed stars to five-pointed ones, demonstrated her one-snip technique for making a perfect five-pointed star, and then produced the first Stars and Stripes.

It’s a great story. There’s just one problem: historians have found virtually no documentary evidence to support it. The tale didn’t surface publicly until 1870—nearly a century after the supposed event—when Betsy Ross’s grandson, William Canby, presented it in a speech to the Historical Society of Pennsylvania. Canby relied entirely on family oral history, including affidavits from Ross’s daughter, granddaughter, and other relatives who claimed they had heard Betsy tell the story herself. But Canby himself admitted that his search through official records revealed nothing to corroborate the account.

Historians don’t dispute that Betsy Ross was a real person who did real work. Documentary evidence shows that on May 29, 1777, the Pennsylvania State Navy Board paid her a substantial sum for “making ships colours.” She ran a successful upholstery business and continued making flags for the government for more than 50 years. But as historian Marla Miller puts it, “The flag, like the Revolution it represents, was the work of many hands.” Modern scholars generally view the question not as whether Ross designed the flag—she almost certainly didn’t—but whether she may have been among the many people who produced early flags.

Who Really Designed It?

If not Betsy Ross, then who? The strongest candidate is Francis Hopkinson, the New Jersey delegate to the Continental Congress who also helped design the Great Seal of the United States and early American currency. In 1780, Hopkinson sent Congress a bill requesting payment for his design work, specifically mentioning “the flag of the United States of America.” He likely designed a flag with the stars arranged in rows rather than circles, and his bills for payment submitted to Congress mentioned six-pointed stars rather than the five-pointed ones that became standard.

Unfortunately for Hopkinson, Congress refused to pay him, arguing that he wasn’t the only person on the Navy Committee and therefore shouldn’t receive singular credit or compensation.

The irony is rich: Hopkinson was asking for a quarter cask of wine or £2,700 for designing what would become one of the world’s most recognizable symbols. Congress essentially told him, “Thanks, but we’re not paying.” There’s a lesson about government contracts in there somewhere.

What Survived

Of the hundreds of flags made and carried during the Revolutionary War, only about 30 are known to survive today. These rare artifacts offer fascinating glimpses into how Americans visualized their new nation. The Museum of the American Revolution brought together 17 of these original flags in a 2025 exhibition—the largest gathering of such flags since 1783.

The most significant surviving 13-star flag is probably Washington’s Headquarters Standard, a small blue silk flag measuring about two feet by three feet. It features 13 white, six-pointed stars on a blue field and descended through George Washington’s family with the tradition that it marked the General’s presence on the battlefield throughout the war. Experts consider it the earliest surviving 13-star American flag. Due to light damage, it can only be displayed on special occasions.

Other surviving flags tell different stories. The Brandywine Flag, used at the September 1777 battle of the same name, is one of the earliest stars and stripes—it has a red field with a small red-and-white American flag image in its canton.

The Dansey Flag, captured from the Delaware militia by a British soldier, was taken to England as a war trophy and remained in the soldier’s family until 1927. The flag features a green field with 13 alternating red and white stripes in its upper left corner, signifying the 13 colonies.

These and other flags weren’t just military equipment—they were powerful symbols that people fought under and, sometimes, died defending.

The Bigger Picture

What makes the story of the 13-star flag so compelling isn’t really about who sewed it or exactly when it first flew. It’s about what the flag represented in an era when the very concept of the United States was still being invented. The June 1777 resolution called for stars forming “a new constellation”—a beautiful metaphor for a new nation finding its place among the powers of the world.

The fact that no two early flags looked exactly alike might seem like a problem from our standardized modern perspective, but it tells us something important about the Revolution itself. Just as the colonies were learning to act as united states while maintaining their individual identities, flag makers across the new nation were interpreting a simple congressional resolution in their own ways, creating variations on a shared theme.

As historian Laurel Thatcher Ulrich points out, there was no “first flag” worth arguing over. The American flag evolved organically, shaped by the practical needs of the Navy, the Army, militias, and civilian flag makers who each contributed to its development. Whether Betsy Ross made one of those early flags or not, her story endures because it captures something Americans want to believe about our origins: that ordinary citizens, working in small shops and homes, helped create the symbols of the new republic.

Sources:

History.com: https://www.history.com/this-day-in-history/june-14/congress-adopts-the-stars-and-stripes

Flags of the World: https://www.crwflags.com/fotw/flags/us-1777.html

Wikipedia Flag of the United States: https://en.wikipedia.org/wiki/Flag_of_the_United_States

Museum of the American Revolution: https://www.amrevmuseum.org/

American Battlefield Trust: https://www.battlefields.org/learn/articles/short-history-united-states-flag

US History (Betsy Ross): https://www.ushistory.org/betsy/

Library of Congress “Today in History”: https://www.loc.gov/item/today-in-history/june-14/

Flag images from Wikimedia Commons

The Strange Tale of Spontaneous Human Combustion

Did you ever run into an idea so strange that you can’t quite understand how anyone ever took it seriously? Recently, while reading about historical curiosities in Pseudoscience by Kang and Pederson, I was reminded of one of the most enduring examples: spontaneous human combustion.

The classic image is always the same. Someone enters a room and finds a small pile of ashes where a person once sat. The body is nearly destroyed, yet the chair beneath it is barely scorched and the rest of the room looks strangely untouched. For centuries, this baffling scene was explained by a dramatic idea—that a person could suddenly burst into flames from the inside, without any external fire at all.

It sounds like something lifted straight from a gothic novel, but belief in spontaneous human combustion stretches back to at least the seventeenth century and reached its peak in the Victorian era. To understand why it gained such traction, it helps to look at the social attitudes of the time, the cases that convinced people it was real, and what modern forensic science eventually uncovered.

Much of the early belief rested on moral judgment rather than evidence. In the nineteenth century, spontaneous human combustion was widely accepted as a kind of divine punishment. Many of the alleged victims were described as heavy drinkers, often elderly, overweight, or socially isolated, and women were frequently overrepresented in the reports. To Victorian minds, this pattern felt meaningful. Alcohol was flammable, after all, and it seemed reasonable—at least then—to assume that a body saturated with spirits might somehow ignite. Sensational newspaper reporting amplified the mystery, presenting lurid details while glossing over inconvenient facts.

The idea gained intellectual credibility in 1746 when Paul Rolli, a Fellow of the Royal Society, formally used the term “spontaneous human combustion” while describing the death of Countess Cornelia Zangari Bandi. The involvement of a respected scientific figure gave the concept legitimacy that lingered for generations.

Several cases became canonical. Countess Bandi’s death in 1731 was described as leaving little more than ashes and partially intact legs, still clothed in stockings. In 1966, John Irving Bentley of Pennsylvania was found almost completely burned except for one leg, with his pipe discovered intact nearby. Mary Reeser, known as the “Cinder Woman,” died in Florida in 1951, leaving behind melted fat embedded in the rug near where she had been sitting. As recently as 2011, an Irish coroner ruled that spontaneous human combustion had caused the death of Michael Faherty, whose body had been found the previous December, badly burned near a fireplace in a room that showed little other fire damage. Over roughly three centuries, about two hundred such cases have been cited worldwide.

Believers proposed explanations that ranged from the scientific-sounding to the overtly theological. Alcoholism was the most popular theory, with some physicians genuinely arguing that chronic drinking made the human body combustible. Earlier medical thinking leaned on imbalances of bodily humors, while later writers speculated about unknown chemical reactions producing internal heat. Religious interpretations framed these deaths as punishment for sin. Even in modern times, a few proponents have suggested that acetone buildup in people with alcoholism, diabetes, or extreme diets could somehow trigger combustion.

The idea was so culturally embedded that Charles Dickens famously killed off the alcoholic character Mr. Krook by spontaneous combustion in Bleak House. When critics objected, Dickens defended the plot choice by citing what he believed were credible historical and medical sources.

The illusion of the supernatural persisted because the circumstances were almost perfectly misleading. Victims were typically alone, elderly, or physically impaired, unable to respond quickly to a smoldering fire. The localized damage looked impossible to the untrained eye. Potential ignition sources were often destroyed in the fire itself. And dramatic storytelling filled in the gaps left by incomplete investigations.

What actually happens in these cases is far less mystical and far more unsettling. Modern forensic science points to an explanation known as the “wick effect.” In this scenario, there is always an external ignition source—often a cigarette, candle, lamp, or fireplace ember. Once clothing catches fire, heat melts the person’s body fat. That liquefied fat soaks into the clothing, which then behaves like a candle wick. The fire burns slowly and steadily, sometimes for hours, consuming much of the body while leaving nearby objects relatively unscathed.

This effect has been demonstrated experimentally. In the 1960s, researchers at Leeds University showed that cloth soaked in human fat could sustain a slow burn for extended periods once ignited. In 1998, forensic scientist John de Haan famously replicated the effect for the BBC by burning a pig carcass wrapped in a blanket. The result closely matched classic spontaneous combustion scenes: severe destruction of the body, with extremities left behind and limited damage to the surrounding room.

The reason these fires don’t usually engulf the entire space is simple physics. Flames rise more easily than they spread sideways, and the heat output of a wick-effect fire is relatively localized. It’s similar to standing near a campfire—you can be close without catching fire yourself.

Investigators Joe Nickell and John F. Fischer examined dozens of historical cases and found that every one involved a plausible ignition source, details that earlier accounts often ignored or downplayed. When these factors are restored to the narrative, the mystery largely disappears.

As science writer Benjamin Radford has pointed out, if spontaneous human combustion were truly spontaneous, we would expect it to occur randomly and frequently, in public places as well as private ones. Instead, it consistently appears in situations involving isolation and an external heat source.

The bottom line is straightforward. There is no credible scientific evidence that humans can burst into flames without an external ignition source. What has been labeled spontaneous human combustion is better understood as a tragic combination of accidental fire and the wick effect. The myth endured because it blended moral judgment, fear, and incomplete science into a compelling story. Today, forensic investigation has replaced superstition with explanation, even if the results remain unsettling.

Spontaneous human combustion survives as a reminder of how easily mystery fills the space where evidence is thin—and how patiently applied science eventually closes that gap.


Sources and Further Reading

Peer-reviewed forensic and medical analyses are available through the National Center for Biotechnology Information, including “So-called Spontaneous Human Combustion” in the Journal of Forensic Sciences (https://pubmed.ncbi.nlm.nih.gov/21392004/) and Koljonen and Kluger’s 2012 review, “Spontaneous human combustion in the light of the 21st century,” published in the Journal of Burn Care & Research (https://pubmed.ncbi.nlm.nih.gov/22269823/).

General scientific and historical overviews can be found in Encyclopædia Britannica’s article “Is Spontaneous Human Combustion Real?” (https://www.britannica.com/story/is-spontaneous-human-combustion-real), Scientific American’s discussion of the wick effect (https://www.scientificamerican.com/blog/cocktail-party-physics/burn-baby-burn-understanding-the-wick-effect/), and Live Science’s summary of facts and theories (https://www.livescience.com/42080-spontaneous-human-combustion.html).

Accessible explanatory pieces are also available from HowStuffWorks (https://science.howstuffworks.com/science-vs-myth/unexplained-phenomena/shc.htm), History.com (https://www.history.com/articles/is-spontaneous-human-combustion-real), Mental Floss (https://www.mentalfloss.com/article/22236/quick-7-seven-cases-spontaneous-human-combustion), and All That’s Interesting (https://allthatsinteresting.com/spontaneous-human-combustion). Wikipedia’s entries on spontaneous human combustion and the wick effect provide comprehensive background and references at https://en.wikipedia.org/wiki/Spontaneous_human_combustion and https://en.wikipedia.org/wiki/Wick_effect.

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.”  To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” where status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term’s become a cultural flashpoint. Here’s the thing: real progress requires acknowledging that both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.

Bull Markets, Bear Markets and the Story Behind Wall Street’s Most Famous Animals

If you’ve ever caught a business news segment, you’ve probably heard anchors throwing around terms like “bull market” and “bear market” as if everyone just naturally knows what they mean. But beyond the basic idea that one’s good and one’s bad, the real mechanics of these market conditions—and how they got their animal nicknames—are pretty interesting.

How the Stock Market Works (The Quick Version)

Before we dive into bulls and bears, let’s cover the basics. The stock market is essentially a place where people buy and sell ownership stakes in companies. When you buy a share of stock, you’re purchasing a tiny piece of that company. The price of that share goes up or down based on how many people want to buy it versus how many want to sell it—classic supply and demand.

Companies sell shares to raise money for growth, and investors buy them hoping the company will do well and the stock price will increase. The overall “market” is tracked through indexes like the S&P 500 or Dow Jones Industrial Average, which measure how a group of major companies are performing. When most stocks are rising, we say the market is up; when most are falling, the market is down.

What Bull and Bear Markets Actually Mean

A bull market refers to a period when stock prices are rising, typically defined as a 20% or more increase from recent lows. During bull markets, investors are optimistic, companies are generally doing well, and people are more willing to take risks with their money. Bull markets usually are driven by a strong economy with low inflation and optimistic investors. Think of the economic boom of the late 1990s or the recovery after the 2008 financial crisis—those were classic bull markets where prices kept climbing for years.

A bear market is essentially the opposite: a general decline in the stock market over time, usually defined as a 20% or more price decline over at least a two-month period. During bear markets, investors get nervous, sell off their holdings, and pessimism spreads. When a decline of 10% to 20% occurs, it’s classified as a correction; every bear market passes through correction territory on the way down, but not every correction deepens into a bear market. The Great Depression, the 2008 financial crisis, and the COVID-19 pandemic’s initial impact all triggered bear markets.
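
If it helps to see those rules of thumb spelled out, here is a minimal sketch in Python of how the thresholds above translate into labels. The function name and the sample numbers are my own illustrations, not any official classification standard; the only inputs taken from the definitions above are the 10% and 20% cutoffs.

```python
def classify_market(pct_change: float) -> str:
    """Label a market move using the common rule-of-thumb thresholds.

    pct_change is the percentage change from the most recent extreme:
    a positive value measures a rise from a recent low, a negative
    value measures a drop from a recent high.
    """
    if pct_change >= 20:
        return "bull market: a 20%+ rise from a recent low"
    if pct_change <= -20:
        return "bear market: a 20%+ decline from a recent high"
    if pct_change <= -10:
        return "correction: a 10-20% decline"
    return "ordinary fluctuation"

# Hypothetical examples, not real index data:
for move in (25, -12, -28):
    print(move, "->", classify_market(move))
```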

The Colorful History Behind the Terms

Now here’s where things get interesting. These terms didn’t come from some modern marketing genius—they trace back to 18th century London, and the story involves everything from old proverbs to violent animal fights to one of history’s biggest financial scandals.

The “bear” term came first. Etymologists point to an old proverb warning that it is not wise “to sell the bear’s skin before one has caught the bear.” This saying was about the foolishness of counting on something before you actually have it. By the early 1700s, traders who engaged in short selling (betting that prices would fall) were called “bear-skin jobbers” because they sold a bear’s skin—the shares—before catching the bear. The term eventually got shortened to just “bears.”

The real watershed moment came with the South Sea Bubble of 1720. The South Sea Company was a British joint stock company founded by an act of Parliament in 1711, and in 1720, the company assumed most of the British national debt and convinced investors to give up state annuities for company stock sold at a very high premium. When everything collapsed, share prices dropped dramatically, starting a “bear market,” and the story became the topic of many literary works and went down in history as an infamous metaphor.

As for “bull,” the origins are a bit murkier. The first known instance of the market term “bull” popped up in 1714, shortly after the “bear” term emerged. Most historians think it arose as a natural counterpoint to “bear,” possibly influenced by bull-baiting and bear-baiting, two animal fighting sports popular at the time—though I should note that’s somewhat speculative.

There’s also a popular explanation about how the animals attack: bears swipe downward with their paws while bulls thrust upward with their horns, which nicely mirrors market movements. While that’s a helpful memory device, it’s probably more of a convenient coincidence or a retroactive description than the actual origin. The term “bull” originally meant a speculative purchase in the expectation that stock prices would rise, and was later applied to the person making such purchases.

Why This Still Matters

These metaphors have stuck around for three centuries because they work. They’re visceral and easy to remember—you can picture a charging bull or a hibernating bear without needing an economics degree. They also capture something real about market psychology: the aggressive optimism of bulls pushing prices up versus the defensive pessimism of bears hunkering down.

Understanding these terms helps you follow financial news and, more importantly, recognize when markets are shifting. Knowing you’re in a bull market might make you less surprised by rising prices, while recognizing a bear market can help you avoid panic-selling when things look grim.

Bull and bear markets are among those terms I’ve heard for years without ever knowing their origin. This article is my attempt to explain them to myself.

The Sugar Act of 1764: The Tax Cut That Sparked a Revolution

Imagine a time when people rose up in protest because a tax was being lowered. Welcome to the world of the Sugar Act.

The Sugar Act of 1764 stands as one of the most ironic moments in the history of taxation. Here Britain was actually lowering a tax, and yet colonists reacted with a fury that would help spark a revolution. To understand this paradox, we need to recognize that the new act represented something far more threatening than any previous attempt by Britain to regulate its American colonies.

The Old System: Benign Neglect

For decades before 1764, Britain had maintained what historians call “salutary neglect” toward its American colonies. The Molasses Act of 1733 had imposed a steep duty of six pence per gallon on foreign molasses imported into the colonies. On paper, this seemed like a significant burden for the rum-distilling industry, which depended heavily on cheap molasses from French and Spanish Caribbean islands. In practice, though, the tax was rarely collected. Colonial merchants either bribed customs officials or simply smuggled the molasses past them. The British government essentially looked the other way, and everyone profited.

This informal arrangement worked because Britain’s primary interest in the colonies was commercial, not fiscal. The Navigation Acts required colonists to ship certain goods only to Britain and to buy manufactured goods from British merchants, which enriched British traders and manufacturers without requiring aggressive tax collection in America. As long as this system funneled wealth toward London, Parliament didn’t care much about collecting relatively small customs duties across the Atlantic.

Everything Changed in 1763

The Seven Years’ War (which Americans call the French and Indian War) changed this comfortable arrangement entirely. Britain won decisively, driving France out of North America and gaining vast new territories. But victory came with a staggering price tag. Britain’s national debt had nearly doubled to £130 million, and annual interest payments alone consumed half the government’s budget. Meanwhile, Britain now needed to maintain 10,000 troops in North America to defend its expanded empire and manage relations with Native American tribes.

Prime Minister George Grenville faced a political problem. British taxpayers, already heavily burdened, were in no mood for additional taxes. The logic seemed obvious: since the colonies had benefited from the war’s outcome and still required military protection, they should help pay for their own defense. Americans paid far lower taxes than their counterparts in Britain—by some estimates, British residents paid 26 times more per capita in taxes than colonists did.

What the Act Actually Did

The Sugar Act (officially the American Revenue Act of 1764) approached colonial taxation differently than anything before it. First, it cut the duty on foreign molasses from six pence to three pence per gallon—a 50% reduction. Grenville calculated, reasonably, that merchants might actually pay a three-pence duty rather than risk getting caught smuggling, whereas the six-pence duty had been so high it encouraged universal evasion.

But the Act did far more than adjust molasses duties. It added or increased duties on foreign textiles, coffee, indigo, and wine imported into the colonies. It tightened regulations around the colonial lumber trade and banned the import of foreign rum entirely. Most significantly, the Act included elaborate provisions designed to strictly enforce these duties for the first time.

The enforcement mechanisms represented the real revolution in British policy. Ship captains now had to post bonds before loading cargo and had to maintain detailed written cargo lists. Naval patrols increased dramatically. Smugglers faced having their ships and cargo seized.

Significantly, the burden of proof was shifted to the accused, who were required to prove their innocence, a reversal of traditional British justice. Most controversially, accused smugglers would be tried in vice-admiralty courts, which had no juries and whose judges received a cut of any fines levied.

The Paradox of the Lower Tax

So why did colonists react so angrily to a tax cut? The answer reveals the fundamental shift in the British-American relationship that the Sugar Act represented.

First, the issue wasn’t the tax rate. It was the certainty of collection. A six-pence tax that no one paid was infinitely preferable to a three-pence tax rigorously enforced. New England’s rum distilling industry, which employed thousands of distillery workers and sailors, depended on cheap molasses from the French West Indies. Even at three pence per gallon, the tax significantly increased operating costs. Many merchants calculated they couldn’t remain profitable if they had to pay it.

Second, and more importantly, colonists recognized that the Act’s purpose had changed relationships. Previous trade regulations, even if they involved taxes, were ostensibly about regulating commerce within the empire. The Sugar Act openly stated its purpose was raising revenue—the preamble declared it was “just and necessary that a revenue be raised” in America. This might seem like a technical distinction, but to colonists it mattered enormously. British constitutional theory held that subjects could only be taxed by their own elected representatives. Colonists elected representatives to their own assemblies but sent no representatives to Parliament. Trade regulations fell under Parliament’s legitimate authority to govern imperial commerce, but taxation for revenue was something else entirely.

Third, the enforcement mechanisms offended colonial sensibilities about justice and traditional British rights. The vice-admiralty courts denied jury trials, which colonists viewed as a fundamental right of British subjects. Having to prove your innocence rather than being presumed innocent violated another core principle. Customs officials and judges profiting from convictions created obvious incentives for abuse.

Implementation and Colonial Response

The Act took effect in September 1764, and Grenville paired it with an aggressive enforcement campaign. The Royal Navy assigned 27 ships to patrol American waters. Britain appointed new customs officials and instructed them to do their jobs strictly rather than accept bribes. Admiralty courts in Halifax, Nova Scotia, became particularly notorious: colonists had to travel hundreds of miles to defend themselves in a court with no jury and before a judge whose income came from convictions.

Colonists responded immediately. Boston merchants drafted a protest arguing that the act would devastate their trade. They explained that New England’s economy depended on a complex triangular trade: they sold lumber and food to the Caribbean in exchange for molasses, which they distilled into rum, which they sold to Africa for slaves, who were sold to Caribbean plantations for molasses, and the cycle repeated. Taxing molasses would break this chain and impoverish the region.

But the economic arguments quickly evolved into constitutional ones. Lawyer James Otis argued that “taxation without representation is tyranny”—a phrase that would echo through the coming decade. Colonial assemblies began passing resolutions asserting their exclusive right to tax their own constituents. They didn’t deny Parliament’s authority to regulate trade, but they drew a clear line: revenue taxation required representation.

The protests went beyond rhetoric. Colonial merchants organized boycotts of British manufactured goods. Women’s groups pledged to wear homespun cloth rather than buy British textiles. These boycotts caused enough economic pain in Britain that London merchants began lobbying Parliament for relief.

The Road to Revolution

The Sugar Act’s significance extends far beyond its immediate economic impact. It established precedents and patterns that would define the next decade of imperial crisis.

Most fundamentally, it shattered the comfortable arrangement of salutary neglect. Once Britain demonstrated it intended to actively govern and tax the colonies, the relationship could never return to its previous informality. The colonists’ constitutional objections—no taxation without representation, right to jury trials, presumption of innocence—would be repeated with increasing urgency as Parliament passed the Stamp Act (1765), Townshend Acts (1767), and Tea Act (1773).

The Sugar Act also revealed the practical difficulties of governing an empire across 3,000 miles of ocean. The vice-admiralty courts became symbols of distant, unaccountable power. When colonists couldn’t get satisfaction through established legal channels, they increasingly turned to extralegal methods, including committees of correspondence, non-importation agreements, and eventually armed resistance.

Perhaps most importantly, the Sugar Act forced colonists to articulate a political theory that ultimately proved incompatible with continued membership in the British Empire. Once they agreed to the principle that they could only be taxed by their own elected representatives, and that Parliament’s authority over them was limited to trade regulation, the logic led inexorably toward independence. Britain couldn’t accept colonial assemblies as co-equal governing bodies since Parliament claimed supreme authority over all British subjects. The colonists couldn’t accept taxation without representation since they claimed the rights of freeborn Englishmen. These positions couldn’t be reconciled.

The Sugar Act of 1764 represents the point where the British Empire’s century-long success in North America began to unravel. By trying to make the colonies pay a modest share of imperial costs through what seemed like reasonable means, Britain inadvertently set in motion forces that would break the empire apart just twelve years later.

Sources

Mount Vernon Digital Encyclopedia – Sugar Act https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/sugar-act/ Provides overview of the Act’s provisions, economic context, and relationship to British debt from the Seven Years’ War. Includes information on tax burden comparisons between Britain and the colonies.

Britannica – Sugar Act https://www.britannica.com/event/Sugar-Act Covers the specific provisions of the Act, enforcement mechanisms, vice-admiralty courts, and the shift from the Molasses Act of 1733. Useful for technical details of the legislation.

History.com – Sugar Act https://www.history.com/topics/american-revolution/sugar-act Discusses colonial constitutional objections, the “taxation without representation” argument, and the enforcement provisions including burden of proof reversal and jury trial denial.

American Battlefield Trust – Sugar Act https://www.battlefields.org/learn/articles/sugar-act Details colonial response including boycotts, James Otis’s arguments, and the triangular trade system that the Act disrupted.

Additional Recommended Sources

Library of Congress – The Sugar Act https://www.loc.gov/collections/continental-congress-and-constitutional-convention-broadsides/articles-and-essays/continental-congress-broadsides/broadsides-related-to-the-sugar-act/ Primary source collection including contemporary colonial broadsides and protests against the Act.

National Archives – The Sugar Act (Primary Source Text) https://founders.archives.gov/about/Sugar-Act The actual text of the American Revenue Act of 1764, useful for verifying specific provisions and language.

Yale Law School – Avalon Project: Resolutions of the Continental Congress (October 19, 1765) https://avalon.law.yale.edu/18th_century/resolu65.asp Colonial responses to the Sugar and Stamp Acts, showing how the arguments evolved.

Massachusetts Historical Society – James Otis’s Rights of the British Colonies Asserted and Proved (1764) https://www.masshist.org/digitalhistory/revolution/taxation-without-representation Primary source for the “taxation without representation” argument that emerged from Sugar Act opposition.

Colonial Williamsburg Foundation – Sugar Act of 1764 https://www.colonialwilliamsburg.org/learn/deep-dives/sugar-act-1764/ Discusses economic impact on colonial merchants and the rum distilling industry.

Slavery and the Constitutional Convention: The Compromise That Shaped a Nation

When fifty-five delegates gathered in Philadelphia during the sweltering summer of 1787, they faced a challenge that would haunt American politics for the next eight decades. The question wasn’t whether slavery was morally right—many delegates privately acknowledged its evil—but whether a unified nation could exist with slavery as a part of it. That summer, the institution of slavery nearly killed the Constitution before it was born.

The Battle Lines

The convention revealed a stark divide. On one side stood delegates who spoke forcefully against slavery, though they represented a minority voice. Gouverneur Morris of Pennsylvania delivered some of the most scathing condemnations, calling slavery a “nefarious institution” and “the curse of heaven on the states where it prevailed.” According to James Madison’s notes, Morris argued passionately that counting enslaved people for representation would mean that someone “who goes to the Coast of Africa, and in defiance of the most sacred laws of humanity tears away his fellow creatures from their dearest connections & damns them to the most cruel bondages, shall have more votes in a Government instituted for protection of the rights of mankind.”

Luther Martin of Maryland, himself a slaveholder, joined Morris in opposition. He declared the slave trade “inconsistent with the principles of the revolution and dishonorable to the American character.” Even George Mason of Virginia, who owned over 200 enslaved people, denounced slavery at the convention, warning that “every master of slaves is born a petty tyrant” and that it would bring “the judgment of heaven on a country.”

The Southern Coalition

Facing these critics stood delegates from the Deep South—primarily South Carolina and Georgia—who made it abundantly clear that protecting slavery was non-negotiable. The South Carolina delegation was particularly unified and aggressive in defending the institution. All four of their delegates—John Rutledge, Charles Pinckney, Charles Cotesworth Pinckney, and Pierce Butler—owned slaves, and they spoke with one voice.

Charles Cotesworth Pinckney stated bluntly: “South Carolina and Georgia cannot do without slaves.” John Rutledge framed it even more starkly: “The true question at present is, whether the Southern States shall or shall not be parties to the Union.” The message was unmistakable—attempt to restrict slavery, and there would be no Constitution and perhaps no United States.

The Southern states didn’t just defend slavery; they threatened to walk out repeatedly. When debates over the slave trade heated up on August 22, delegates from North Carolina, South Carolina, and Georgia stated they would “never be such fools as to give up” their right to import enslaved Africans.  These weren’t idle threats—they were credible enough to force compromise.

The Three-Fifths Compromise

The central flashpoint came over representation in Congress. The new Constitution would base representation on population, but should enslaved people count? Southern states wanted every enslaved person counted fully, which would dramatically increase their congressional power. Northern states argued that enslaved people—who had no rights and couldn’t vote—shouldn’t count at all.

The three-fifths ratio had actually been debated before. Back in 1783, Congress had considered using it to calculate state tax obligations under the Articles of Confederation, though that proposal failed. James Wilson of Pennsylvania resurrected the idea at the Constitutional Convention, suggesting that representation be based on the free population plus three-fifths of “all other persons”—the euphemism they used to avoid writing the word “slave” in the Constitution.

The compromise passed eight states to two. New Jersey and Delaware are generally identified as the states voting against it, while New Hampshire is not listed as taking part in the vote. Rhode Island did not send a delegation to the convention, and by the time of the vote New York no longer had a functioning delegation.

Though the South ultimately accepted the compromise, it wasn’t what they wanted. Southern delegates had pushed to count enslaved people equally with free persons for purposes of representation, even as those same people were ignored on every question of human rights. The three-fifths ratio was a reduction from their demands—a limitation on slave state power, though it still gave them substantial advantage. With about 93% of the nation’s enslaved population concentrated in just five southern states, this compromise increased the South’s congressional delegation by 42%.

James Madison later recognized the compromise’s significance. He wrote after the convention: “It seems now to be pretty well understood that the real difference of interests lies not between the large and small but between the northern and southern states. The institution of slavery and its consequences form the line of discrimination.”

Could the Constitution Have Happened Without It?

Here’s where I need to speculate, but I’m fairly confident in this assessment: no, the Constitution would not have been ratified without the three-fifths compromise and related concessions on slavery.

The evidence is overwhelming. South Carolina and Georgia delegates stated explicitly and repeatedly that they would not join any union that restricted slavery. Alexander Hamilton himself later acknowledged that “no union could possibly have been formed” without the three-fifths compromise. Even delegates who despised slavery, like Roger Sherman of Connecticut, argued it was “better to let the Southern States import slaves than to part with them.”

The convention negotiated three major slavery compromises, all linked. Beyond the three-fifths clause, they agreed Congress couldn’t ban the international slave trade until 1808, and they included the Fugitive Slave Clause requiring the return of escaped enslaved people even from free states. These deals were struck together on August 29, 1787, in what Madison’s notes reveal was a package negotiation between northern and southern delegates.

Without these compromises, the convention would likely have collapsed. The alternative wouldn’t have been a better Constitution—it would have been no Constitution at all, potentially leaving the thirteen states as separate nations or weak confederations. Whether that would have been preferable is a profound counterfactual question that historians still debate.

The Impact on Early American Politics

The three-fifths compromise didn’t just affect one document—it shaped American politics for decades. Its effects were immediate and substantial.

The most famous early example came in the presidential election of 1800. Thomas Jefferson defeated John Adams in what’s often called the “Revolution of 1800”—the first peaceful transfer of power between opposing political parties. But Jefferson’s victory owed directly to the three-fifths compromise. Virginia’s enslaved population gave the state extra electoral votes that proved decisive. Historian Garry Wills has speculated that without these additional slave-state votes, Jefferson would have lost. Pennsylvania had a free population 10% larger than Virginia’s, yet received 20% fewer electoral votes because Virginia’s numbers were inflated by the compromise.

The impact extended far beyond that single election. Research shows the three-fifths clause changed the outcome of over 55% of legislative votes in the Sixth Congress (1799-1801). The additional southern representatives—about 18 more than their free population warranted—gave the South what became known as the “Slave Power” in Congress.

This power influenced major legislation throughout the antebellum period. The Indian Removal Act of 1830, which forcibly relocated Native Americans to open land for plantation agriculture, passed because of margins provided by these extra southern representatives. The Missouri Compromise, the Kansas-Nebraska Act, and numerous other slavery-related measures bore the fingerprints of this constitutional imbalance.

The compromise also affected Supreme Court appointments and federal patronage. Southern-dominated Congresses ensured pro-slavery justices and policies that protected the institution. The sectional tensions it created led directly to later compromises—the Missouri Compromise of 1820, the Compromise of 1850—each one a temporary bandage on a wound that wouldn’t heal.

By the 1850s, the artificial political power granted to slave states had become intolerable to many northerners. When Abraham Lincoln won the presidency in 1860 without carrying a single southern state, southern political leaders recognized they had lost control of the federal government. Senator Louis Wigfall of Texas complained that non-slaveholding states now controlled Congress and the Electoral College. Eleven southern states seceded in large part because they believed the three-fifths compromise no longer protected their interests.

The Bitter Legacy

The framers consciously avoided using the words “slave” or “slavery” in the Constitution, recognizing it would “sully the document.” But the euphemisms fooled no one. They had built slavery into the structure of American government, trading moral principles for political union.

The Civil War finally resolved what the Constitutional Convention had delayed. The Thirteenth Amendment abolished slavery in 1865, but not until 1868 did the Fourteenth Amendment finally strike the three-fifths clause from the Constitution, requiring that representation be based on counting the “whole number of persons” in each state.

Was it worth it? That’s ultimately a question of values. The Constitution created a stronger national government that eventually abolished slavery, but it took 78 years and a war that killed over 600,000 Americans. As Thurgood Marshall noted on the Constitution’s bicentennial, the framers “consented to a document which laid a foundation for the tragic events which were to follow.”

The convention delegates knew what they were doing. They chose union over justice, pragmatism over principle. Whether that choice was necessary, wise, or moral remains one of the most contested questions in American history.

____________________________________________________

Sources

  1. https://www.battlefields.org/learn/articles/slavery-and-constitution
  2. https://en.wikipedia.org/wiki/Luther_Martin
  3. https://schistorynewsletter.substack.com/p/7-october-2024
  4. https://www.americanacorner.com/blog/constitutional-convention-slavery
  5. https://www.nps.gov/articles/000/constitutionalconvention-august22.htm
  6. https://en.wikipedia.org/wiki/Three-fifths_Compromise
  7. https://www.brennancenter.org/our-work/analysis-opinion/electoral-colleges-racist-origins
  8. https://www.gilderlehrman.org/history-resources/teaching-resource/historical-context-constitution-and-slavery
  9. https://www.nps.gov/articles/000/constitutionalconvention-august29.htm
  10. https://www.lwv.org/blog/three-fifths-compromise-and-electoral-college
  11. https://www.aaihs.org/a-compact-for-the-good-of-america-slavery-and-the-three-fifths-compromise-part-ii/
