
Bull Markets, Bear Markets and the Story Behind Wall Street’s Most Famous Animals

If you’ve ever caught a business news segment, you’ve probably heard anchors throwing around terms like “bull market” and “bear market” as if everyone just naturally knows what they mean. But beyond the basic idea that one’s good and one’s bad, the real mechanics of these market conditions—and how they got their animal nicknames—are pretty interesting.

How the Stock Market Works (The Quick Version)

Before we dive into bulls and bears, let’s cover the basics. The stock market is essentially a place where people buy and sell ownership stakes in companies. When you buy a share of stock, you’re purchasing a tiny piece of that company. The price of that share goes up or down based on how many people want to buy it versus how many want to sell it—classic supply and demand.

Companies sell shares to raise money for growth, and investors buy them hoping the company will do well and the stock price will increase. The overall “market” is tracked through indexes like the S&P 500 or Dow Jones Industrial Average, which measure how a group of major companies are performing. When most stocks are rising, we say the market is up; when most are falling, the market is down.

What Bull and Bear Markets Actually Mean

A bull market refers to a period when stock prices are rising, typically defined as a 20% or more increase from recent lows. During bull markets, investors are optimistic, companies are generally doing well, and people are more willing to take risks with their money. Bull markets are usually driven by a strong economy, low inflation, and broad investor confidence. Think of the economic boom of the late 1990s or the recovery after the 2008 financial crisis—those were classic bull markets where prices kept climbing for years.

A bear market is essentially the opposite: a general decline in the stock market over time, usually defined as a 20% or more price decline over at least a two-month period. During bear markets, investors get nervous, sell off their holdings, and pessimism spreads. When a decline of 10% to 20% occurs, it’s classified as a correction; by definition, a market passes through correction territory on its way to a bear market. The Great Depression, the 2008 financial crisis, and the COVID-19 pandemic’s initial impact all triggered bear markets.
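
The 10% and 20% thresholds are just arithmetic on how far prices have moved from a recent peak or low. Here’s a minimal sketch in Python, using a made-up price series rather than real data, of how those rules of thumb translate into labels:

```python
# Illustrative sketch only: labels a market's state from its drawdown off the
# most recent peak, using the common 10% / 20% rules of thumb described above.
# The price series is invented for demonstration.

def classify_market(prices):
    """Return a rough label based on the latest drawdown or run-up."""
    peak = max(prices)                     # highest level seen so far
    trough = min(prices)                   # lowest level seen so far
    latest = prices[-1]

    drawdown = (peak - latest) / peak      # fall from the peak, as a fraction
    runup = (latest - trough) / trough     # rise off the low, as a fraction

    if drawdown >= 0.20:
        return "bear market (down 20%+ from the peak)"
    if drawdown >= 0.10:
        return "correction (down 10-20% from the peak)"
    if runup >= 0.20:
        return "bull market (up 20%+ from the low)"
    return "no label (moves are smaller than the usual thresholds)"

if __name__ == "__main__":
    hypothetical_index = [100, 112, 125, 118, 97]   # made-up closing levels
    print(classify_market(hypothetical_index))      # -> bear market (down 20%+ from the peak)
```

Index providers track rolling highs and lows over specific windows and argue about exact dating, so treat this as an illustration of the definitions rather than an official methodology.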

The Colorful History Behind the Terms

Now here’s where things get interesting. These terms didn’t come from some modern marketing genius—they trace back to 18th century London, and the story involves everything from old proverbs to violent animal fights to one of history’s biggest financial scandals.

The “bear” term came first. Etymologists point to an old proverb warning that it is not wise “to sell the bear’s skin before one has caught the bear.” This saying was about the foolishness of counting on something before you actually have it. By the early 1700s, traders who engaged in short selling (betting that prices would fall) were called “bear-skin jobbers” because they sold a bear’s skin—the shares—before catching the bear. The term eventually got shortened to just “bears.”

The real watershed moment came with the South Sea Bubble of 1720. The South Sea Company was a British joint-stock company founded by an act of Parliament in 1711, and in 1720, the company assumed most of the British national debt and convinced investors to give up state annuities for company stock sold at a very high premium. When everything collapsed, share prices dropped dramatically, starting a “bear market.” The episode became the subject of many literary works and went down in history as a byword for speculative ruin.

As for “bull,” the origins are a bit murkier. The first known instance of the market term “bull” popped up in 1714, shortly after the “bear” term emerged. Most historians think it arose as a natural counterpoint to “bear,” possibly influenced by bull-baiting and bear-baiting, two animal fighting sports popular at the time—though I should note that’s somewhat speculative.

There’s also a popular explanation about how the animals attack: bears swipe downward with their paws while bulls thrust upward with their horns, which nicely mirrors market movements. While that’s a helpful memory device, it’s probably more of a convenient coincidence or a retroactive description than the actual origin. The term “bull” originally meant a speculative purchase in the expectation that stock prices would rise, and was later applied to the person making such purchases.

Why This Still Matters

These metaphors have stuck around for three centuries because they work. They’re visceral and easy to remember—you can picture a charging bull or a hibernating bear without needing an economics degree. They also capture something real about market psychology: the aggressive optimism of bulls pushing prices up versus the defensive pessimism of bears hunkering down.

Understanding these terms helps you follow financial news and, more importantly, recognize when markets are shifting. Knowing you’re in a bull market might make you less surprised by rising prices, while recognizing a bear market can help you avoid panic-selling when things look grim.

Bull and bear markets are among those terms I’ve heard for years without ever knowing their origin. This article is my attempt to explain it to myself.

Supply-Side Economics and Trickle-Down: What Actually Happened?

The Basic Question

You’ve probably heard politicians arguing about tax cuts—some promising they’ll supercharge the economy, others dismissing them as giveaways to the rich. These debates usually involve two terms that get thrown around like political footballs: “supply-side economics” and “trickle-down economics.” But what do these terms actually mean, and more importantly, do they work? After four decades of real-world experiments, we finally have enough data to answer that question.

Understanding Supply-Side Economics

Supply-side economics is a legitimate economic theory that emerged in the 1970s when the U.S. economy was struggling with both high inflation and high unemployment—a combination that traditional economic theories said shouldn’t happen. The core idea is straightforward: economic growth comes from producing more goods and services (the “supply” side), not just from boosting consumer demand.

The theory rests on three main pillars. First, lower taxes—the thinking is that if people and businesses keep more of their money, they’ll work harder, invest more, and create jobs. According to economist Arthur Laffer’s famous curve, there’s supposedly a sweet spot where lower tax rates can actually generate more government revenue because the economy grows so much. Second, less regulation removes government restrictions so businesses can innovate and operate more efficiently. Third, smart monetary policy keeps inflation in check while maintaining enough money in the economy to fuel growth.
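
Arthur Laffer’s curve is easy to sketch numerically. The toy revenue function below is purely illustrative: the functional form and the “response” parameter are invented for the example, not estimates of any real tax system. It just shows the shape of the claim: revenue is zero at a 0% rate, zero at a 100% rate, and peaks somewhere in between.

```python
# Stylized Laffer-curve sketch. The functional form and the "response" parameter
# are invented for illustration; they are not estimates of any real economy.

def tax_revenue(rate, base_at_zero_tax=100.0, response=1.0):
    """Revenue = rate * taxable base, where the base shrinks as the rate rises."""
    taxable_base = base_at_zero_tax * (1 - rate) ** response
    return rate * taxable_base

rates = [i / 100 for i in range(101)]
revenues = [tax_revenue(r) for r in rates]
peak_rate = rates[revenues.index(max(revenues))]

print(f"Revenue at a 0% rate:   {revenues[0]:.1f}")    # 0.0: no tax, no revenue
print(f"Revenue at a 100% rate: {revenues[-1]:.1f}")   # 0.0: the taxable base vanishes
print(f"In this toy model, revenue peaks near a {peak_rate:.0%} rate")
```

The argument among economists has never really been about whether such a curve exists in principle; it’s about where actual tax rates sit on it, and that is exactly what a sketch like this can’t tell you.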

All of this sounds reasonable in theory. After all, who wouldn’t work harder if they kept more of their paycheck?

The Political Rebranding: Enter “Trickle-Down”

Here’s where economic theory meets political messaging. “Trickle-down economics” isn’t an academic term—it’s essentially a catchphrase, and not a complimentary one. Critics use it to describe supply-side policies when those policies mainly benefit wealthy people and corporations. The idea behind the name: give tax breaks to rich people and big companies, and the benefits will eventually “trickle down” to everyone else through job creation, higher wages, and economic growth.

Here’s the interesting part: no economist actually calls their theory “trickle-down economics.” Even David Stockman, President Reagan’s own budget director, later admitted that “supply-side” was basically a rebranding of “trickle-down” to make tax cuts for the wealthy easier to sell politically. So while they’re not identical concepts, they’re two sides of the same coin.

The Reagan Revolution: Testing the Theory

Ronald Reagan became president in 1981 and implemented the biggest supply-side experiment in U.S. history. He slashed the top tax rate from 70% down to 50%, and eventually to just 28%, arguing this would unleash economic growth that would lift all boats.

The results were genuinely mixed. On one hand, the economy created about 20 million jobs during Reagan’s presidency, unemployment fell from 7.6% to 5.5%, and the economy grew by 26% over eight years. Those aren’t small achievements.

But the picture gets more complicated when you look deeper. The tax cuts didn’t pay for themselves as promised—they reduced government revenue by about 9% initially. Reagan had to backtrack and raise taxes multiple times in 1982, 1983, 1984, and 1987 to address the mounting deficit problem. Income inequality increased significantly during this period, and surprisingly, the poverty rate at the end of Reagan’s term was essentially the same as when he started. Perhaps most telling, government debt more than doubled as a percentage of the economy.

There’s another wrinkle worth mentioning: much of the economic recovery happened because Federal Reserve Chairman Paul Volcker had already broken the back of inflation through tight monetary policy before Reagan’s tax cuts took effect. Disentangling how much credit Reagan’s policies deserve versus Volcker’s groundwork is genuinely difficult.

The Pattern Repeats

The story didn’t end with Reagan. George W. Bush enacted major tax cuts in 2001 and 2003, especially benefiting wealthy Americans. The result? Economic growth remained sluggish, deficits ballooned, and income inequality continued its upward march.

Then there’s Bill Clinton—the plot twist in this story. In 1993, Clinton actually raised taxes on the wealthy, pushing the top rate from 31% back up to 39.6%. Conservative economists predicted economic disaster. Instead, the economy boomed with what was then the longest sustained growth period in U.S. history, creating 22.7 million jobs. Even more remarkably, the government ran a budget surplus for the first time in decades.

Donald Trump’s 2017 tax cuts, focused heavily on corporations, showed minimal wage growth for workers while generating significant stock buybacks that primarily benefited shareholders—and yes, larger deficits. Trump’s subsequent economic policies in his second term have been characterized by such volatility that reasonable long-term assessments remain difficult.

The Kansas Experiment: A Modern Test Case

At the state level, Kansas Governor Sam Brownback implemented one of the boldest modern experiments in supply-side policy between 2012 and 2017, dramatically slashing income taxes especially for businesses. Proponents called it a “real live experiment” that would demonstrate supply-side principles in action.

Instead of unleashing growth, Kansas faced severe budget shortfalls that forced cuts to education and infrastructure. Economic growth actually lagged behind neighboring states that didn’t implement such aggressive cuts, and the state legislature eventually reversed many of the tax reductions. This case has become a frequently cited cautionary tale for critics of supply-side policies.

What Does Half a Century of Data Show?

After 50 years of real-world experiments, researchers finally have enough data to move beyond political rhetoric. A comprehensive study analyzed tax policy changes across 18 developed countries over five decades, looking at what actually happened after major tax cuts for the wealthy.

The findings are remarkably consistent. Tax cuts for the rich reliably increase income inequality—no surprise there. But they show no significant effect on overall economic growth rates and no significant effect on unemployment. Perhaps most damaging to the theory, they don’t “pay for themselves” through increased growth. At best, about one-third of lost revenue gets recovered through expanded economic activity.

In simpler terms: when you cut taxes for wealthy people, wealthy people get wealthier. The promised broader benefits largely fail to materialize. The 2022 World Inequality Report reinforced these conclusions, finding that the world’s richest 10% continue capturing the vast majority of all economic gains, while the bottom half of the population holds just 2% of all wealth.

Why the Theory Doesn’t Match Reality

When you think about it logically, the disconnect makes sense. If you give a tax cut to someone who’s already wealthy, they’ll probably save or invest most of it—they were already buying what they wanted and needed. Their daily spending habits don’t change much. But if you give money to someone who’s struggling to pay bills or afford necessities, they’ll spend it immediately, directly stimulating economic activity.

Economists call this concept “marginal propensity to consume,” and it explains why giving tax breaks to working and middle-class people actually does more to boost the economy than supply-side cuts focused on the wealthy. A dollar in the hands of someone who needs to spend it has more immediate economic impact than a dollar added to an already-substantial investment portfolio.
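
A minimal sketch of that idea uses the textbook spending multiplier 1 / (1 - MPC), where MPC is the fraction of each extra dollar that gets spent rather than saved. The MPC values here are assumptions picked to illustrate the contrast, not estimates:

```python
# Toy illustration of the spending multiplier 1 / (1 - MPC).
# The MPC values below are assumptions chosen for the example, not measured figures.

def total_spending_from(initial_dollars, mpc, rounds=1000):
    """Sum successive rounds of spending: x + x*mpc + x*mpc**2 + ..."""
    total, injection = 0.0, float(initial_dollars)
    for _ in range(rounds):
        total += injection
        injection *= mpc
    return total

scenarios = [("a household that spends most of any extra dollar", 0.9),
             ("a household that saves or invests most of it", 0.3)]

for label, mpc in scenarios:
    total = total_spending_from(1_000, mpc)
    # Closed form for comparison: 1_000 / (1 - mpc)
    print(f"$1,000 to {label} (MPC = {mpc}): about ${total:,.0f} of total spending")
```

Real economies leak (taxes, imports, timing), so actual multipliers are smaller and hotly debated, but the direction of the effect is the point: the same dollar generates more immediate activity when it goes to someone likely to spend it.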

The Bottom Line

After 40-plus years of repeated experiments, the pattern is clear. Supply-side policies and trickle-down approaches consistently increase deficits, widen inequality, and fail to significantly boost overall economic growth or create more jobs than alternative policies. Meanwhile, periods with higher taxes on the wealthy, like the Clinton years, saw strong growth, robust job creation, and balanced budgets.

The Nuance Worth Keeping

None of this means all tax cuts are bad or that high taxes are always good—economics is rarely that simple. The critical questions are: who receives the tax cuts, and what outcomes do you realistically expect? Targeted tax cuts for working families, small businesses, or specific industries facing genuine challenges can serve as effective policy tools. Child tax credits, research and development incentives, or relief for struggling sectors might accomplish specific goals.

But the evidence accumulated over four decades is clear: broad tax cuts focused primarily on the wealthy and large corporations don’t deliver the promised economic benefits for everyone else. The benefits don’t trickle down in any meaningful way.

You’ll keep hearing these arguments for years to come. Politicians will continue promising that tax cuts for businesses and the wealthy will boost the entire economy. Now you know what the actual evidence shows, and you can judge those promises accordingly.



The Sugar Act of 1764: The Tax Cut That Sparked a Revolution

Imagine a time when people rose up to protest a tax being lowered. Welcome to the world of the Sugar Act.

The Sugar Act of 1764 stands as one of the most ironic moments in the history of taxation. Here Britain was actually lowering a tax, and yet colonists reacted with a fury that would help spark a revolution. To understand this paradox, we need to recognize that the new act represented something far more threatening than any previous attempt by Britain to regulate its American colonies.

The Old System: Benign Neglect

For decades before 1764, Britain had maintained what historians call “salutary neglect” toward its American colonies. The Molasses Act of 1733 had imposed a steep duty of six pence per gallon on foreign molasses imported into the colonies. On paper, this seemed like a significant burden for the rum-distilling industry, which depended heavily on cheap molasses from French and Spanish Caribbean islands. In practice, though, the tax was rarely collected. Colonial merchants either bribed customs officials or simply smuggled the molasses past them. The British government essentially looked the other way, and everyone profited.

This informal arrangement worked because Britain’s primary interest in the colonies was commercial, not fiscal. The Navigation Acts required colonists to ship certain goods only to Britain and to buy manufactured goods from British merchants, which enriched British traders and manufacturers without requiring aggressive tax collection in America. As long as this system funneled wealth toward London, Parliament didn’t care much about collecting relatively small customs duties across the Atlantic.

Everything Changed in 1763

The Seven Years’ War (which Americans call the French and Indian War) changed this comfortable arrangement entirely. Britain won decisively, driving France out of North America and gaining vast new territories. But victory came with a staggering price tag. Britain’s national debt had nearly doubled to £130 million, and annual interest payments alone consumed half the government’s budget. Meanwhile, Britain now needed to maintain 10,000 troops in North America to defend its expanded empire and manage relations with Native American tribes.

Prime Minister George Grenville faced a political problem. British taxpayers, already heavily burdened, were in no mood for additional taxes. The logic seemed obvious: since the colonies had benefited from the war’s outcome and still required military protection, they should help pay for their own defense. Americans paid far lower taxes than their counterparts in Britain—by some estimates, British residents paid 26 times more per capita in taxes than colonists did.

What the Act Actually Did

The Sugar Act (officially the American Revenue Act of 1764) approached colonial taxation differently than anything before it. First, it cut the duty on foreign molasses from six pence to three pence per gallon—a 50% reduction. Grenville calculated, reasonably, that merchants might actually pay a three-pence duty rather than risk getting caught smuggling, whereas the six-pence duty had been so high it encouraged universal evasion.

But the Act did far more than adjust molasses duties. It added or increased duties on foreign textiles, coffee, indigo, and wine imported into the colonies. It tightened regulations around the colonial lumber trade and banned the import of foreign rum entirely. Most significantly, the Act included elaborate provisions designed to strictly enforce these duties for the first time.

The enforcement mechanisms represented the real revolution in British policy. Ship captains now had to post bonds before loading cargo and had to maintain detailed written cargo lists. Naval patrols increased dramatically. Smugglers faced having their ships and cargo seized.

Significantly, the burden of proof was shifted to the accused, who were required to prove their innocence, a reversal of traditional British justice. Most controversially, accused smugglers would be tried in vice-admiralty courts, which had no juries and whose judges received a cut of any fines levied.

The Paradox of the Lower Tax

So why did colonists react so angrily to a tax cut? The answer reveals the fundamental shift in the British-American relationship that the Sugar Act represented.

First, the issue wasn’t the tax rate. It was the certainty of collection. A six-pence tax that no one paid was infinitely preferable to a three-pence tax rigorously enforced. New England’s rum distilling industry, which employed thousands of distillery workers and sailors, depended on cheap molasses from the French West Indies. Even at three pence per gallon, the tax significantly increased operating costs. Many merchants calculated they couldn’t remain profitable if they had to pay it.

Second, and more importantly, colonists recognized that the Act marked a fundamental change in the relationship. Previous trade regulations, even if they involved taxes, were ostensibly about regulating commerce within the empire. The Sugar Act openly stated its purpose was raising revenue—the preamble declared it was “just and necessary that a revenue be raised” in America. This might seem like a technical distinction, but to colonists it mattered enormously. British constitutional theory held that subjects could only be taxed by their own elected representatives. Colonists elected representatives to their own assemblies but sent no representatives to Parliament. Trade regulations fell under Parliament’s legitimate authority to govern imperial commerce, but taxation for revenue was something else entirely.

Third, the enforcement mechanisms offended colonial sensibilities about justice and traditional British rights. The vice-admiralty courts denied jury trials, which colonists viewed as a fundamental right of British subjects. Having to prove your innocence rather than being presumed innocent violated another core principle. Customs officials and judges profiting from convictions created obvious incentives for abuse.

Implementation and Colonial Response

The Act took effect in September 1764, and Grenville paired it with an aggressive enforcement campaign. The Royal Navy assigned 27 ships to patrol American waters. Britain appointed new customs officials and instructed them to actually do their jobs rather than accept bribes. Admiralty courts in Halifax, Nova Scotia became particularly notorious: colonists had to travel hundreds of miles to defend themselves in a court with no jury, before a judge whose income came from convictions.

Colonists responded immediately. Boston merchants drafted a protest arguing that the act would devastate their trade. They explained that New England’s economy depended on a complex triangular trade: they sold lumber and food to the Caribbean in exchange for molasses, which they distilled into rum, which they sold to Africa for slaves, who were sold to Caribbean plantations for molasses, and the cycle repeated. Taxing molasses would break this chain and impoverish the region.

But the economic arguments quickly evolved into constitutional ones. Lawyer James Otis argued that “taxation without representation is tyranny”—a phrase that would echo through the coming decade. Colonial assemblies began passing resolutions asserting their exclusive right to tax their own constituents. They didn’t deny Parliament’s authority to regulate trade, but they drew a clear line: revenue taxation required representation.

The protests went beyond rhetoric. Colonial merchants organized boycotts of British manufactured goods. Women’s groups pledged to wear homespun cloth rather than buy British textiles. These boycotts caused enough economic pain in Britain that London merchants began lobbying Parliament for relief.

The Road to Revolution

The Sugar Act’s significance extends far beyond its immediate economic impact. It established precedents and patterns that would define the next decade of imperial crisis.

Most fundamentally, it shattered the comfortable arrangement of salutary neglect. Once Britain demonstrated it intended to actively govern and tax the colonies, the relationship could never return to its previous informality. The colonists’ constitutional objections—no taxation without representation, right to jury trials, presumption of innocence—would be repeated with increasing urgency as Parliament passed the Stamp Act (1765), Townshend Acts (1767), and Tea Act (1773).

The Sugar Act also revealed the practical difficulties of governing an empire across 3,000 miles of ocean. The vice-admiralty courts became symbols of distant, unaccountable power. When colonists couldn’t get satisfaction through established legal channels, they increasingly turned to extralegal methods, including committees of correspondence, non-importation agreements, and eventually armed resistance.

Perhaps most importantly, the Sugar Act forced colonists to articulate a political theory that ultimately proved incompatible with continued membership in the British Empire. Once they agreed to the principle that they could only be taxed by their own elected representatives, and that Parliament’s authority over them was limited to trade regulation, the logic led inexorably toward independence. Britain couldn’t accept colonial assemblies as co-equal governing bodies since Parliament claimed supreme authority over all British subjects. The colonists couldn’t accept taxation without representation since they claimed the rights of freeborn Englishmen. These positions couldn’t be reconciled.

The Sugar Act of 1764 represents the point where the British Empire’s century-long success in North America began to unravel. By trying to make the colonies pay a modest share of imperial costs through what seemed like reasonable means, Britain inadvertently set in motion forces that would break the empire apart just twelve years later.

Sources

Mount Vernon Digital Encyclopedia – Sugar Act https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/sugar-act/ Provides overview of the Act’s provisions, economic context, and relationship to British debt from the Seven Years’ War. Includes information on tax burden comparisons between Britain and the colonies.

Britannica – Sugar Act https://www.britannica.com/event/Sugar-Act Covers the specific provisions of the Act, enforcement mechanisms, vice-admiralty courts, and the shift from the Molasses Act of 1733. Useful for technical details of the legislation.

History.com – Sugar Act https://www.history.com/topics/american-revolution/sugar-act Discusses colonial constitutional objections, the “taxation without representation” argument, and the enforcement provisions including burden of proof reversal and jury trial denial.

American Battlefield Trust – Sugar Act https://www.battlefields.org/learn/articles/sugar-act Details colonial response including boycotts, James Otis’s arguments, and the triangular trade system that the Act disrupted.

Additional Recommended Sources

Library of Congress – The Sugar Act https://www.loc.gov/collections/continental-congress-and-constitutional-convention-broadsides/articles-and-essays/continental-congress-broadsides/broadsides-related-to-the-sugar-act/ Primary source collection including contemporary colonial broadsides and protests against the Act.

National Archives – The Sugar Act (Primary Source Text) https://founders.archives.gov/about/Sugar-Act The actual text of the American Revenue Act of 1764, useful for verifying specific provisions and language.

Yale Law School – Avalon Project: Resolutions of the Continental Congress (October 19, 1765) https://avalon.law.yale.edu/18th_century/resolu65.asp Colonial responses to the Sugar and Stamp Acts, showing how the arguments evolved.

Massachusetts Historical Society – James Otis’s Rights of the British Colonies Asserted and Proved (1764) https://www.masshist.org/digitalhistory/revolution/taxation-without-representation Primary source for the “taxation without representation” argument that emerged from Sugar Act opposition.

Colonial Williamsburg Foundation – Sugar Act of 1764 https://www.colonialwilliamsburg.org/learn/deep-dives/sugar-act-1764/ Discusses economic impact on colonial merchants and the rum distilling industry.

Understanding Dementia: A Journey Through Memory Loss

When someone tells you they’re having trouble remembering where they put their keys, that’s probably just normal aging. But when they forget what keys are for altogether, that’s when doctors start thinking about dementia. It’s a distinction that matters deeply to millions of families navigating one of medicine’s most challenging conditions.

While reviewing some of my previous articles, I realized that while I have discussed conditions that mimic dementia, I haven’t discussed dementia itself.  This discussion has quite a bit of technical jargon, but it’s unavoidable.

Dementia isn’t a specific disease—it’s an umbrella term describing a decline in mental ability that interferes with daily life. Think of it like how “cancer” describes many different diseases. About 50 percent of people age 85 and older have some form of dementia, making it one of the most pressing health challenges of our aging population.

The Major Players: Types of Dementia

Alzheimer’s Disease stands as the heavyweight champion of dementia causes, accounting for an estimated 50 to 70 percent of all cases. What’s happening in the brain is both complicated and tragic. Beta-amyloid and phosphorylated tau proteins accumulate and spread through distributed neural networks in the brain, causing progressive metabolic abnormalities, neuronal injury, and cellular death, all of which disrupt functional connectivity. The hallmark symptoms involve short-term memory and everyday tasks: trouble paying bills, preparing meals, or keeping appointments, and getting lost in familiar areas. Your grandmother might remember vivid details from her childhood but can’t recall what she ate for breakfast or even recognize you.

Vascular Dementia comes in second place, accounting for about 5 to 15 percent of cases. It results from strokes or other problems with blood flow to the brain; on occasion it follows a series of subclinical strokes, with the victim unaware of the individual events. Typical symptoms include slowed thinking, trouble with organization, difficulty planning or following instructions, and, in the later phases, gait problems and urinary difficulties. Symptoms gradually worsen as more blood vessels are damaged. Imagine the brain like a city—when the roads get blocked, supplies can’t get through, and neighborhoods start to fail.

Lewy Body Dementia involves deposits of an abnormal protein, alpha-synuclein, in clusters called Lewy bodies, and it presents a particularly unsettling picture. Many people with this type of dementia experience daytime sleepiness, confusion, fluctuating cognition, staring spells, sleep disturbances, visual hallucinations, or movement problems. The visual hallucinations are generally vivid images of people or animals and often occur when someone is about to fall asleep or wake up.

Frontotemporal Dementia often hits younger people. It is caused by abnormalities in the proteins FUS and TDP-43. Most cases are diagnosed in people aged 45 to 65. Rather than starting with memory loss, early symptoms may include personality changes like reduced sensitivity to others’ feelings, lack of social awareness, making inappropriate jokes, language problems, obsessive behavior, or sudden outbursts of anger. It’s heartbreaking when someone’s personality fundamentally changes before your eyes.

LATE is a newly recognized form of dementia (Limbic-predominant Age-related TDP-43 Encephalopathy), which causes symptoms similar to Alzheimer’s but has different underlying causes involving abnormal clusters of TDP-43 protein. Research suggests that almost 40 percent of people whose age at death was 88 years or greater may have had LATE of varying degrees.

Less Common Forms: Parkinson’s disease dementia (movement disorder first, dementia later); normal pressure hydrocephalus (NPH), one of the few reversible types; chronic traumatic encephalopathy (CTE), linked to repeated head injuries; HIV-associated dementia, less common with modern treatment; and severe vitamin deficiencies (e.g., B1 or B12), reversible if caught early.

Figuring Out What’s Wrong: The Diagnostic Process

Making a dementia diagnosis isn’t like getting a strep test or an X-ray—there’s no single definitive test. Physicians use diagnostic tools combined with medical history and other information, including neurological exams, cognitive and functional assessments, brain imaging like MRI or CT, and cerebrospinal fluid or blood tests.

The process starts with your doctor asking detailed questions about your symptoms and medical history. Typical questions cover whether dementia runs in the family, how and when symptoms began, changes in behavior and personality, and whether the person is taking medications that might cause or worsen symptoms. There are also various cognitive tests—like the infamous clock face drawing—that physicians can use to assess the likelihood of dementia.

Brain imaging can play a crucial role for some patients. Structural imaging with MRI or CT is primarily used to rule out other conditions that may cause symptoms similar to dementia but that require different treatment. These scans can reveal tumors, evidence of strokes, damage from head trauma, or fluid buildup in the brain. Common MRI findings include brain atrophy, particularly shrinkage of the hippocampus, which supports learning and memory, and of the cortex, which supports perception, thought, and voluntary action. Other findings may include white matter changes that affect communication between brain regions, as well as lesions from small strokes.

More sophisticated imaging like PET scans can detect specific proteins associated with Alzheimer’s. Recent advances in molecular imaging allow for visualization of amyloid and tau deposits in a living human brain, bringing us closer to an in vivo (while alive) definitive diagnosis.  This is significant because historically Alzheimer’s could only be definitively diagnosed at autopsy.

Treatment Options: Managing the Unmanageable

Here’s where I need to be honest: there’s no cure for dementia. But that doesn’t mean we’re helpless. Several medications can help manage symptoms and potentially slow progression.

For Alzheimer’s specifically, the FDA has approved two categories of drugs. These include drugs that change disease progression in people living with early Alzheimer’s disease, and drugs that may temporarily mitigate some symptoms. The newer disease-modifying drugs include donanemab and lecanemab.  They are anti-amyloid antibody intravenous infusion therapies that have demonstrated that removing beta-amyloid from the brain reduces cognitive and functional decline in people living with early Alzheimer’s.

More traditional treatments focus on symptom management. Medications such as galantamine, rivastigmine, and donepezil improve communication between nerve cells. Cholinesterase inhibitors work by preventing the breakdown of acetylcholine, a neurotransmitter, which may stabilize dementia symptoms.

Beyond medications, lifestyle modifications matter. Lifestyle changes including eating a balanced diet full of fruits and vegetables may help slow progression. Maintaining a routine to avoid confusion, including regular exercise and sleep, all help keep people with dementia as functional as possible for as long as possible.  Staying mentally active and socially connected can help slow the onset and progression of dementia.

What to Expect: The Prognosis

This is the hardest part to talk about.  The life expectancy of dementia patients varies enormously. Most people older than 65 with Alzheimer’s die within four to eight years of being diagnosed, but some people live for decades, especially if they were diagnosed before turning 65.

Life expectancy depends on a huge range of factors including the type of dementia diagnosed, overall health, and the age of diagnosis. Vascular dementia typically has a shorter life expectancy than Alzheimer’s disease due to underlying cardiovascular problems.

Progression happens in stages. Early symptoms include finding it hard to carry out familiar daily tasks, struggling to follow conversations or find the right word, and getting confused in familiar places. Signs of late-stage dementia include speaking in single words or repeated phrases that don’t make sense, and being unable to understand what people are saying or to follow what is happening around them.

Those living with advanced dementia are especially prone to infection, constipation, skin ulcers and blood clots, which can put their life in danger if treatment is delayed.  Dehydration and malnutrition are serious risks for those without a strong support network as they often forget to eat or drink.  They are also more likely to be injured in falls and other accidents.

Ultimately, as you lose more brain function, activities vital to life begin to be affected, including breathing, swallowing, digestion, heart rate and sleep. Most people don’t die directly from dementia but from complications like pneumonia or falls.

A Note on Hope

Reading about dementia can feel depressing, but there’s reason for cautious optimism. While individual prognosis varies significantly and can’t be predicted with precision, early detection of symptoms and an early diagnosis can help with planning ahead to manage the disease.  Scientists continue researching new treatments, particularly regarding new biomarkers and disease modifying drugs.  Life expectancy estimates are improving all the time as many people are diagnosed earlier and receive better treatment and care. 

________________________________________________________________________

Sources

  1. National Institute on Aging – What Is Dementia? Symptoms, Types, and Diagnosis https://www.nia.nih.gov/health/alzheimers-and-dementia/what-dementia-symptoms-types-and-diagnosis
  2. NHS – Symptoms of Dementia https://www.nhs.uk/conditions/dementia/symptoms-and-diagnosis/symptoms/
  3. Cleveland Clinic – Dementia: What It Is, Causes, Symptoms, Treatment & Types https://my.clevelandclinic.org/health/diseases/9170-dementia
  4. CDC – About Dementia https://www.cdc.gov/alzheimers-dementia/about/index.html
  5. Cleveland Clinic – Alzheimer’s Disease: Symptoms & Treatment https://my.clevelandclinic.org/health/diseases/9164-alzheimers-disease
  6. Wikipedia – Dementia https://en.wikipedia.org/wiki/Dementia
  7. Practical Neurology – Brain Imaging in Differential Diagnosis of Dementia https://practicalneurology.com/diseases-diagnoses/imaging-testing/brain-imaging-in-differential-diagnosis-of-dementia/31533/
  8. Healthgrades – Vascular Dementia Life Expectancy: Statistics and Disease Progression https://resources.healthgrades.com/right-care/dementia/vascular-dementia-prognosis-and-life-expectancy
  9. Healthgrades – Dementia Life Expectancy: Stages and Progression https://resources.healthgrades.com/right-care/dementia/dementia-prognosis-and-life-expectancy
  10. Elder – Dementia and Life Expectancy: Planning for the Future https://www.elder.org/dementia-care/dementia-life-expectancy/
  11. Medical News Today – Dementia Life Expectancy: Duration and Stages https://www.medicalnewstoday.com/articles/how-long-does-dementia-last
  12. Alzheimer’s Association – Medications for Memory, Cognition & Dementia-Related Behaviors https://www.alz.org/alzheimers-dementia/treatments/medications-for-memory
  13. Alzheimer’s Association – Medical Tests for Diagnosing Alzheimer’s & Dementia https://www.alz.org/alzheimers-dementia/diagnosis/medical_tests
  14. DRI Health Group – Can MRI Diagnose Dementia? https://drihealthgroup.com/health-tips/can-mri-diagnose-dementia

Slavery and the Constitutional Convention: The Compromise That Shaped a Nation

When fifty-five delegates gathered in Philadelphia during the sweltering summer of 1787, they faced a challenge that would haunt American politics for the next eight decades. The question wasn’t whether slavery was morally right—many delegates privately acknowledged its evil—but whether a unified nation could exist with slavery as a part of it. That summer, the institution of slavery nearly killed the Constitution before it was born.

The Battle Lines

The convention revealed a stark divide. On one side stood delegates who spoke forcefully against slavery, though they represented a minority voice. Gouverneur Morris of Pennsylvania delivered some of the most scathing condemnations, calling slavery a “nefarious institution” and “the curse of heaven on the states where it prevailed.” According to James Madison’s notes, Morris argued passionately that counting enslaved people for representation would mean that someone “who goes to the Coast of Africa, and in defiance of the most sacred laws of humanity tears away his fellow creatures from their dearest connections & damns them to the most cruel bondages, shall have more votes in a Government instituted for protection of the rights of mankind.”

Luther Martin of Maryland, himself a slaveholder, joined Morris in opposition. He declared the slave trade “inconsistent with the principles of the revolution and dishonorable to the American character.” Even George Mason of Virginia, who owned over 200 enslaved people, denounced slavery at the convention, warning that “every master of slaves is born a petty tyrant” and that it would bring “the judgment of heaven on a country.”

The Southern Coalition

Facing these critics stood delegates from the Deep South—primarily South Carolina and Georgia—who made it abundantly clear that protecting slavery was non-negotiable. The South Carolina delegation was particularly unified and aggressive in defending the institution. All four of their delegates—John Rutledge, Charles Pinckney, Charles Cotesworth Pinckney, and Pierce Butler—owned slaves, and they spoke with one voice.

Charles Cotesworth Pinckney stated bluntly: “South Carolina and Georgia cannot do without slaves.” John Rutledge framed it even more starkly: “The true question at present is, whether the Southern States shall or shall not be parties to the Union.” The message was unmistakable—attempt to restrict slavery, and there would be no Constitution and perhaps no United States.

The Southern states didn’t just defend slavery; they threatened to walk out repeatedly. When debates over the slave trade heated up on August 22, delegates from North Carolina, South Carolina, and Georgia stated they would “never be such fools as to give up” their right to import enslaved Africans.  These weren’t idle threats—they were credible enough to force compromise.

The Three-Fifths Compromise

The central flashpoint came over representation in Congress. The new Constitution would base representation on population, but should enslaved people count? Southern states wanted every enslaved person counted fully, which would dramatically increase their congressional power. Northern states argued that enslaved people—who had no rights and couldn’t vote—shouldn’t count at all.

The three-fifths ratio had actually been debated before. Back in 1783, Congress had considered using it to calculate state tax obligations under the Articles of Confederation, though that proposal failed. James Wilson of Pennsylvania resurrected the idea at the Constitutional Convention, suggesting that representation be based on the free population plus three-fifths of “all other persons”—the euphemism they used to avoid writing the word “slave” in the Constitution.

The compromise passed eight states to two. New Jersey and Delaware are generally identified as the states voting against it, while New Hampshire is not listed as taking part in the vote. Rhode Island did not send a delegation to the convention, and by the time of the vote New York no longer had a functioning delegation.

Though the South ultimately accepted the compromise, it wasn’t what they wanted. Southern delegates had pushed to count enslaved people fully alongside free persons for purposes of representation, even as those same people were ignored on every question of human rights. The three-fifths ratio was a reduction from their demands—a limitation on slave state power, though it still gave them a substantial advantage. With about 93% of the nation’s enslaved population concentrated in just five southern states, this compromise increased the South’s congressional delegation by 42%.
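
To make the arithmetic concrete, here’s a small sketch with round, hypothetical population figures (deliberately not the 1790 census numbers). It compares a region’s share of the apportionment base when enslaved people are not counted, counted at three-fifths, and counted fully:

```python
# Hypothetical illustration of how the counting rule shifts apportionment power.
# Population figures are round numbers invented for the example, not census data.

regions = {
    "North": {"free": 1_900_000, "enslaved": 50_000},
    "South": {"free": 1_200_000, "enslaved": 650_000},
}

def apportionment_share(weight_per_enslaved_person):
    """Each region's share of the total counted population under a given weight."""
    counted = {name: p["free"] + weight_per_enslaved_person * p["enslaved"]
               for name, p in regions.items()}
    total = sum(counted.values())
    return {name: value / total for name, value in counted.items()}

for label, weight in [("not counted", 0.0), ("three-fifths", 3 / 5), ("counted fully", 1.0)]:
    south_share = apportionment_share(weight)["South"]
    print(f"{label:>14}: the South holds {south_share:.1%} of the apportionment base")
```

Even at three-fifths, the South’s share of seats rises well above what its free population alone would have justified, which is precisely the leverage the compromise locked in.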

James Madison later recognized the compromise’s significance. He wrote after the convention: “It seems now to be pretty well understood that the real difference of interests lies not between the large and small but between the northern and southern states. The institution of slavery and its consequences form the line of discrimination.”

Could the Constitution Have Happened Without It?

Here’s where I need to speculate, but I’m fairly confident in this assessment: no, the Constitution would not have been ratified without the three-fifths compromise and related concessions on slavery.

The evidence is overwhelming. South Carolina and Georgia delegates stated explicitly and repeatedly that they would not join any union that restricted slavery. Alexander Hamilton himself later acknowledged that “no union could possibly have been formed” without the three-fifths compromise. Even delegates who despised slavery, like Roger Sherman of Connecticut, argued it was “better to let the Southern States import slaves than to part with them.”

The convention negotiated three major slavery compromises, all linked. Beyond the three-fifths clause, they agreed Congress couldn’t ban the international slave trade until 1808, and they included the Fugitive Slave Clause requiring the return of escaped enslaved people even from free states. These deals were struck together on August 29, 1787, in what Madison’s notes reveal was a package negotiation between northern and southern delegates.

Without these compromises, the convention would likely have collapsed. The alternative wouldn’t have been a better Constitution—it would have been no Constitution at all, potentially leaving the thirteen states as separate nations or weak confederations. Whether that would have been preferable is a profound counterfactual question that historians still debate.

The Impact on Early American Politics

The three-fifths compromise didn’t just affect one document—it shaped American politics for decades. Its effects were immediate and substantial.

The most famous early example came in the presidential election of 1800. Thomas Jefferson defeated John Adams in what’s often called the “Revolution of 1800”—the first peaceful transfer of power between opposing political parties. But Jefferson’s victory owed directly to the three-fifths compromise. Virginia’s enslaved population gave the state extra electoral votes that proved decisive. Historian Garry Wills has speculated that without these additional slave-state votes, Jefferson would have lost. Pennsylvania had a free population 10% larger than Virginia’s, yet received 20% fewer electoral votes because Virginia’s numbers were inflated by the compromise.

The impact extended far beyond that single election. Research shows the three-fifths clause changed the outcome of over 55% of legislative votes in the Sixth Congress (1799-1801). The additional southern representatives—about 18 more than their free population warranted—gave the South what became known as the “Slave Power” in Congress.

This power influenced major legislation throughout the antebellum period. The Indian Removal Act of 1830, which forcibly relocated Native Americans to open land for plantation agriculture, passed because of margins provided by these extra southern representatives. The Missouri Compromise, the Kansas-Nebraska Act, and numerous other slavery-related measures bore the fingerprints of this constitutional imbalance.

The compromise also affected Supreme Court appointments and federal patronage. Southern-dominated Congresses ensured pro-slavery justices and policies that protected the institution. The sectional tensions it created led directly to later compromises—the Missouri Compromise of 1820, the Compromise of 1850—each one a temporary bandage on a wound that wouldn’t heal.

By the 1850s, the artificial political power granted to slave states had become intolerable to many northerners. When Abraham Lincoln won the presidency in 1860 without carrying a single southern state, southern political leaders recognized they had lost control of the federal government. Senator Louis Wigfall of Texas complained that non-slaveholding states now controlled Congress and the Electoral College. Eleven southern states seceded, in large part because they believed the three-fifths compromise no longer protected their interests.

The Bitter Legacy

The framers consciously avoided using the words “slave” or “slavery” in the Constitution, recognizing it would “sully the document.” But the euphemisms fooled no one. They had built slavery into the structure of American government, trading moral principles for political union.

The Civil War finally resolved what the Constitutional Convention had delayed. The Thirteenth Amendment abolished slavery in 1865, but not until 1868 did the Fourteenth Amendment finally strike the three-fifths clause from the Constitution, requiring that representation be based on counting the “whole number of persons” in each state.

Was it worth it? That’s ultimately a question of values. The Constitution created a stronger national government that eventually abolished slavery, but it took 78 years and a war that killed over 600,000 Americans. As Thurgood Marshall noted on the Constitution’s bicentennial, the framers “consented to a document which laid a foundation for the tragic events which were to follow.”

The convention delegates knew what they were doing. They chose union over justice, pragmatism over principle. Whether that choice was necessary, wise, or moral remains one of the most contested questions in American history.

____________________________________________________

Sources

  1. https://www.battlefields.org/learn/articles/slavery-and-constitution
  2. https://en.wikipedia.org/wiki/Luther_Martin
  3. https://schistorynewsletter.substack.com/p/7-october-2024
  4. https://www.americanacorner.com/blog/constitutional-convention-slavery
  5. https://www.nps.gov/articles/000/constitutionalconvention-august22.htm
  6. https://en.wikipedia.org/wiki/Three-fifths_Compromise
  7. https://www.brennancenter.org/our-work/analysis-opinion/electoral-colleges-racist-origins
  8. https://www.gilderlehrman.org/history-resources/teaching-resource/historical-context-constitution-and-slavery
  9. https://www.nps.gov/articles/000/constitutionalconvention-august29.htm
  10. https://www.lwv.org/blog/three-fifths-compromise-and-electoral-college
  11. https://www.aaihs.org/a-compact-for-the-good-of-america-slavery-and-the-three-fifths-compromise-part-ii/

America’s Healthcare Paradox: Why We Pay Double and Get Less

The healthcare debate in America often circles back to a fundamental question: should we move toward a single-payer system, or is our current mixed public-private model the better path forward? It’s a conversation that gets heated quickly, but when you strip away the politics and look at how different systems actually function around the world, some interesting patterns emerge.

What We Mean by Single-Payer

A single-payer healthcare system means that one entity—usually the government or a government-related organization—pays for all covered healthcare services. Doctors and hospitals can still be private (and usually are), but instead of dealing with dozens of different insurance companies, they bill one source. It’s a lot like Medicare, which is why proponents often call it “Medicare-for-all”.

The key thing to understand is that single-payer isn’t necessarily the same as socialized medicine. In Canada’s system, for instance, the government pays the bills, but doctors are largely in the private sector and hospitals are controlled by private boards or regional health authorities rather than being part of the national government. Compare that to the UK’s National Health Service, where many hospitals and clinics are government-owned and many doctors are government employees.

America’s Current Patchwork

The United States operates what might charitably be called a “creative” approach to healthcare—a complex mix of employer-sponsored private insurance, government programs like Medicare, Medicaid and the VA system, individual marketplace plans, and direct out-of-pocket payments. Government already pays roughly half of total US health spending, but benefits, cost-sharing, and networks vary widely between plans, with little overall coordination. In 2023, private health insurance spending accounted for 30 percent of total national health expenditures, Medicare covered 21 percent, and Medicaid covered 18 percent. Most of the remainder was either paid out of pocket by private citizens or was written off by providers as uncollectible.

Here’s where it gets expensive. U.S. health care spending grew 7.5 percent in 2023, reaching $4.9 trillion, or $14,570 per person, and accounting for 17.6 percent of the nation’s GDP. National health spending for 2024 is expected to have exceeded $5.3 trillion, roughly 18 percent of GDP, and spending is projected to reach 20.3 percent of GDP by 2033.
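
Those headline numbers are just division. Here’s a quick back-of-the-envelope check, treating the population and GDP inputs as rough assumptions (about 335 million people and roughly $27.7 trillion of GDP in 2023), so the outputs are approximations rather than official estimates:

```python
# Back-of-the-envelope check of the 2023 figures quoted above.
# Population and GDP are rough assumptions, so the results are approximate.

total_spending = 4.9e12   # ~$4.9 trillion in national health spending, 2023
population = 335e6        # assumed U.S. population
gdp = 27.7e12             # assumed U.S. GDP, 2023

per_person = total_spending / population
share_of_gdp = total_spending / gdp

print(f"Per person:   ${per_person:,.0f}")   # lands near the quoted $14,570
print(f"Share of GDP: {share_of_gdp:.1%}")   # lands near the quoted 17.6 percent
```

The small differences from the quoted figures come entirely from the rounded population and GDP inputs.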

For a typical American family, the costs are real and rising. In 2024, the estimated cost of healthcare for a family of four in an employer-sponsored health plan was $32,066.

The European Landscape

Europe doesn’t have one healthcare model—it has several, and they’re all quite different from what we have in the States. Most European countries guarantee universal coverage, but how they organize and pay for it varies considerably.

Countries like the UK, Sweden, and Norway operate what are essentially single-payer systems: the government pays for and provides healthcare services, directly owns most facilities, and employs most clinical and related staff, funded from tax contributions. Then you have countries like Germany and Belgium that use “sickness funds”—these are non-profit funds that don’t market themselves, cherry-pick patients, set their own premiums or provider rates, determine benefits, earn profits, or answer to investors. They’re quasi-public institutions, not private insurance companies like we know them in America. Some systems, such as the Netherlands or Switzerland, rely on mandatory, individually purchased private insurance with tight regulation and subsidies, achieving universal coverage through a structured, competitive market.

The French System

France is particularly noted for its successful universal, government-run health insurance system, usually described as single-payer with supplements. All legal residents are automatically covered through the national health insurance program, which is funded by payroll taxes and general taxation.

Most physicians and hospitals are private or nonprofit, not government employees or facilities. Patients generally have free choice of doctors and specialists, though coordinating through a primary care physician improves access and reimbursement. The national insurer pays a large portion of medical costs (often 70–80%), while voluntary private supplemental insurance covers most remaining out-of-pocket expenses such as copays and deductibles.

France is known for spending significantly less per capita than the United States. Cost controls come from nationally negotiated fee schedules and drug pricing rather than limits on access.

What’s striking is that in 2019, US healthcare spending reached $11,072 per person—over double the average of $5,505 across wealthy European nations. Yet despite spending roughly twice as much per person, American health outcomes often lag behind.

The Outcomes Question

This is where the comparison gets uncomfortable for American exceptionalism. The U.S. has the lowest life expectancy at birth among comparable wealthy nations, the highest death rates for avoidable or treatable conditions, and the highest maternal and infant mortality.

In 2023, life expectancy in comparable countries was 82.5 years, which is 4.1 years longer than in the U.S. Japan manages this with healthcare spending at just $5,300 per capita, while Americans spend more than double that amount.

Now, it’s important to note that healthcare systems don’t operate in a vacuum. Life expectancy is influenced by many factors beyond medical care—diet, exercise, smoking, gun violence, drug overdoses, and social determinants of health all play roles. But when you’re spending twice as much and getting worse results, it suggests the system itself might be part of the problem.

Advantages of Single-Payer Systems

The case for single-payer rests on several compelling points. First, administrative simplicity translates to real cost savings. A study found that the administrative burden of health care in the United States was 27 percent of all national health expenditures, with the excess administrative cost of the private insurer system estimated at about $471 billion in 2012 compared to a single-payer system like Canada’s. That’s more than one out of every four dollars of total healthcare spending going to paperwork, billing disputes, and insurance company profit and overhead before any patient receives care.

Universal coverage is another major advantage. In a properly functioning single-payer system, nobody goes bankrupt from medical bills, nobody delays care because they can’t afford it, and nobody loses coverage when they lose their job. The peace of mind that comes with knowing you’re covered regardless of employment status or pre-existing conditions is difficult to quantify but enormously valuable.

Single-payer systems also have significant negotiating power. When one entity is buying drugs and services for an entire nation, pharmaceutical companies and medical device manufacturers have much less leverage to charge whatever they want. This helps explain why prescription drug prices in other countries are often a fraction of prices in the U.S.

Disadvantages and Trade-offs

The critics of single-payer systems aren’t wrong about everything. Wait times are a genuine concern in some systems. When prices and overall budgets are tightly controlled, some countries experience longer waits for selected elective surgeries, imaging, or specialty visits, especially if investment lags demand.

In 2024, Canadian patients experienced a median wait time of 30 weeks between specialty referral and first treatment, up from 27.2 weeks in 2023, with rural areas facing even longer delays. For procedures like elective orthopedic surgery, patients wait an average of 39 weeks in Canada.

However, it’s crucial to understand that wait times are driven less by single-payer financing itself than by how each system is managed and funded; they vary significantly across different single-payer and social insurance systems. Many European countries with universal coverage don’t experience the wait time issues that plague Canada.

The transition costs are also substantial. Moving from our current system to single-payer would disrupt a massive industry. Over fifteen percent of our economy is related to health care, with half spent by the private sector. Around 160 million Americans currently have insurance through their employers, and transitioning all of them to a government-run plan would be an enormous administrative and political challenge.

A large national payer can be slower to change benefit designs or adopt new payment models; shifting political majorities can affect funding levels and benefit generosity.

Taxes would need to increase significantly to fund such a system, though proponents argue this would be offset by the elimination of insurance premiums, deductibles, and co-pays. It’s essentially a question of whether you’d rather pay through taxes or through premiums—the money has to come from somewhere.

Advantages of America’s Mixed System

Our current system does have some genuine strengths. Innovation thrives in the American healthcare market. The profit motive, for all its flaws, does drive pharmaceutical research and medical device development. American medical schools and research institutions lead the world in many areas of medicine. Academic medical centers and specialty hospitals deliver advanced procedures and complex care that attract patients internationally.

The system also offers more choice for those who can afford it. If you have good insurance, you typically face shorter wait times for elective procedures and can often see specialists without lengthy delays. Americans with high-quality employer-sponsored coverage give their plans relatively high ratings.

Competition between providers can theoretically drive quality improvements, though this effect is often undermined by the complexity of the market and the difficulty consumers face in shopping for healthcare.

Disadvantages of the Current U.S. System

The most glaring problem is simple: the United States remains the only developed country without universal healthcare. Roughly 30 million Americans remain uninsured despite gains under the Affordable Care Act, and many of these gains will soon be lost. Being uninsured in America isn’t just an inconvenience; it can be deadly. People delay care, skip medications, and avoid preventive screenings because of cost concerns.

The administrative complexity is staggering. Doctors spend enormous amounts of time dealing with insurance companies, prior authorizations, and billing disputes. Hospitals employ armies of billing specialists just to navigate the maze of different insurance plans, each with its own rules, formularies, and coverage determinations. Administrative costs account for roughly a quarter of all U.S. healthcare spending, among the highest in the world.

Medical bankruptcy is uniquely American. Even people with insurance can find themselves financially devastated by serious illness. High deductibles, surprise bills, and out-of-network charges create a minefield of potential financial catastrophe.  Studies of U.S. bankruptcy filings over the past two decades have consistently found that medical bills and medical problems are a major factor in a large share of consumer bankruptcies. Recent summaries suggest that roughly two‑thirds of US personal bankruptcies involve medical expenses or illness-related income loss, and around 17% of adults with health care debt report declaring bankruptcy or losing a home because of that debt.

The system is also profoundly inequitable. Quality of care often depends more on your job, your income, and your zip code than on your medical needs. Out-of-pocket costs per capita have risen substantially compared with previous decades, and the burden falls disproportionately on those least able to afford it.

What Europe Shows Us

The European experience demonstrates that there isn’t one “right” way to achieve universal coverage. The UK’s NHS, Germany’s sickness funds, and France’s hybrid system all manage to cover everyone at roughly half the per-capita cost of American healthcare. Universal Health Coverage exists in all European countries, with healthcare financing almost universally government managed, either directly through taxation or semi-directly through mandated and government-subsidized social health insurance.

They’ve accomplished this through various combinations of centralized negotiation of drug prices, global budgets for hospitals, strong primary care systems that serve as gatekeepers to more expensive specialist care, emphasis on preventive services, and regulation that prevents insurance companies from cherry-picking healthy patients.

Are these systems perfect? No. One of the major disadvantages of centralized healthcare systems is long wait lists for non-urgent care, though Americans often wait as long as, or longer than, patients in most universal-coverage countries for routine primary care appointments. Many European countries are wrestling with funding challenges as populations age and expensive new treatments become available. But they’ve solved the fundamental problem that America hasn’t: they ensure everyone has access to healthcare without the risk of financial ruin.

The Path Forward?

The debate over healthcare in America often presents false choices. We don’t have to choose between Canadian-style single-payer and our current system—there are multiple models we could adapt. We could move toward a German-style system with heavily regulated non-profit insurers. We could create a robust public option that competes with private insurance. We could expand Medicare gradually by lowering the eligibility age over time.

What’s clear from international comparisons is that the status quo is unusually expensive and produces mediocre results. We’re paying premium prices for economy outcomes. Whether single-payer is the answer depends partly on your priorities. Do you value universal coverage and cost control more than unlimited choice? Are you willing to accept potentially longer wait times for non-urgent care in exchange for lower costs and universal access? How much do you trust government to manage a program this large?

These aren’t easy questions, and reasonable people disagree. But the evidence from Europe suggests that universal coverage at reasonable cost is achievable—it just requires us to make some choices about what we value most in a healthcare system.


Sources:

Life Below Deck: Enlisted Sailors in America’s Continental Navy

When the Continental Congress established America’s first navy in October 1775, they faced a daunting challenge: how do you build a fleet from scratch when you’re fighting the world’s most powerful naval force? The Continental Navy peaked at around 3,000 men serving on approximately 30 ships, a tiny force compared to Britain’s massive Royal Navy. But who were these sailors who were willing to risk their lives for a fledgling republic?

Where They Came From

The colonial maritime community had extensive seafaring experience, as much of British trade was carried in American vessels, and North Americans made up a significant portion of the Royal Navy’s seamen. Continental Navy sailors came primarily from port cities along the Atlantic coast, particularly New England communities where maritime trades were a way of life. Many had worked as merchant sailors, fishermen, or privateers before joining.

The naval service was notably diverse for its time, including native-born Americans, British deserters, free and enslaved Black sailors, and European immigrants. Unlike the Continental Army, which had periods of banning Black soldiers or sometimes placing them in segregated regiments, the Continental Navy was mostly integrated. At sea, there was less distinction between free and enslaved sailors, and those held in bondage had opportunities to work toward freedom. This maritime tradition of relative equality distinguished naval service from other Revolutionary War experiences.

Getting Into the Service

Recruiting sailors proved to be one of the Continental Navy’s biggest headaches. Navy boards supervised the appointment of petty officers and the enlistment of seamen, though these duties were chiefly performed by ship commanders or recruiting agents. The first Marine recruiting station was located at Tun Tavern, a bar in Philadelphia.

Enlistment was generally voluntary, though the line between volunteering and impressment—forced service—was sometimes blurred. Recruiting parties would scour port towns seeking able-bodied men, advertising not only pay but also the possibility of capturing British prizes for sale, with proceeds shared among the crew—a powerful incentive.

The problem was competition. Privateering—serving aboard private ships licensed by Congress to seize enemy vessels—was far more attractive to sailors because cruises were shorter and pay could be better. With over 2,000 privateers operating during the war, the Continental Navy struggled constantly to maintain adequate crew sizes. Continental captains often found themselves unable to man their ships because privateers offered superior inducements.

Landsmen, Seamen, and Petty Officers

At the bottom rung of a Navy crew stood the landsman—a recruit with little or no sea experience. Many were farm boys or tradesmen who had never set foot on a ship. Their days were filled with the hardest labor: hauling ropes, scrubbing decks, and learning basic seamanship.

Above them were ordinary seamen, who had some experience afloat, and the more skilled able seamen who knew their way around sails, rigging, and naval gunnery. These sailors formed the backbone of the Continental Navy. Sailors skilled in managing the ship’s rigging were said to “know the ropes.” Without their knowledge of wind, tide, and timber, ships would have been little more than floating platforms.

The most experienced enlisted men were promoted to petty officers. These weren’t commissioned officers but rather specialists and leaders—boatswain’s mates directing rigging crews, gunner’s mates overseeing cannon fire, and carpenters’ mates keeping the wooden hulls afloat. They were the Navy’s “non-commissioned officers,” long before the U.S. Navy had a formal NCO corps.

Most Continental Navy ships also carried detachments of Continental Marines. These enlisted men were soldiers at sea, tasked with keeping order on deck, manning small arms in combat, and leading boarding parties.

What They Wore

Unlike officers, who had prescribed uniforms, enlisted sailors received no standard clothing from the Continental Navy. Because of meager funds and a lack of manufacturing capacity, sailors generally provided their own clothing, usually consisting of pantaloons (often tied at the knee) or knee breeches, a jumper or shirt, a neckerchief, a short-waisted jacket, and a low-crowned hat. Most sailors went barefoot, and a kerchief was worn either as a sweatband or as a simple collar closure. The short trousers served a practical purpose: they didn’t interfere with climbing the ship’s rigging. This lack of uniforms reflected the Continental Navy’s financial struggles, in which everything from ships to ammunition took priority over standardized clothing.

Daily Life at Sea

Shipboard duties for enlisted sailors were grueling and dangerous. Landsmen cleaned the deck, helped raise or lower the anchor, worked in the galley, and assisted other crew members. More experienced sailors handled the complex work of managing sails, operating guns during combat, standing watch, and maintaining the vessel. Specialized roles were filled by experienced hands, and most sailors worked long shifts in harsh conditions, often enduring crowded, wet, and unsanitary quarters below deck.

Living conditions were cramped. Sailors lived in close quarters with limited privacy, shared hammocks on the lower decks, and endured monotonous food rations. Meals were simple, based on salted meat, ship’s biscuit, and whatever could be supplemented from local ports or captured prizes. Leisure was rare, and recreation was often limited to singing, storytelling, or gambling. The work was physically demanding and accidents were common—falling from rigging, being crushed by shifting cargo, or drowning were constant risks.

Discipline and Relations with Officers

Discipline in the Continental Navy was deeply influenced by the British Royal Navy and the “ancient common law of the sea.” The Continental Congress issued articles governing naval discipline, empowering officers to maintain strict order and punish infractions including drunkenness, blasphemy, theft, or disobedience. Punishments included wearing a wooden collar, spending time in irons, receiving pay deductions, confinement on bread and water, or, for serious offenses, flogging.

Flogging was often done with a multi-thonged whip known as the cat o’ nine tails. The most common flogging consisted of between 12 and 24 lashes, though mutineers might receive sentences in the hundreds of lashes—often becoming a death sentence.

Even though officers held absolute authority aboard their vessels, the Continental Navy sometimes suffered from severe discipline problems. Some commanders found it impossible to maintain control over squadrons made up of crews recruited from one area and commanded by officers from another. The relationship between officers and enlisted men reflected the social hierarchies of the time, with a clear divide between the educated officer class and working-class sailors. However, the shared dangers of combat and the sea could create bonds that transcended these divisions.

A Brief but Important Legacy

Enlisted sailors of the Continental Navy came from diverse and often hardscrabble backgrounds, shaped by the hard labor and hazards of maritime life. These men, whose names are mostly lost to history, formed the foundation of America’s first navy and contributed profoundly—through sacrifice and service—to the establishment of American independence.

Of approximately 65 vessels that served in the Continental Navy, only 11 survived the war, and by 1785 Congress had disbanded the Navy and sold the remaining ships. Despite its short existence and limited impact on the war’s outcome, the sailors of the Continental Navy created a foundation for American naval tradition and provided trained seamen who would serve in future conflicts.

Sources:

Personal note: The Grumpy Doc proudly served as an enlisted sailor in the U.S. Navy from 1967 to 1974.

Thomas Jefferson: The Philosopher Who Played Hardball

Here’s the thing about Thomas Jefferson that doesn’t always make it into the history textbooks: the guy who wrote those soaring words about liberty and limited government? He was also one of early America’s most skilled—and sometimes underhanded—political operators.

It’s surprising when you think about it. Jefferson genuinely believed in transparency, virtue in public life, and keeping government small. He wrote beautifully about these ideals. But when it came to actual politics? He played the game as hard as anyone, often using tactics that directly contradicted what he preached.

Jefferson’s public philosophy was straightforward. He thought America should be a nation of independent farmers—regular people who owned their own land and weren’t dependent on anyone else. He worried constantly about concentrated power, whether in government or in the hands of wealthy financiers or merchants. He believed people should be informed and engaged, and that government worked best when it stayed out of people’s lives.

His Declaration of Independence wasn’t just pretty rhetoric—it laid out a genuinely revolutionary idea: governments only have power because people agree to give it to them, and when governments stop serving the people, those people have the right to change things.

The Reality: How Jefferson Actually Operated

Here’s where it gets interesting. While Jefferson was writing about virtue and transparency, he was simultaneously running what today we’d recognize as opposition research, planting stories in the press, and organizing political operations—sometimes against people he was supposed to be working with.

The Freneau Setup: Paying for Attacks

The most blatant example happened in 1791. Jefferson was serving as Secretary of State under George Washington, which meant he was part of the administration. At the same time, he arranged for a guy named Philip Freneau to get a government job—technically as a translator. The real purpose? To give Freneau money to run a newspaper that would relentlessly attack Alexander Hamilton and other Federalists.

Think about that for a second. Jefferson was using his government position to fund media attacks on his own colleagues. When people called him out on it, he basically said, “Who, me? I have nothing to do with what Freneau publishes.” But the evidence shows Jefferson was actively encouraging and directing these attacks.

John Beckley: The Original Campaign Fixer

Jefferson also worked closely with John Beckley, who was essentially America’s first professional political operative. Beckley coordinated messaging, spread information (and sometimes misinformation) about opponents, and helped build the grassroots organization that would eventually become the Democratic-Republican Party.

This wasn’t a gentlemanly debate about ideas. This was organized political warfare—pamphlets, coordinated newspaper campaigns, and opposition research. Jefferson and James Madison quietly funded much of this work while maintaining public images as above-the-fray philosophers. We can’t know exactly what Jefferson said in every private conversation with Beckley, but the circumstantial evidence of coordination is convincing.

The Hamilton Rivalry: Ideological War

Jefferson’s conflict with Hamilton was both philosophical and deeply personal. Hamilton wanted a strong federal government, a national bank, and close ties with Britain. Jefferson saw all of this as a betrayal of the Revolution—a step toward creating the same kind of corrupt, elite-dominated system they’d just fought to escape.

But rather than just making his arguments publicly, Jefferson worked behind the scenes to undermine Hamilton’s policies. He encouraged Madison to lead opposition in Congress. He fed stories to friendly newspapers. He coordinated with Republican representatives to block Federalist initiatives.

The philosophical disagreement was real, but Jefferson’s methods were pure political calculation.

Turning on Washington: The Ultimate Betrayal?

Maybe the most damaging thing Jefferson did was secretly working against George Washington while still serving in his cabinet. By Washington’s second term, Jefferson had convinced himself that Washington was being manipulated by Hamilton and moving the country toward monarchy.

 Jefferson stayed in the cabinet, maintaining cordial relations with Washington in person, while privately organizing resistance to administration policies. He encouraged attacks on Washington in the press. He coordinated with opposition leaders. And he did all of this while Washington trusted him as a loyal advisor.

When Washington found out, he was devastated. The betrayal broke their relationship permanently.

The Burr Situation: Using People

Jefferson’s handling of Aaron Burr shows just how pragmatic he could be. Jefferson never really trusted Burr—thought he was too ambitious and unprincipled. But in 1800, when Jefferson needed to win the presidency, Burr was useful for delivering New York’s votes.

After winning, Jefferson kept Burr as vice president but froze him out of any real power. Once Burr’s usefulness ended (especially after he killed Hamilton in that duel), Jefferson completely abandoned him, eventually supporting an unsuccessful prosecution for treason.

Deceiving Congress

Another example of Jefferson’s political manipulation was the Louisiana Purchase, a massive land acquisition that doubled the size of the United States. Jefferson knew that under the Constitution he had no clear authority to acquire territory for the United States. He secured the purchase by keeping it quiet from both Congress and his political opponents until it was essentially finalized, which allowed him to avoid a debate that could have derailed the deal. Does this sound familiar?

So, What Do We Make of This?

Here’s the uncomfortable question: was Jefferson a hypocrite, or was he just being realistic about how politics actually works? His political manipulation was not always ethical, but it was effective, and he used those skills to achieve many of his political goals.

You could argue he was doing what he thought necessary to prevent Hamilton’s vision from taking over—that the ends justified the means. You could also argue that by using underhanded tactics, he corrupted the very democratic processes he claimed to be protecting.

My speculation: I think Jefferson was aware of the contradiction and wrestled with it. His private letters show moments of self-justification and lingering doubt. But ultimately, he kept doing it because he believed his vision for America was too important to lose by playing nice.

The Bottom Line

Thomas Jefferson remains one of our most brilliant political thinkers. But he was also willing to play dirty when he thought the stakes were high enough. That duality—beautiful ideals combined with hardball tactics—might actually make him more relevant today than ever. Because let’s be honest, that tension between principles and pragmatism hasn’t gone away in American politics.

Understanding both sides of Jefferson helps us see that even the founders we most revere weren’t simple heroes. They were complicated people operating in a messy political reality, trying to build something new while fighting over what that something should be.

The evidence for Jefferson’s political maneuvering is extensive and well established by historians. Some interpretations of his motivations involve educated speculation, but the actions themselves are documented in letters, newspaper archives, and contemporary accounts.

Reference List

Primary Sources

Founders Online – National Archives https://founders.archives.gov/

  • Digital collection of correspondence and papers from George Washington, Thomas Jefferson, Alexander Hamilton, Benjamin Franklin, John Adams, and James Madison. Essential for Jefferson’s own words and contemporaneous accounts of his political activities.

Library of Congress – Thomas Jefferson Exhibition https://www.loc.gov/exhibits/jefferson/

  • Comprehensive digital exhibition covering Jefferson’s life, philosophy, and political career with original documents and interpretive essays.

Thomas Jefferson Encyclopedia – Monticello https://www.monticello.org/site/research-and-collections/

  • Scholarly resource maintained by the Thomas Jefferson Foundation, covering specific topics including Jefferson’s relationships with Aaron Burr and other political figures.

Secondary Sources – Books

Chernow, Ron. Alexander Hamilton. New York: Penguin Press, 2004.

  • Pulitzer Prize-winning biography that extensively covers the Jefferson-Hamilton rivalry and Jefferson’s behind-the-scenes political maneuvering, including the Freneau affair. Particularly strong on the 1790s conflicts within Washington’s cabinet.

Chernow, Ron. Washington: A Life. New York: Penguin Press, 2010.

  • Provides Washington’s perspective on Jefferson’s activities within his administration and the betrayal Washington felt when learning of Jefferson’s covert opposition.

Ellis, Joseph J. American Sphinx: The Character of Thomas Jefferson. New York: Alfred A. Knopf, 1996.

  • National Book Award winner that explores Jefferson’s contradictions and complexities, particularly the gap between his philosophical writings and political practices.

Ferling, John. Jefferson and Hamilton: The Rivalry That Forged a Nation. New York: Bloomsbury Press, 2013.

  • Detailed examination of the ideological and personal conflict between Jefferson and Hamilton, showing how their struggle shaped early American politics and party formation.

Isenberg, Nancy. Fallen Founder: The Life of Aaron Burr. New York: Penguin Books, 2007.

  • Comprehensive biography of Burr that includes extensive coverage of his complex relationship with Jefferson, from their 1800 alliance through Jefferson’s eventual abandonment of his vice president.

Pasley, Jeffrey L. The Tyranny of Printers: Newspaper Politics in the Early American Republic. Charlottesville: University of Virginia Press, 2001.

  • Scholarly examination of how newspapers and partisan press became political weapons in the 1790s, with detailed coverage of Jefferson’s relationship with Philip Freneau and the National Gazette.

Secondary Sources – Journal Articles and Academic Papers

Sharp, James Roger. “The Journalist as Partisan: The National Gazette and the Origins of the First Party System.” The Virginia Magazine of History and Biography 97, no. 4 (1989): 391-420.

  • Academic analysis of Freneau’s National Gazette and its role in forming political opposition, including Jefferson’s involvement in funding and directing the publication.

Cunningham, Noble E., Jr. “John Beckley: An Early American Party Manager.” The William and Mary Quarterly 13, no. 1 (1956): 40-52.

  • Scholarly examination of Beckley’s role as America’s first professional political operative and his work organizing Jefferson’s political machine.

Historiographical Note

The interpretation of Jefferson’s political behavior has evolved over time. Earlier biographies (pre-1960s) tended to minimize or excuse his behind-the-scenes maneuvering, while more recent scholarship has been willing to examine the contradictions between his philosophy and practice more critically. The works cited above represent current historical consensus based on documentary evidence, though historians continue to debate Jefferson’s motivations and whether his tactics were justified given the political stakes he perceived.
