Grumpy opinions about everything.


This is the home of grumpy opinions.

The Strange Tale of Spontaneous Human Combustion

Did you ever run into an idea so strange that you can’t quite understand how anyone ever took it seriously? Recently, while reading about historical curiosities in Pseudoscience by Kang and Pedersen, I was reminded of one of the most enduring examples: spontaneous human combustion.

The classic image is always the same. Someone enters a room and finds a small pile of ashes where a person once sat. The body is nearly destroyed, yet the chair beneath it is barely scorched and the rest of the room looks strangely untouched. For centuries, this baffling scene was explained by a dramatic idea—that a person could suddenly burst into flames from the inside, without any external fire at all.

It sounds like something lifted straight from a gothic novel, but belief in spontaneous human combustion stretches back to at least the seventeenth century and reached its peak in the Victorian era. To understand why it gained such traction, it helps to look at the social attitudes of the time, the cases that convinced people it was real, and what modern forensic science eventually uncovered.

Much of the early belief rested on moral judgment rather than evidence. In the nineteenth century, spontaneous human combustion was widely accepted as a kind of divine punishment. Many of the alleged victims were described as heavy drinkers, often elderly, overweight, or socially isolated, and women were frequently overrepresented in the reports. To Victorian minds, this pattern felt meaningful. Alcohol was flammable, after all, and it seemed reasonable—at least then—to assume that a body saturated with spirits might somehow ignite. Sensational newspaper reporting amplified the mystery, presenting lurid details while glossing over inconvenient facts.

The idea gained intellectual credibility in 1746 when Paul Rolli, a Fellow of the Royal Society, formally used the term “spontaneous human combustion” while describing the death of Countess Cornelia Zangari Bandi. The involvement of a respected scientific figure gave the concept legitimacy that lingered for generations.

Several cases became canonical. Countess Bandi’s death in 1731 was described as leaving little more than ashes and partially intact legs, still clothed in stockings. In 1966, John Irving Bentley of Pennsylvania was found almost completely burned except for one leg, with his pipe discovered intact nearby. Mary Reeser, known as the “Cinder Woman,” died in Florida in 1951, leaving behind melted fat embedded in the rug near where she had been sitting. As recently as 2011, an Irish coroner ruled that spontaneous human combustion caused the death of Michael Faherty the previous December; his body was found badly burned near a fireplace in a room that showed little fire damage. Over roughly three centuries, about two hundred such cases have been cited worldwide.

Believers proposed explanations that ranged from the scientific-sounding to the overtly theological. Alcoholism was the most popular theory, with some physicians genuinely arguing that chronic drinking made the human body combustible. Earlier medical thinking leaned on imbalances of bodily humors, while later writers speculated about unknown chemical reactions producing internal heat. Religious interpretations framed these deaths as punishment for sin. Even in modern times, a few proponents have suggested that acetone buildup in people with alcoholism, diabetes, or extreme diets could somehow trigger combustion.

The idea was so culturally embedded that Charles Dickens famously killed off the alcoholic character Mr. Krook by spontaneous combustion in Bleak House. When critics objected, Dickens defended the plot choice by citing what he believed were credible historical and medical sources.

The illusion of the supernatural persisted because the circumstances were almost perfectly misleading. Victims were typically alone, elderly, or physically impaired, unable to respond quickly to a smoldering fire. The localized damage looked impossible to the untrained eye. Potential ignition sources were often destroyed in the fire itself. And dramatic storytelling filled in the gaps left by incomplete investigations.

What actually happens in these cases is far less mystical and far more unsettling. Modern forensic science points to an explanation known as the “wick effect.” In this scenario, there is always an external ignition source—often a cigarette, candle, lamp, or fireplace ember. Once clothing catches fire, heat melts the person’s body fat. That liquefied fat soaks into the clothing, which then behaves like a candle wick. The fire burns slowly and steadily, sometimes for hours, consuming much of the body while leaving nearby objects relatively unscathed.

This effect has been demonstrated experimentally. In the 1960s, researchers at Leeds University showed that cloth soaked in human fat could sustain a slow burn for extended periods once ignited. In 1998, forensic scientist John de Haan famously replicated the effect for the BBC by burning a pig carcass wrapped in a blanket. The result closely matched classic spontaneous combustion scenes: severe destruction of the body, with extremities left behind and limited damage to the surrounding room.

The reason these fires don’t usually engulf the entire space is simple physics. Flames rise more easily than they spread sideways, and the heat output of a wick-effect fire is relatively localized. It’s similar to standing near a campfire—you can be close without catching fire yourself.

Investigators Joe Nickell and John F. Fischer examined dozens of historical cases and found that every one involved a plausible ignition source, details that earlier accounts often ignored or downplayed. When these factors are restored to the narrative, the mystery largely disappears.

As science writer Benjamin Radford has pointed out, if spontaneous human combustion were truly spontaneous, we would expect it to occur randomly and frequently, in public places as well as private ones. Instead, it consistently appears in situations involving isolation and an external heat source.

The bottom line is straightforward. There is no credible scientific evidence that humans can burst into flames without an external ignition source. What has been labeled spontaneous human combustion is better understood as a tragic combination of accidental fire and the wick effect. The myth endured because it blended moral judgment, fear, and incomplete science into a compelling story. Today, forensic investigation has replaced superstition with explanation, even if the results remain unsettling.

Spontaneous human combustion survives as a reminder of how easily mystery fills the space where evidence is thin—and how patiently applied science eventually closes that gap.


Sources and Further Reading

Peer-reviewed forensic and medical analyses are available through the National Center for Biotechnology Information, including “So-called Spontaneous Human Combustion” in the Journal of Forensic Sciences (https://pubmed.ncbi.nlm.nih.gov/21392004/) and Koljonen and Kluger’s 2012 review, “Spontaneous human combustion in the light of the 21st century,” published in the Journal of Burn Care & Research (https://pubmed.ncbi.nlm.nih.gov/22269823/).

General scientific and historical overviews can be found in Encyclopædia Britannica’s article “Is Spontaneous Human Combustion Real?” (https://www.britannica.com/story/is-spontaneous-human-combustion-real), Scientific American’s discussion of the wick effect (https://www.scientificamerican.com/blog/cocktail-party-physics/burn-baby-burn-understanding-the-wick-effect/), and Live Science’s summary of facts and theories (https://www.livescience.com/42080-spontaneous-human-combustion.html).

Accessible explanatory pieces are also available from HowStuffWorks (https://science.howstuffworks.com/science-vs-myth/unexplained-phenomena/shc.htm), History.com (https://www.history.com/articles/is-spontaneous-human-combustion-real), Mental Floss (https://www.mentalfloss.com/article/22236/quick-7-seven-cases-spontaneous-human-combustion), and All That’s Interesting (https://allthatsinteresting.com/spontaneous-human-combustion). Wikipedia’s entries on spontaneous human combustion and the wick effect provide comprehensive background and references at https://en.wikipedia.org/wiki/Spontaneous_human_combustion and https://en.wikipedia.org/wiki/Wick_effect.

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.”  To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” where status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities that many still experience daily. But the term’s become a cultural flashpoint. Here’s the thing: real progress requires acknowledging that both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

 If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.

Bull Markets, Bear Markets and the Story Behind Wall Street’s Most Famous Animals

If you’ve ever caught a business news segment, you’ve probably heard anchors throwing around terms like “bull market” and “bear market” as if everyone just naturally knows what they mean. But beyond the basic idea that one’s good and one’s bad, the real mechanics of these market conditions—and how they got their animal nicknames—are pretty interesting.

How the Stock Market Works (The Quick Version)

Before we dive into bulls and bears, let’s cover the basics. The stock market is essentially a place where people buy and sell ownership stakes in companies. When you buy a share of stock, you’re purchasing a tiny piece of that company. The price of that share goes up or down based on how many people want to buy it versus how many want to sell it—classic supply and demand.

Companies sell shares to raise money for growth, and investors buy them hoping the company will do well and the stock price will increase. The overall “market” is tracked through indexes like the S&P 500 or Dow Jones Industrial Average, which measure how a group of major companies are performing. When most stocks are rising, we say the market is up; when most are falling, the market is down.

What Bull and Bear Markets Actually Mean

A bull market refers to a period when stock prices are rising, typically defined as a 20% or more increase from recent lows. During bull markets, investors are optimistic, companies are generally doing well, and people are more willing to take risks with their money. Bull markets are usually driven by a strong economy with low inflation and confident investors. Think of the economic boom of the late 1990s or the recovery after the 2008 financial crisis—those were classic bull markets where prices kept climbing for years.

A bear market is essentially the opposite: a general decline in the stock market over time, usually defined as a 20% or more price decline over at least a two-month period. During bear markets, investors get nervous, sell off their holdings, and pessimism spreads. When a 10% to 20% decline occurs, it’s classified as a correction; a correction often precedes a bear market, but not every correction turns into one. The Great Depression, the 2008 financial crisis, and the COVID-19 pandemic’s initial impact all triggered bear markets.
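
If it helps to see those thresholds in action, here’s a quick sketch in Python. The 20% and 10% cutoffs come from the rules of thumb above; the index values are made up, and real analysts use more careful definitions (closing highs, time windows, and so on), so treat this as an illustration rather than a trading tool.

```python
def classify_market(prices):
    """Label each price as bull, bear, correction, or neutral using the
    rough 20%/10% rules of thumb described above (illustrative only)."""
    labels = []
    peak = trough = prices[0]
    for p in prices:
        peak = max(peak, p)            # highest value seen so far
        trough = min(trough, p)        # lowest value seen so far
        drop = (peak - p) / peak       # decline from the running high
        gain = (p - trough) / trough   # rise from the running low
        if drop >= 0.20:
            labels.append("bear (20%+ below recent high)")
        elif drop >= 0.10:
            labels.append("correction (10-20% below recent high)")
        elif gain >= 0.20:
            labels.append("bull (20%+ above recent low)")
        else:
            labels.append("neutral")
    return labels

# Hypothetical index closes, just to exercise the rules.
closes = [100, 112, 125, 118, 108, 99, 92, 105, 121]
for close, label in zip(closes, classify_market(closes)):
    print(close, label)
```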

The Colorful History Behind the Terms

Now here’s where things get interesting. These terms didn’t come from some modern marketing genius—they trace back to 18th century London, and the story involves everything from old proverbs to violent animal fights to one of history’s biggest financial scandals.

The “bear” term came first. Etymologists point to an old proverb, warning that it is not wise “to sell the bear’s skin before one has caught the bear”. This saying was about the foolishness of counting on something before you actually have it. By the early 1700s, traders who engaged in short selling (betting that prices would fall) were called “bear-skin jobbers” because they sold a bear’s skin—the shares—before catching the bear. The term eventually got shortened to just “bears.”

The real watershed moment came with the South Sea Bubble of 1720. The South Sea Company was a British joint stock company founded by an act of Parliament in 1711, and in 1720, the company assumed most of the British national debt and convinced investors to give up state annuities for company stock sold at a very high premium. When everything collapsed, share prices dropped dramatically, producing an early “bear market,” and the episode became the subject of many literary works and went down in history as an infamous cautionary tale.

As for “bull,” the origins are a bit murkier. The first known instance of the market term “bull” popped up in 1714, shortly after the “bear” term emerged. Most historians think it arose as a natural counterpoint to “bear,” possibly influenced by bull-baiting and bear-baiting, two animal fighting sports popular at the time—though I should note that’s somewhat speculative.

There’s also a popular explanation about how the animals attack: bears swipe downward with their paws while bulls thrust upward with their horns, which nicely mirrors market movements. While that’s a helpful memory device, it’s probably more of a convenient coincidence or a retroactive description than the actual origin. The term “bull” originally meant a speculative purchase in the expectation that stock prices would rise, and was later applied to the person making such purchases.

Why This Still Matters

These metaphors have stuck around for three centuries because they work. They’re visceral and easy to remember—you can picture a charging bull or a hibernating bear without needing an economics degree. They also capture something real about market psychology: the aggressive optimism of bulls pushing prices up versus the defensive pessimism of bears hunkering down.

Understanding these terms helps you follow financial news and, more importantly, recognize when markets are shifting. Knowing you’re in a bull market might make you less surprised by rising prices, while recognizing a bear market can help you avoid panic-selling when things look grim.

Bull and bear markets are among those terms I’d heard for years without ever knowing their origin. This article is my attempt to explain them to myself.


Supply-Side Economics and Trickle-Down: What Actually Happened?

The Basic Question

You’ve probably heard politicians arguing about tax cuts—some promising they’ll supercharge the economy, others dismissing them as giveaways to the rich. These debates usually involve two terms that get thrown around like political footballs: “supply-side economics” and “trickle-down economics.” But what do these terms actually mean, and more importantly, do they work? After four decades of real-world experiments, we finally have enough data to answer that question.

Understanding Supply-Side Economics

Supply-side economics is a legitimate economic theory that emerged in the 1970s when the U.S. economy was struggling with both high inflation and high unemployment—a combination that traditional economic theories said shouldn’t happen. The core idea is straightforward: economic growth comes from producing more goods and services (the “supply” side), not just from boosting consumer demand.

The theory rests on three main pillars. First, lower taxes—the thinking is that if people and businesses keep more of their money, they’ll work harder, invest more, and create jobs. According to economist Arthur Laffer’s famous curve, there’s supposedly a sweet spot where lower tax rates can actually generate more government revenue because the economy grows so much. Second, less regulation removes government restrictions so businesses can innovate and operate more efficiently. Third, smart monetary policy keeps inflation in check while maintaining enough money in the economy to fuel growth.
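
Since the Laffer curve is really just a claim about the shape of a revenue function, a toy model makes the idea easier to see. The quadratic form below is my own illustrative assumption (zero revenue at a 0% rate and at a 100% rate, with a peak somewhere in between); it is not Laffer’s actual model, and the peak at 50% is an artifact of the assumed shape, not an empirical estimate of any real economy.

```python
def stylized_laffer_revenue(rate, max_base=1000.0):
    """Toy revenue model: the tax base shrinks linearly as the rate rises,
    so revenue = rate * base is zero at 0% and 100% and peaks in between.
    Purely illustrative; real revenue responses are far more complicated."""
    base = max_base * (1.0 - rate)   # assumed linear shrinkage of taxed activity
    return rate * base

for pct in range(0, 101, 10):
    rate = pct / 100
    print(f"{pct:3d}% tax rate -> revenue {stylized_laffer_revenue(rate):6.1f}")
# In this toy model revenue peaks at a 50% rate -- a consequence of the
# assumed quadratic shape, not an empirical finding.
```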

All of this sounds reasonable in theory. After all, who wouldn’t work harder if they kept more of their paycheck?

The Political Rebranding: Enter “Trickle-Down”

Here’s where economic theory meets political messaging. “Trickle-down economics” isn’t an academic term—it’s essentially a catchphrase, and not a complimentary one. Critics use it to describe supply-side policies when those policies mainly benefit wealthy people and corporations. The idea behind the name: give tax breaks to rich people and big companies, and the benefits will eventually “trickle down” to everyone else through job creation, higher wages, and economic growth.

Here’s the interesting part: no economist actually calls their theory “trickle-down economics.” Even David Stockman, President Reagan’s own budget director, later admitted that “supply-side” was basically a rebranding of “trickle-down” to make tax cuts for the wealthy easier to sell politically. So while they’re not identical concepts, they’re two sides of the same coin.

The Reagan Revolution: Testing the Theory

Ronald Reagan became president in 1981 and implemented the biggest supply-side experiment in U.S. history. He slashed the top tax rate from 70% down to 50%, and eventually to just 28%, arguing this would unleash economic growth that would lift all boats.

The results were genuinely mixed. On one hand, the economy created about 20 million jobs during Reagan’s presidency, unemployment fell from 7.6% to 5.5%, and the economy grew by 26% over eight years. Those aren’t small achievements.

But the picture gets more complicated when you look deeper. The tax cuts didn’t pay for themselves as promised—they reduced government revenue by about 9% initially. Reagan had to backtrack and raise taxes multiple times in 1982, 1983, 1984, and 1987 to address the mounting deficit problem. Income inequality increased significantly during this period, and surprisingly, the poverty rate at the end of Reagan’s term was essentially the same as when he started. Perhaps most telling, government debt more than doubled as a percentage of the economy.

There’s another wrinkle worth mentioning: much of the economic recovery happened because Federal Reserve Chairman Paul Volcker had already broken the back of inflation through tight monetary policy before Reagan’s tax cuts took effect. Disentangling how much credit Reagan’s policies deserve versus Volcker’s groundwork is genuinely difficult.

The Pattern Repeats

The story didn’t end with Reagan. George W. Bush enacted major tax cuts in 2001 and 2003, especially benefiting wealthy Americans. The result? Economic growth remained sluggish, deficits ballooned, and income inequality continued its upward march.

Then there’s Bill Clinton—the plot twist in this story. In 1993, Clinton actually raised taxes on the wealthy, pushing the top rate from 31% back up to 39.6%. Conservative economists predicted economic disaster. Instead, the economy boomed with what was then the longest sustained growth period in U.S. history, creating 22.7 million jobs. Even more remarkably, the government ran a budget surplus for the first time in decades.

Donald Trump’s 2017 tax cuts, focused heavily on corporations, showed minimal wage growth for workers while generating significant stock buybacks that primarily benefited shareholders—and yes, larger deficits. Trump’s subsequent economic policies in his second term have been characterized by such volatility that reasonable long-term assessments remain difficult.

The Kansas Experiment: A Modern Test Case

At the state level, Kansas Governor Sam Brownback implemented one of the boldest modern experiments in supply-side policy between 2012 and 2017, dramatically slashing income taxes especially for businesses. Proponents called it a “real live experiment” that would demonstrate supply-side principles in action.

Instead of unleashing growth, Kansas faced severe budget shortfalls that forced cuts to education and infrastructure. Economic growth actually lagged behind neighboring states that didn’t implement such aggressive cuts, and the state legislature eventually reversed many of the tax reductions. This case has become a frequently cited cautionary tale for critics of supply-side policies.

What Does Half a Century of Data Show?

After 50 years of real-world experiments, researchers finally have enough data to move beyond political rhetoric. A comprehensive study analyzed tax policy changes across 18 developed countries over five decades, looking at what actually happened after major tax cuts for the wealthy.

The findings are remarkably consistent. Tax cuts for the rich reliably increase income inequality—no surprise there. But they show no significant effect on overall economic growth rates and no significant effect on unemployment. Perhaps most damaging to the theory, they don’t “pay for themselves” through increased growth. At best, about one-third of lost revenue gets recovered through expanded economic activity.

In simpler terms: when you cut taxes for wealthy people, wealthy people get wealthier. The promised broader benefits largely fail to materialize. The 2022 World Inequality Report reinforced these conclusions, finding that the world’s richest 10% continue capturing the vast majority of all economic gains, while the bottom half of the population holds just 2% of all wealth.

Why the Theory Doesn’t Match Reality

When you think about it logically, the disconnect makes sense. If you give a tax cut to someone who’s already wealthy, they’ll probably save or invest most of it—they were already buying what they wanted and needed. Their daily spending habits don’t change much. But if you give money to someone who’s struggling to pay bills or afford necessities, they’ll spend it immediately, directly stimulating economic activity.

Economists call this concept “marginal propensity to consume,” and it explains why giving tax breaks to working and middle-class people actually does more to boost the economy than supply-side cuts focused on the wealthy. A dollar in the hands of someone who needs to spend it has more immediate economic impact than a dollar added to an already-substantial investment portfolio.
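
Here’s a rough way to see that difference in numbers. The propensities and the simple 1 / (1 − MPC) multiplier below are textbook simplifications and assumed values, not measurements; they only illustrate why a dollar to a cash-strapped household circulates more than a dollar to a wealthy one.

```python
def naive_spending_multiplier(mpc):
    """First-pass Keynesian multiplier: each dollar received is re-spent
    at rate `mpc`, then re-spent again, and so on, giving 1 / (1 - mpc)
    of total spending. A textbook simplification, not a forecast."""
    return 1.0 / (1.0 - mpc)

# Assumed, illustrative propensities: a struggling household spends most
# of an extra dollar; a wealthy household saves or invests most of it.
for label, mpc in [("lower-income household", 0.9), ("wealthy household", 0.2)]:
    total = 100 * naive_spending_multiplier(mpc)
    print(f"$100 to a {label} (MPC {mpc}) -> about ${total:.0f} of total spending")
```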

The Bottom Line

After 40-plus years of repeated experiments, the pattern is clear. Supply-side policies and trickle-down approaches consistently increase deficits, widen inequality, and fail to significantly boost overall economic growth or create more jobs than alternative policies. Meanwhile, periods with higher taxes on the wealthy, like the Clinton years, saw strong growth, robust job creation, and balanced budgets.

The Nuance Worth Keeping

None of this means all tax cuts are bad or that high taxes are always good—economics is rarely that simple. The critical questions are: who receives the tax cuts, and what outcomes do you realistically expect? Targeted tax cuts for working families, small businesses, or specific industries facing genuine challenges can serve as effective policy tools. Child tax credits, research and development incentives, or relief for struggling sectors might accomplish specific goals.

But the evidence accumulated over four decades is clear: broad tax cuts focused primarily on the wealthy and large corporations don’t deliver the promised economic benefits for everyone else. The benefits don’t trickle down in any meaningful way.

You’ll keep hearing these arguments for years to come. Politicians will continue promising that tax cuts for businesses and the wealthy will boost the entire economy. Now you know what the actual evidence shows, and you can judge those promises accordingly.



America’s Healthcare Paradox: Why We Pay Double and Get Less

The healthcare debate in America often circles back to a fundamental question: should we move toward a single-payer system, or is our current mixed public-private model the better path forward? It’s a conversation that gets heated quickly, but when you strip away the politics and look at how different systems actually function around the world, some interesting patterns emerge.

What We Mean by Single-Payer

A single-payer healthcare system means that one entity—usually the government or a government-related organization—pays for all covered healthcare services. Doctors and hospitals can still be private (and usually are), but instead of dealing with dozens of different insurance companies, they bill one source. It’s a lot like Medicare, which is why proponents often call it “Medicare-for-all”.

The key thing to understand is that single-payer isn’t necessarily the same as socialized medicine. In Canada’s system, for instance, the government pays the bills, but doctors are largely in the private sector and hospitals are controlled by private boards or regional health authorities rather than being part of the national government. Compare that to the UK’s National Health Service, where many hospitals and clinics are government-owned and many doctors are government employees.

America’s Current Patchwork

The United States operates what might charitably be called a “creative” approach to healthcare—a complex mix of employer-sponsored private insurance, government programs like Medicare, Medicaid, and the VA system, individual marketplace plans, and direct out-of-pocket payments. Government already pays roughly half of total US health spending, but benefits, cost-sharing, and networks vary widely between plans, with little overall coordination. In 2023, private health insurance spending accounted for 30 percent of total national health expenditures, Medicare covered 21 percent, and Medicaid covered 18 percent. Most of the remainder was either paid out of pocket by private citizens or was written off by providers as uncollectible.

Here’s where it gets expensive. U.S. health care spending grew 7.5 percent in 2023, reaching $4.9 trillion, or $14,570 per person, and accounting for 17.6 percent of the nation’s GDP. National health spending for 2024 is expected to have exceeded $5.3 trillion, roughly 18 percent of GDP, and is projected to reach 20.3 percent of GDP by 2033.
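
For anyone who wants to sanity-check those headline figures, the arithmetic is simple. In the sketch below, the total spending number is the one cited above, while the population and GDP values are my own rounded assumptions, used only to show that the per-person and share-of-GDP figures hang together.

```python
# Back-of-the-envelope check of the 2023 figures cited above.
total_spending = 4.9e12   # $4.9 trillion (cited above)
population = 336e6        # ~336 million people (assumed, rounded)
gdp = 27.7e12             # ~$27.7 trillion US GDP in 2023 (assumed, rounded)

per_person = total_spending / population
share_of_gdp = total_spending / gdp

print(f"Per person: ${per_person:,.0f}")      # roughly $14,600
print(f"Share of GDP: {share_of_gdp:.1%}")    # roughly 17.7%
```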

For a typical American family, the costs are real and rising. In 2024, the estimated cost of healthcare for a family of four in an employer-sponsored health plan was $32,066.

The European Landscape

Europe doesn’t have one healthcare model—it has several, and they’re all quite different from what we have in the States. Most European countries, including all 27 members of the European Union, have universal healthcare systems, but the details vary considerably.

Countries like the UK, Sweden, and Norway operate what are essentially single-payer systems: the government alone pays for and provides healthcare services, directly owning most facilities and employing most clinical and related staff, all funded through taxation. Then you have countries like Germany and Belgium that use “sickness funds”—non-profit funds that don’t market, cherry-pick patients, set their own premiums or provider payment rates, determine their own benefits, earn profits, or have investors. They’re quasi-public institutions, not private insurance companies as we know them in America. Some systems, such as the Netherlands or Switzerland, rely on mandatory, individually purchased private insurance with tight regulation and subsidies, achieving universal coverage within a structured, competitive market.

The French System

France is particularly noted for a successful universal, government-run health insurance system, usually described as single-payer with supplemental private coverage. All legal residents are automatically covered through the national health insurance program, which is funded by payroll taxes and general taxation.

Most physicians and hospitals are private or nonprofit, not government employees or facilities. Patients generally have free choice of doctors and specialists, though coordinating through a primary care physician improves access and reimbursement. The national insurer pays a large portion of medical costs (often 70–80%), while voluntary private supplemental insurance covers most remaining out-of-pocket expenses such as copays and deductibles.
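
A concrete, hypothetical bill shows how that split works in practice. The €100 charge, the 70% statutory rate, and the assumption that supplemental insurance picks up most of the remainder are all illustrative numbers of mine, not official French reimbursement figures.

```python
# Hypothetical example of French-style cost sharing on a single bill.
bill = 100.00            # assumed charge in euros
statutory_rate = 0.70    # national insurer pays 70-80%; using the low end here
statutory_pays = bill * statutory_rate
remainder = bill - statutory_pays
supplemental_pays = remainder * 0.9   # assumed: supplemental covers most of the rest
patient_pays = remainder - supplemental_pays

print(f"National insurer: €{statutory_pays:.2f}")
print(f"Supplemental insurer: €{supplemental_pays:.2f}")
print(f"Patient out of pocket: €{patient_pays:.2f}")
```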

France is known for spending significantly less per capita than the United States. Cost controls come from nationally negotiated fee schedules and drug pricing rather than limits on access.

What’s striking is that in 2019, US healthcare spending reached $11,072 per person—over double the average of $5,505 across wealthy European nations. Yet despite spending roughly twice as much per person, American health outcomes often lag behind.

The Outcomes Question

This is where the comparison gets uncomfortable for American exceptionalism. The U.S. has the lowest life expectancy at birth among comparable wealthy nations, the highest death rates for avoidable or treatable conditions, and the highest maternal and infant mortality.

In 2023, life expectancy in comparable countries was 82.5 years, which is 4.1 years longer than in the U.S. Japan manages this with healthcare spending at just $5,300 per capita, while Americans spend more than double that amount.

Now, it’s important to note that healthcare systems don’t operate in a vacuum. Life expectancy is influenced by many factors beyond medical care—diet, exercise, smoking, gun violence, drug overdoses, and social determinants of health all play roles. But when you’re spending twice as much and getting worse results, it suggests the system itself might be part of the problem.

Advantages of Single-Payer Systems

The case for single-payer rests on several compelling points. First, administrative simplicity translates to real cost savings. A study found that the administrative burden of health care in the United States was 27 percent of all national health expenditures, with the excess administrative cost of the private insurer system estimated at about $471 billion in 2012 compared to a single-payer system like Canada’s. That’s more than one dollar out of every four of total healthcare spending going to paperwork, billing disputes, and insurance company profit and overhead before any patient receives care.
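
To make that overhead share concrete, here’s a small comparison. The 27% figure is the one from the study cited above; the 12% alternative and the $1 billion budget slice are assumptions of mine, standing in for a lower-overhead system purely for illustration.

```python
# Illustrative only: how much of each healthcare dollar reaches actual care
# under different administrative-overhead shares.
budget = 1_000_000_000  # an assumed $1 billion slice of health spending

for system, admin_share in [("US mixed-payer (27%, per study cited)", 0.27),
                            ("simpler-payer scenario (12%, assumed)", 0.12)]:
    admin = budget * admin_share
    care = budget - admin
    print(f"{system}: ${admin:,.0f} overhead, ${care:,.0f} toward care")
```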

Universal coverage is another major advantage. In a properly functioning single-payer system, nobody goes bankrupt from medical bills, nobody delays care because they can’t afford it, and nobody loses coverage when they lose their job. The peace of mind that comes with knowing you’re covered regardless of employment status or pre-existing conditions is difficult to quantify but enormously valuable.

Single-payer systems also have significant negotiating power. When one entity is buying drugs and services for an entire nation, pharmaceutical companies and medical device manufacturers have much less leverage to charge whatever they want. This helps explain why prescription drug prices in other countries are often a fraction of prices in the U.S.

Disadvantages and Trade-offs

The critics of single-payer systems aren’t wrong about everything. Wait times are a genuine concern in some systems. When prices and overall budgets are tightly controlled, some countries experience longer waits for selected elective surgeries, imaging, or specialty visits, especially if investment lags demand.

In 2024, Canadian patients experienced a median wait time of 30 weeks between specialty referral and first treatment, up from 27.2 weeks in 2023, with rural areas facing even longer delays. For procedures like elective orthopedic surgery, patients wait an average of 39 weeks in Canada.

However, it’s crucial to understand that wait times are not a result of the single-payer system itself but of system management, as wait times vary significantly across different single-payer and social insurance systems. Many European countries with universal coverage don’t experience the same wait time issues that plague Canada.

The transition costs are also substantial. Moving from our current system to single-payer would disrupt a massive industry. Over fifteen percent of our economy is related to health care, with half spent by the private sector. Around 160 million Americans currently have insurance through their employers, and transitioning all of them to a government-run plan would be an enormous administrative and political challenge.

A large national payer can be slower to change benefit designs or adopt new payment models; shifting political majorities can affect funding levels and benefit generosity.

Taxes would need to increase significantly to fund such a system, though proponents argue this would be offset by the elimination of insurance premiums, deductibles, and co-pays. It’s essentially a question of whether you’d rather pay through taxes or through premiums—the money has to come from somewhere.

Advantages of America’s Mixed System

Our current system does have some genuine strengths. Innovation thrives in the American healthcare market. The profit motive, for all its flaws, does drive pharmaceutical research and medical device development. American medical schools and research institutions lead the world in many areas of medicine. Academic medical centers and specialty hospitals deliver advanced procedures and complex care that attract patients internationally.

The system also offers more choice for those who can afford it. If you have good insurance, you typically face shorter wait times for elective procedures and can often see specialists without lengthy delays. Americans with high-quality employer-sponsored coverage give their plans relatively high ratings.

Competition between providers can theoretically drive quality improvements, though this effect is often undermined by the complexity of the market and the difficulty consumers face in shopping for healthcare.

Disadvantages of the Current U.S. System

The most glaring problem is simple: the United States remains the only developed country without universal healthcare. Roughly 30 million Americans remain uninsured despite gains under the Affordable Care Act, and many of these gains will soon be lost. Being uninsured in America isn’t just an inconvenience—it can be deadly. People delay care, skip medications, and avoid preventive screenings because of cost concerns.

The administrative complexity is staggering. Doctors spend enormous amounts of time dealing with insurance companies, prior authorizations, and billing disputes. Hospitals employ armies of billing specialists just to navigate the maze of different insurance plans, each with its own rules, formularies, and coverage determinations.  U.S. administrative costs account for ~25% of all healthcare spending, among the highest in the world.

Medical bankruptcy is uniquely American. Even people with insurance can find themselves financially devastated by serious illness. High deductibles, surprise bills, and out-of-network charges create a minefield of potential financial catastrophe.  Studies of U.S. bankruptcy filings over the past two decades have consistently found that medical bills and medical problems are a major factor in a large share of consumer bankruptcies. Recent summaries suggest that roughly two‑thirds of US personal bankruptcies involve medical expenses or illness-related income loss, and around 17% of adults with health care debt report declaring bankruptcy or losing a home because of that debt.

The system is also profoundly inequitable. Quality of care often depends more on your job, your income, and your zip code than on your medical needs. Out-of-pocket costs per capita have increased as compared to previous decades and the burden falls disproportionately on those least able to afford it.

What Europe Shows Us

The European experience demonstrates that there isn’t one “right” way to achieve universal coverage. The UK’s NHS, Germany’s sickness funds, and France’s hybrid system all manage to cover everyone at roughly half the per-capita cost of American healthcare. Universal Health Coverage exists in all European countries, with healthcare financing almost universally government managed, either directly through taxation or semi-directly through mandated and government-subsidized social health insurance.

They’ve accomplished this through various combinations of centralized negotiation of drug prices, global budgets for hospitals, strong primary care systems that serve as gatekeepers to more expensive specialist care, emphasis on preventive services, and regulation that prevents insurance companies from cherry-picking healthy patients.

Are these systems perfect? No. One of the major disadvantages of centralized healthcare systems is long wait lists for non-urgent care, though Americans often wait as long as, or longer than, patients in most universal-coverage countries for routine primary care appointments. Many European countries are wrestling with funding challenges as populations age and expensive new treatments become available. But they’ve solved the fundamental problem that America hasn’t: they ensure everyone has access to healthcare without the risk of financial ruin.

The Path Forward?

The debate over healthcare in America often presents false choices. We don’t have to choose between Canadian-style single-payer and our current system—there are multiple models we could adapt. We could move toward a German-style system with heavily regulated non-profit insurers. We could create a robust public option that competes with private insurance. We could expand Medicare gradually by lowering the eligibility age over time.

What’s clear from international comparisons is that the status quo is unusually expensive and produces mediocre results. We’re paying premium prices for economy outcomes. Whether single-payer is the answer depends partly on your priorities. Do you value universal coverage and cost control more than unlimited choice? Are you willing to accept potentially longer wait times for non-urgent care in exchange for lower costs and universal access? How much do you trust government to manage a program this large?

These aren’t easy questions, and reasonable people disagree. But the evidence from Europe suggests that universal coverage at reasonable cost is achievable—it just requires us to make some choices about what we value most in a healthcare system.



Happy New Year

From Reagan Conservative to Social Democrat: A Political Evolution

Political beliefs rarely change overnight. Mine certainly didn’t. My journey from Reagan-era conservatism to social democracy unfolded slowly, shaped less by ideology than by lived experience and an accumulating body of evidence about what actually works.

Morning in America

Like many Americans of my generation, my political awakening came during the Reagan years. The message was optimistic and reassuring: limited government, free markets, individual responsibility, and a strong national defense would restore American greatness. Reagan’s charisma made complex economic ideas feel like common sense. Lower taxes would spur growth. Deregulation would unleash innovation. Markets would reward effort and discipline.

That worldview was personally affirming. Success was earned. Failure reflected poor choices. Government’s role should be narrow—defense, public order, and little else. Social programs, we were told, fostered dependency rather than opportunity. It was a coherent framework, and for a time, it seemed to fit the facts.

Cracks in the Foundation

By the 1990s, inconsistencies began to surface. Economic growth continued, but inequality widened. Entire industrial communities collapsed despite residents working hard and playing by the rules. The benefits of “trickle-down” economics were not trickling very far.

Personal experiences made the abstractions impossible to ignore. Families lost health insurance because of pre-existing conditions. Medical bills pushed insured households into bankruptcy. These outcomes weren’t failures of character; they were failures of systems.

The 2008 financial crisis shattered whatever illusions remained. Financial institutions that preached personal responsibility engaged in reckless speculation, then received massive government bailouts, while homeowners were left to face foreclosure. Like millions of others, I lost nearly half of my retirement savings. The contradiction was glaring: socialism for the wealthy, harsh market discipline for everyone else. Individual responsibility meant little when systemic risk brought down the entire economy.

A Turning Point

Job loss during the Great Recession completed the lesson. Despite qualifications and work history, employment opportunities vanished. Unemployment benefits—once easy to dismiss in theory as handouts—became essential in practice. The bootstrap mythology doesn’t hold up when the floor is pulled away.

This period also exposed the fragility of employer-based healthcare and retirement systems. COBRA coverage was unaffordable. 401(k)s evaporated. The safety net that once seemed excessive suddenly looked inadequate. Meanwhile, countries with stronger social protections weathered the recession better than the United States.

Seeing Other Models

Travel and research broadened my perspective further. Nations like Germany, Denmark, France, and Sweden paired market economies with robust social programs—and consistently outperformed the U.S. on measures of health, social mobility, and life satisfaction.

These were not stagnant, overregulated societies. They were thriving capitalist democracies that simply made different choices about public investment and risk-sharing.

Writers like Joseph Stiglitz and Thomas Piketty documented how concentrated wealth undermines both democracy and long-term growth. Historical evidence showed that America’s most prosperous era—the post-World War II boom—coincided with high marginal tax rates, strong unions, and major public investment.

Healthcare Changed Everything

Healthcare ultimately crystallized my shift. The U.S. spends far more per capita than any other nation yet produces worse outcomes on many basic measures.

As a physician, I watched patients struggle with insurance denials, opaque pricing, and medical debt. Healthcare markets don’t function like normal markets. You can’t comparison shop during a heart attack. When insurers profit by denying care, the system aligns against patients. Medical bankruptcy is virtually unknown in countries with universal coverage—for a reason. We have a system where the major goal of health insurance companies is making a profit for their investors—not providing affordable healthcare to their subscribers. 

Climate and Collective Action

Climate change further exposed the limits of market fundamentalism. Individualism and laissez-faire policies have failed to account for shared environmental costs and long-term consequences. Markets alone cannot price long-term environmental harm or coordinate collective action at the necessary scale. Addressing climate risk requires regulation, public investment, and democratic planning.

What Social Democracy Is—and Isn’t

Social democracy is not the rejection of capitalism. It is regulated capitalism with guardrails—markets where they work well, public systems where markets fail. Healthcare, education, infrastructure, and basic income security perform better with strong public involvement.

This differs from democratic socialism, a distinction I’ve explored elsewhere. Social democracy embraces entrepreneurship and competition while preventing monopoly power, protecting workers, and taxing fairly to fund shared prosperity.

As sociologist Lane Kenworthy notes, the U.S. already has elements of social democracy—Social Security, Medicare, public education—we simply underfund them compared to European nations.

A Pragmatic Conclusion

My evolution wasn’t ideological betrayal; it was pragmatic learning. I adjusted my beliefs based on outcomes, not slogans. Countries with strong social democracies routinely outperform the U.S. on health, mobility, education, and even business competitiveness.

True prosperity requires both entrepreneurial freedom and collective investment. The choice isn’t markets or government—it’s how to balance them intelligently. This lesson took me decades to learn, but the evidence now feels hard to ignore.

References

  1. Federal Reserve History – The Great Recession
    Overview of causes, systemic failures, and economic consequences of the 2007–2009 financial crisis.
    https://www.federalreservehistory.org/essays/great-recession
  2. OECD – Social Protection and Economic Resilience
    Comparative data on how countries with stronger social safety nets performed during economic downturns.
    https://www.oecd.org/economy
  3. World Happiness Report (United Nations / Oxford)
    Cross-national comparisons of well-being, social trust, and economic security.
    https://worldhappiness.report
  4. Joseph Stiglitz – Inequality and Economic Growth (IMF Finance & Development)
    Analysis of how income concentration undermines long-term economic performance and democracy.
    https://www.imf.org/en/Publications/fandd/issues/2019/09/inequality-and-economic-growth-stiglitz
  5. Thomas Piketty – Capital in the Twenty-First Century (Data Companion & Summaries)
    Historical evidence on wealth concentration and taxation in advanced economies.
    https://wid.world
  6. Tax Policy Center – Historical Top Marginal Income Tax Rates
    U.S. tax rate history showing high marginal rates during the post-war economic boom.
    https://www.taxpolicycenter.org/statistics/historical-highest-marginal-income-tax-rates
  7. The Commonwealth Fund – U.S. Health Care from a Global Perspective
    Comparative analysis of health spending, outcomes, and access across developed nations.
    https://www.commonwealthfund.org/publications/issue-briefs/2023/jan/us-health-care-global-perspective-2022
  8. OECD Health Statistics
    International comparisons of healthcare costs, outcomes, and system performance.
    https://www.oecd.org/health/health-data.htm
  9. IPCC Sixth Assessment Report – Synthesis Report
    Scientific consensus on climate change risks and the need for coordinated public action.
    https://www.ipcc.ch/report/ar6/syr
  10. Lane Kenworthy – Social Democratic Capitalism
    Comparative research on social democracy, public investment, and economic performance.
    https://lanekenworthy.net

Why We Make Promises to Ourselves Every January: The History of New Year’s Resolutions

New Year’s resolutions—a practice where individuals set goals or make promises to improve their lives in the upcoming year—have a rich and varied history spanning thousands of years. While the concept of self-improvement at the start of a new year feels distinctly modern, its origins are deeply rooted in ancient civilizations and religious traditions that understood the psychological power of fresh starts.

Origins of New Year’s Resolutions

The tradition of making promises at the start of a new year can be traced back over 4,000 years to ancient Babylon. During their 12-day festival called Akitu, held in mid-March to coincide with the spring harvest and planting season, Babylonians made solemn vows to their gods. These promises typically involved practical matters like repaying debts and returning borrowed items, reflecting the agricultural society’s emphasis on community obligations and divine favor. The Babylonians believed that success in fulfilling these promises would curry favor with their deities, ensuring good harvests and prosperity in the year ahead.

The practice evolved significantly when Julius Caesar reformed the Roman calendar in 46 BCE and established January 1 as the official start of the new year. This wasn’t an arbitrary choice—January was named after Janus, the two-faced Roman god of beginnings, endings, doorways, and transitions. The symbolism was perfect: one face looking back at the year past, the other gazing forward to the future. Romans offered sacrifices to Janus and made promises of good conduct for the coming year, combining reflection on past mistakes with optimism about future improvements.

By the Middle Ages, the focus shifted dramatically toward religious observance. In early Christianity, the first day of the year became a time of prayer, spiritual reflection, and making pious resolutions aimed at becoming better Christians. One of the most colorful New Year’s traditions from this era was the “Peacock Vow,” practiced by Christian knights. At the end of the Christmas season, these knights would reaffirm their commitment to knightly virtue while feasting on roast peacock at elaborate New Year’s celebrations. The peacock, a symbol of pride and nobility, served as the centerpiece for vows promising good behavior and chivalric deeds during the coming year.

In the 17th century, Puritans brought particular intensity to the practice of New Year’s resolutions, focusing them squarely on spiritual and moral improvement. Rather than the broad promises of earlier eras, Puritan resolutions were detailed and specific. They committed to avoiding pride and vanity, practicing charity and liberality toward others, refraining from revenge even when wronged, controlling anger in daily interactions, speaking no evil of their neighbors, and living every aspect of their lives aligned with strict religious principles. Beyond these behavioral commitments, they also resolved to study scriptures diligently throughout the year, improve their religious devotion on a weekly basis, and continually renew their dedication to God. These resolutions were taken with utmost seriousness, often recorded in personal journals and reviewed regularly.

In 1740, John Wesley, the founder of Methodism, formalized this spiritual approach by creating the Covenant Renewal Service, traditionally held on New Year’s Eve or New Year’s Day. These powerful gatherings encouraged participants to reflect deeply on the past year’s failings and successes while making resolutions for spiritual growth in the year ahead. This tradition continues in many Methodist churches today.

Interestingly, the first known use of the specific phrase “New Year’s resolution” appeared in a Boston newspaper in 1813. The article took a humorous tone, noting how people broke their New Year’s vows almost as soon as they made them, a wry observation suggesting that nothing much has changed over the last 212 years.

The Modern Evolution of New Year’s Resolutions

The secularization of New Year’s resolutions accelerated during the 19th and 20th centuries as Western societies became increasingly diverse and less uniformly religious. Self-improvement and personal growth gradually took precedence over religious vows, though the underlying psychology remained similar. The rise of print media played a crucial role in popularizing the practice beyond religious communities. Newspapers and magazines began publishing advice columns on how to set and achieve goals, turning what had been a primarily spiritual practice into a secular ritual of self-betterment.

The industrial revolution and urbanization also influenced the nature of resolutions. As more people moved to cities and took on wage labor, resolutions began to reflect modern concerns like career advancement, financial stability, and managing the stress of urban life. The self-help movement of the 20th century, spurred by books like Dale Carnegie’s “How to Win Friends and Influence People” and Norman Vincent Peale’s “The Power of Positive Thinking,” further embedded the idea that individuals could transform themselves through conscious effort and goal-setting.

By the 21st century, resolutions were firmly established in Western culture as a beloved tradition of hope and renewal, no longer tied to any particular religious framework. The internet age brought new dimensions to the practice, with social media allowing people to publicly declare their resolutions, fitness tracking apps enabling data-driven self-improvement, and online communities providing support and accountability.

Common New Year’s Resolutions

Resolutions tend to reflect both cultural priorities and universal human aspirations. When researchers survey what people resolve to change, recurring themes emerge that tell us something about areas of discontent in contemporary life. Health and fitness consistently dominate the list, with millions of people vowing to lose weight, exercise more regularly, and eat healthier foods. The popularity of these goals reflects our sedentary modern lifestyles, abundant processed foods, and the cultural premium placed on physical appearance and wellness.

Personal development goals are another major category. People promise themselves they will finally learn that new skill they’ve been putting off, read more books instead of scrolling through social media, and manage their time better to reduce stress and increase productivity. These resolutions speak to a desire for intellectual growth and a nagging sense that we’re not living up to our full potential.

Financial goals also rank high on most people’s resolution lists. Many resolve to save more money for the future, pay off debts that have been accumulating, or stick to a budget instead of impulse spending. These financial resolutions often stem from anxiety about economic security and a recognition that small daily choices compound into major financial consequences over time.

Relationship and community-focused resolutions reflect our social nature and the loneliness epidemic affecting many developed nations. People vow to spend more quality time with family and friends rather than staying busy with work and distractions. They plan to volunteer and to give back to their communities in meaningful ways. They hope to strengthen the social bonds that are crucial to happiness and longevity.

Finally, breaking bad habits remains a perennial favorite. Traditional vices like smoking and excessive alcohol consumption still top many lists, but modern resolutions also target newer concerns like limiting screen time and reducing smartphone addiction. These goals acknowledge how difficult it is to maintain healthy habits in an environment designed to encourage overconsumption and instant gratification.

The Success Rate of Resolutions

Despite their enduring popularity, New Year’s resolutions are notoriously difficult to keep. Multiple studies estimate that approximately 80% of resolutions fail by February, often crashing and burning within just a few days of January 1st. The reasons for this high failure rate are both psychological and practical. Many people set overly ambitious goals without considering the realistic constraints of their lives or the sustained effort needed for meaningful change. Others make vague resolutions like “get healthier” without specific action steps or measurable milestones.

Research in behavioral psychology suggests that setting realistic, measurable, and time-bound goals—often called SMART goals (Specific, Measurable, Achievable, Relevant, and Time-bound)—can significantly improve success rates. Rather than resolving to “exercise more,” for example, a SMART goal would be “go to the gym for 30 minutes every Monday, Wednesday, and Friday morning.” The specificity provides clear direction, and the measurability allows for tracking progress and celebrating small victories along the way.

However, it’s worth noting that most people approach their New Year’s resolutions more as a fun tradition than with serious anticipation that they will actually keep them. There’s a ritualistic, almost playful quality to the practice—we know the odds are against us, but we participate anyway, embracing the hopeful symbolism of a fresh start even if we suspect we’ll be back to our old habits before Valentine’s Day.

The Significance of Resolutions Today

New Year’s resolutions persist across centuries and cultures because they align with a fundamental human desire for self-improvement and the psychological comfort of fresh starts. The appeal of marking time with calendars and treating January 1st as somehow special—despite being astronomically arbitrary—speaks to our need for narrative structure in our lives. Whether rooted in ancient Babylonian pledges to repay debts, Roman sacrifices to Janus, Christian vows of spiritual renewal, or modern goals to lose ten pounds, resolutions represent an enduring belief in the potential for change.

The tradition reminds us that humans have always struggled with the gap between who we are and who we aspire to be, and that we’ve always believed, however naively, that marking a new beginning on the calendar might help us bridge that gap. Even if our resolutions fail more often than they succeed, the very act of making them reaffirms our agency and our hope that we can become better versions of ourselves with just a bit of conscious effort.

Sources:

History.com provides comprehensive coverage of New Year’s resolution traditions: https://www.history.com/news/the-history-of-new-years-resolutions

Britannica offers detailed information on Janus and Roman New Year traditions: https://www.britannica.com/topic/Janus-Roman-god

The Smithsonian Magazine explores New Year’s countdown traditions and their historical context: https://www.smithsonianmag.com/science-nature/why-do-we-count-down-to-the-new-year-180961433/

Anthony Aveni’s “The Book of the Year: A Brief History of Our Seasonal Holidays” provides scholarly analysis of New Year’s traditions across cultures.

Kaila Curry’s article “The Ancient History of New Year’s Resolutions” traces the practice from Babylonian times through modern era.

Joshua O’Driscoll’s research on “The Peacock Vows” documents medieval chivalric New Year’s traditions, excerpted in various historical compilations.

Study The Past If You Would Define The Future—Confucius

I particularly like this quotation. It is similar to the more modern version: “Those who don’t learn from the past are doomed to repeat it.” However, I much prefer the former because it reads as advice or instruction, while the latter reads more as a dire warning. Though I suspect, given the current state of the world, a dire warning is in order.

But regardless of whether it comes in the form of advice or warning, people today do not seem to heed the importance of studying the past. The knowledge of history in our country is woeful. The lack of emphasis on the teaching of history in general, and of American history in particular, is shameful. While it is tempting to blame this on a lack of interest among the younger generation, I find that people my own age also have little appreciation of the events that shaped our nation, the world, and their lives. Without this understanding, how can we evaluate what is currently happening and know what we must do to come together as a nation and as a world?

I have always found history to be a fascinating subject. Biographies and nonfiction historical books remain among my favorite reading. In college I always added one or two history courses every semester to raise my grade point average. Even then I found it strange that many of my friends hated history courses and took only the minimum. At the time, I didn’t realize just how serious this lack of historical perspective would become.

Several years ago I became aware of just how little historical knowledge most people had. At the time, Jay Leno was still doing his late-night show, which included a segment called Jaywalking. During the segment he would ask people on the street questions that were somewhat esoteric and to which he could expect unusual and generally humorous answers. On one show, on the 4th of July, he asked, “From what country did the United States declare independence on the 4th of July?” and, of course, no one knew the answer.

My first thought was that he must have gone through dozens of people to find the four or five who did not know the answer to his question. The next day at work, the 5th of July, I decided to ask several people, all of them college graduates, the same question. I did not get a single correct answer, although one person at least admitted, “I think I should know this.” When I told my wife, a retired teacher, she wasn’t surprised. For a long time she had been concerned about the lack of emphasis on social studies and the arts in school curriculums. I was becoming seriously concerned about the direction of education in our country.

A lot of people are probably thinking, “So what? Who really cares what a bunch of dead people did 200 years ago?” But if we don’t know what they did and why they did it, how can we understand its relevance today? Without that knowledge, we have no way to judge which actions may support the best interests of society and which might ultimately prove detrimental.

Failure to learn from and understand the past results in a me-centric view of everything. If you do not understand how things came to be, you cannot know the best course going forward. Attempting to judge all people and events of the past through your own personal prejudices leads only to continued and worsening conflict.

If you study the past, you will see that there has never been general agreement on anything. There have been many disagreements, debates, and even a civil war over differences of opinion. History helps us understand that there are no perfect people who always do everything the right way and at the right time. It helps us appreciate the good that people do while understanding the human weaknesses behind the things we consider faults today. In other words, we cannot expect anyone to be 100% perfect. A person may have accomplished many good and meaningful things, and those accomplishments should not be discarded because that person was also a human being with human flaws.

Understanding the past does not mean approving of everything that occurred, but neither does it mean condemning everything that does not fit twenty-first-century mores. Only by recognizing this, and by seeing what led to the disasters of the past, can we hope to avoid repeating the worst aspects of our history. History teaches lessons in compromise, involvement, and understanding. Failing to recognize that leads to strident argument and an unwillingness to cooperate with those who differ in even the slightest way. Rather than creating the hoped-for perfect society, it simply produces a new set of problems and a new group of grievances.

In sum, failure to study history is a failure to prepare for the future. We owe it to ourselves and future generations to understand where we came from and how we can best prepare our country and the world for them. They deserve nothing less than a full understanding of the past and a rational way forward. 

This was my first post after I started my blog in 2021.  I believe it is even more relevant now.
