
Supply-Side Economics and Trickle-Down: What Actually Happened?

The Basic Question

You’ve probably heard politicians arguing about tax cuts—some promising they’ll supercharge the economy, others dismissing them as giveaways to the rich. These debates usually involve two terms that get thrown around like political footballs: “supply-side economics” and “trickle-down economics.” But what do these terms actually mean, and more importantly, do they work? After four decades of real-world experiments, we finally have enough data to answer that question.

Understanding Supply-Side Economics

Supply-side economics is a legitimate economic theory that emerged in the 1970s when the U.S. economy was struggling with both high inflation and high unemployment—a combination that traditional economic theories said shouldn’t happen. The core idea is straightforward: economic growth comes from producing more goods and services (the “supply” side), not just from boosting consumer demand.

The theory rests on three main pillars. First, lower taxes—the thinking is that if people and businesses keep more of their money, they’ll work harder, invest more, and create jobs. According to economist Arthur Laffer’s famous curve, there’s supposedly a sweet spot where lower tax rates can actually generate more government revenue because the economy grows so much. Second, less regulation removes government restrictions so businesses can innovate and operate more efficiently. Third, smart monetary policy keeps inflation in check while maintaining enough money in the economy to fuel growth.

All of this sounds reasonable in theory. After all, who wouldn’t work harder if they kept more of their paycheck?
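To see how that Laffer-curve “sweet spot” works on paper, here is a deliberately stylized example (the functional form and the $100 billion base are illustrative assumptions of mine, not a formula Laffer ever specified). Suppose the taxable base shrinks as the rate rises:

$$R(t) = t \cdot B(t), \qquad B(t) = B_0\,(1 - t)$$

With a hypothetical base B₀ of $100 billion, revenue is zero at a 0% rate, zero again at a 100% rate (no one bothers earning taxable income), and peaks at a 50% rate, where it reaches $25 billion. The shape of the curve isn’t really controversial; the argument is over where real economies actually sit on it. If rates are already to the left of the peak, cutting them simply loses revenue, which is consistent with the finding discussed below that major U.S. tax cuts recovered at best about a third of what they cost.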

The Political Rebranding: Enter “Trickle-Down”

Here’s where economic theory meets political messaging. “Trickle-down economics” isn’t an academic term—it’s essentially a catchphrase, and not a complimentary one. Critics use it to describe supply-side policies when those policies mainly benefit wealthy people and corporations. The idea behind the name: give tax breaks to rich people and big companies, and the benefits will eventually “trickle down” to everyone else through job creation, higher wages, and economic growth.

Here’s the interesting part: no economist actually calls their theory “trickle-down economics.” Even David Stockman, President Reagan’s own budget director, later admitted that “supply-side” was basically a rebranding of “trickle-down” to make tax cuts for the wealthy easier to sell politically. So while they’re not identical concepts, they’re two sides of the same coin.

The Reagan Revolution: Testing the Theory

Ronald Reagan became president in 1981 and implemented the biggest supply-side experiment in U.S. history. He slashed the top tax rate from 70% down to 50%, and eventually to just 28%, arguing this would unleash economic growth that would lift all boats.

The results were genuinely mixed. On one hand, the economy created about 20 million jobs during Reagan’s presidency, unemployment fell from 7.6% to 5.5%, and the economy grew by 26% over eight years. Those aren’t small achievements.

But the picture gets more complicated when you look deeper. The tax cuts didn’t pay for themselves as promised—they reduced government revenue by about 9% initially. Reagan had to backtrack and raise taxes multiple times in 1982, 1983, 1984, and 1987 to address the mounting deficit problem. Income inequality increased significantly during this period, and surprisingly, the poverty rate at the end of Reagan’s term was essentially the same as when he started. Perhaps most telling, government debt more than doubled as a percentage of the economy.

There’s another wrinkle worth mentioning: much of the economic recovery happened because Federal Reserve Chairman Paul Volcker had already broken the back of inflation through tight monetary policy before Reagan’s tax cuts took effect. Disentangling how much credit Reagan’s policies deserve versus Volcker’s groundwork is genuinely difficult.

The Pattern Repeats

The story didn’t end with Reagan. George W. Bush enacted major tax cuts in 2001 and 2003, especially benefiting wealthy Americans. The result? Economic growth remained sluggish, deficits ballooned, and income inequality continued its upward march.

Then there’s Bill Clinton—the plot twist in this story. In 1993, Clinton actually raised taxes on the wealthy, pushing the top rate from 31% back up to 39.6%. Conservative economists predicted economic disaster. Instead, the economy boomed with what was then the longest sustained growth period in U.S. history, creating 22.7 million jobs. Even more remarkably, the government ran a budget surplus for the first time in decades.

Donald Trump’s 2017 tax cuts, focused heavily on corporations, showed minimal wage growth for workers while generating significant stock buybacks that primarily benefited shareholders—and yes, larger deficits. Trump’s subsequent economic policies in his second term have been characterized by such volatility that reasonable long-term assessments remain difficult.

The Kansas Experiment: A Modern Test Case

At the state level, Kansas Governor Sam Brownback implemented one of the boldest modern experiments in supply-side policy between 2012 and 2017, dramatically slashing income taxes especially for businesses. Proponents called it a “real live experiment” that would demonstrate supply-side principles in action.

Instead of unleashing growth, Kansas faced severe budget shortfalls that forced cuts to education and infrastructure. Economic growth actually lagged behind neighboring states that didn’t implement such aggressive cuts, and the state legislature eventually reversed many of the tax reductions. This case has become a frequently cited cautionary tale for critics of supply-side policies.

What Does Half a Century of Data Show?

After 50 years of real-world experiments, researchers finally have enough data to move beyond political rhetoric. A comprehensive study analyzed tax policy changes across 18 developed countries over five decades, looking at what actually happened after major tax cuts for the wealthy.

The findings are remarkably consistent. Tax cuts for the rich reliably increase income inequality—no surprise there. But they show no significant effect on overall economic growth rates and no significant effect on unemployment. Perhaps most damaging to the theory, they don’t “pay for themselves” through increased growth. At best, about one-third of lost revenue gets recovered through expanded economic activity.

In simpler terms: when you cut taxes for wealthy people, wealthy people get wealthier. The promised broader benefits largely fail to materialize. The 2022 World Inequality Report reinforced these conclusions, finding that the world’s richest 10% continue capturing the vast majority of all economic gains, while the bottom half of the population holds just 2% of all wealth.

Why the Theory Doesn’t Match Reality

When you think about it logically, the disconnect makes sense. If you give a tax cut to someone who’s already wealthy, they’ll probably save or invest most of it—they were already buying what they wanted and needed. Their daily spending habits don’t change much. But if you give money to someone who’s struggling to pay bills or afford necessities, they’ll spend it immediately, directly stimulating economic activity.

Economists call this concept “marginal propensity to consume,” and it explains why giving tax breaks to working and middle-class people actually does more to boost the economy than supply-side cuts focused on the wealthy. A dollar in the hands of someone who needs to spend it has more immediate economic impact than a dollar added to an already-substantial investment portfolio.
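To put rough numbers on that intuition (purely a textbook-style sketch, with made-up spending propensities rather than measured ones): suppose a struggling household spends 90 cents of every extra dollar it receives, an MPC of 0.9, while a wealthy household spends 30 cents, an MPC of 0.3. In the simplest Keynesian multiplier formula,

$$\text{multiplier} = \frac{1}{1 - \text{MPC}}$$

a dollar directed to the first household supports up to 1/(1 − 0.9) = $10 of eventual spending as it circulates through the economy, versus 1/(1 − 0.3) ≈ $1.43 for the second. Real-world multipliers are far smaller and heavily debated, but the direction of the gap is exactly the point of the paragraph above.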

The Bottom Line

After 40-plus years of repeated experiments, the pattern is clear. Supply-side policies and trickle-down approaches consistently increase deficits, widen inequality, and fail to significantly boost overall economic growth or create more jobs than alternative policies. Meanwhile, periods with higher taxes on the wealthy, like the Clinton years, saw strong growth, robust job creation, and balanced budgets.

The Nuance Worth Keeping

None of this means all tax cuts are bad or that high taxes are always good—economics is rarely that simple. The critical questions are: who receives the tax cuts, and what outcomes do you realistically expect? Targeted tax cuts for working families, small businesses, or specific industries facing genuine challenges can serve as effective policy tools. Child tax credits, research and development incentives, or relief for struggling sectors might accomplish specific goals.

But the evidence accumulated over four decades is clear: broad tax cuts focused primarily on the wealthy and large corporations don’t deliver the promised economic benefits for everyone else. The benefits don’t trickle down in any meaningful way.

You’ll keep hearing these arguments for years to come. Politicians will continue promising that tax cuts for businesses and the wealthy will boost the entire economy. Now you know what the actual evidence shows, and you can judge those promises accordingly.



America’s Healthcare Paradox: Why We Pay Double and Get Less

The healthcare debate in America often circles back to a fundamental question: should we move toward a single-payer system, or is our current mixed public-private model the better path forward? It’s a conversation that gets heated quickly, but when you strip away the politics and look at how different systems actually function around the world, some interesting patterns emerge.

What We Mean by Single-Payer

A single-payer healthcare system means that one entity—usually the government or a government-related organization—pays for all covered healthcare services. Doctors and hospitals can still be private (and usually are), but instead of dealing with dozens of different insurance companies, they bill one source. It’s a lot like Medicare, which is why proponents often call it “Medicare-for-all”.

The key thing to understand is that single-payer isn’t necessarily the same as socialized medicine. In Canada’s system, for instance, the government pays the bills, but doctors are largely in the private sector and hospitals are controlled by private boards or regional health authorities rather than being part of the national government. Compare that to the UK’s National Health Service, where many hospitals and clinics are government-owned and many doctors are government employees.

America’s Current Patchwork

The United States operates what might charitably be called a “creative” approach to healthcare—a complex mix of employer-sponsored private insurance, government programs like Medicare, Medicaid, and the VA system, individual marketplace plans, and direct out-of-pocket payments. Government already pays roughly half of total US health spending, but benefits, cost-sharing, and networks vary widely between plans, with little overall coordination. In 2023, private health insurance spending accounted for 30 percent of total national health expenditures, Medicare covered 21 percent, and Medicaid covered 18 percent. Most of the remainder was either paid out of pocket by private citizens or written off by providers as uncollectible.

Here’s where it gets expensive. U.S. health care spending grew 7.5 percent in 2023, reaching $4.9 trillion, or $14,570 per person, and accounting for 17.6 percent of the nation’s GDP. National health spending for 2024 is expected to have exceeded $5.3 trillion, roughly 18 percent of GDP, and is projected to reach 20.3 percent of GDP by 2033.
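As a quick back-of-envelope check (the population and GDP figures here are rough approximations, not from the source): dividing $4.9 trillion by about 335 million people gives

$$\frac{\$4.9\ \text{trillion}}{335\ \text{million people}} \approx \$14{,}600\ \text{per person},$$

and dividing it by a 2023 GDP of roughly $27.7 trillion gives about 17.7 percent, both in line with the reported $14,570 per person and 17.6 percent of GDP.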

For a typical American family, the costs are real and rising. In 2024, the estimated cost of healthcare for a family of four in an employer-sponsored health plan was $32,066.

The European Landscape

Europe doesn’t have one healthcare model—it has several, and they’re all quite different from what we have in the States. Virtually every European country provides universal coverage, but the details vary considerably.

Countries like the UK, Sweden, and Norway operate what are essentially single-payer systems: the government pays for and provides healthcare services, directly owns most facilities, and employs most clinical and related staff, funded through taxation. Then you have countries like Germany and Belgium that use “sickness funds”—non-profit funds that don’t market to customers, cherry-pick patients, set their own premiums or provider rates, determine benefits, earn profits, or have investors. They’re quasi-public institutions, not private insurance companies as we know them in America. Some systems, such as the Netherlands or Switzerland, rely on mandatory, individually purchased private insurance with tight regulation and subsidies, achieving universal coverage through a structured, competitive market.

The French System

France is particularly noted for its successful universal, government-run health insurance system, usually described as single-payer with supplements. All legal residents are automatically covered through the national health insurance program, which is funded by payroll taxes and general taxation.

Most physicians and hospitals are private or nonprofit, not government employees or facilities. Patients generally have free choice of doctors and specialists, though coordinating through a primary care physician improves access and reimbursement. The national insurer pays a large portion of medical costs (often 70–80%), while voluntary private supplemental insurance covers most remaining out-of-pocket expenses such as copays and deductibles.

France is known for spending significantly less per capita than the United States. Cost controls come from nationally negotiated fee schedules and drug pricing rather than limits on access.

What’s striking is that in 2019, US healthcare spending reached $11,072 per person—over double the average of $5,505 across wealthy European nations. Yet despite spending roughly twice as much per person, American health outcomes often lag behind.

The Outcomes Question

This is where the comparison gets uncomfortable for American exceptionalism. The U.S. has the lowest life expectancy at birth among comparable wealthy nations, the highest death rates for avoidable or treatable conditions, and the highest maternal and infant mortality.

In 2023, life expectancy in comparable countries was 82.5 years, which is 4.1 years longer than in the U.S. Japan manages this with healthcare spending at just $5,300 per capita, while Americans spend more than double that amount.

Now, it’s important to note that healthcare systems don’t operate in a vacuum. Life expectancy is influenced by many factors beyond medical care—diet, exercise, smoking, gun violence, drug overdoses, and social determinants of health all play roles. But when you’re spending twice as much and getting worse results, it suggests the system itself might be part of the problem.

Advantages of Single-Payer Systems

The case for single-payer rests on several compelling points. First, administrative simplicity translates to real cost savings. A study found that the administrative burden of health care in the United States was 27 percent of all national health expenditures, with the excess administrative cost of the private insurer system estimated at about $471 billion in 2012 compared to a single-payer system like Canada’s. That’s more than $1 out of every $4 of total healthcare spending going to paperwork, billing disputes, and insurance company profit and overhead before any patient receives care.

Universal coverage is another major advantage. In a properly functioning single-payer system, nobody goes bankrupt from medical bills, nobody delays care because they can’t afford it, and nobody loses coverage when they lose their job. The peace of mind that comes with knowing you’re covered regardless of employment status or pre-existing conditions is difficult to quantify but enormously valuable.

Single-payer systems also have significant negotiating power. When one entity is buying drugs and services for an entire nation, pharmaceutical companies and medical device manufacturers have much less leverage to charge whatever they want. This helps explain why prescription drug prices in other countries are often a fraction of prices in the U.S.

Disadvantages and Trade-offs

The critics of single-payer systems aren’t wrong about everything. Wait times are a genuine concern in some systems. When prices and overall budgets are tightly controlled, some countries experience longer waits for selected elective surgeries, imaging, or specialty visits, especially if investment lags demand.

In 2024, Canadian patients experienced a median wait time of 30 weeks between specialty referral and first treatment, up from 27.2 weeks in 2023, with rural areas facing even longer delays. For procedures like elective orthopedic surgery, patients wait an average of 39 weeks in Canada.

However, it’s crucial to understand that wait times are not a result of the single-payer system itself but of system management, as wait times vary significantly across different single-payer and social insurance systems. Many European countries with universal coverage don’t experience the same wait time issues that plague Canada.

The transition costs are also substantial. Moving from our current system to single-payer would disrupt a massive industry. Over fifteen percent of our economy is related to health care, with half spent by the private sector. Around 160 million Americans currently have insurance through their employers, and transitioning all of them to a government-run plan would be an enormous administrative and political challenge.

A large national payer can be slower to change benefit designs or adopt new payment models; shifting political majorities can affect funding levels and benefit generosity.

Taxes would need to increase significantly to fund such a system, though proponents argue this would be offset by the elimination of insurance premiums, deductibles, and co-pays. It’s essentially a question of whether you’d rather pay through taxes or through premiums—the money has to come from somewhere.

Advantages of America’s Mixed System

Our current system does have some genuine strengths. Innovation thrives in the American healthcare market. The profit motive, for all its flaws, does drive pharmaceutical research and medical device development. American medical schools and research institutions lead the world in many areas of medicine. Academic medical centers and specialty hospitals deliver advanced procedures and complex care that attract patients internationally.

The system also offers more choice for those who can afford it. If you have good insurance, you typically face shorter wait times for elective procedures and can often see specialists without lengthy delays. Americans with high-quality employer-sponsored coverage give their plans relatively high ratings.

Competition between providers can theoretically drive quality improvements, though this effect is often undermined by the complexity of the market and the difficulty consumers face in shopping for healthcare.

Disadvantages of the Current U.S. System

The most glaring problem is simple: the United States remains the only developed country without universal healthcare. Roughly 30 million Americans remain uninsured despite gains under the Affordable Care Act, and many of those gains will soon be lost. Being uninsured in America isn’t just an inconvenience—it can be deadly. People delay care, skip medications, and avoid preventive screenings because of cost concerns.

The administrative complexity is staggering. Doctors spend enormous amounts of time dealing with insurance companies, prior authorizations, and billing disputes. Hospitals employ armies of billing specialists just to navigate the maze of different insurance plans, each with its own rules, formularies, and coverage determinations.  U.S. administrative costs account for ~25% of all healthcare spending, among the highest in the world.

Medical bankruptcy is uniquely American. Even people with insurance can find themselves financially devastated by serious illness. High deductibles, surprise bills, and out-of-network charges create a minefield of potential financial catastrophe.  Studies of U.S. bankruptcy filings over the past two decades have consistently found that medical bills and medical problems are a major factor in a large share of consumer bankruptcies. Recent summaries suggest that roughly two‑thirds of US personal bankruptcies involve medical expenses or illness-related income loss, and around 17% of adults with health care debt report declaring bankruptcy or losing a home because of that debt.

The system is also profoundly inequitable. Quality of care often depends more on your job, your income, and your zip code than on your medical needs. Out-of-pocket costs per capita have risen compared with previous decades, and the burden falls disproportionately on those least able to afford it.

What Europe Shows Us

The European experience demonstrates that there isn’t one “right” way to achieve universal coverage. The UK’s NHS, Germany’s sickness funds, and France’s hybrid system all manage to cover everyone at roughly half the per-capita cost of American healthcare. Universal Health Coverage exists in all European countries, with healthcare financing almost universally government managed, either directly through taxation or semi-directly through mandated and government-subsidized social health insurance.

They’ve accomplished this through various combinations of centralized negotiation of drug prices, global budgets for hospitals, strong primary care systems that serve as gatekeepers to more expensive specialist care, emphasis on preventive services, and regulation that prevents insurance companies from cherry-picking healthy patients.

Are these systems perfect? No. One of the major disadvantages of centralized healthcare systems is long wait lists for non-urgent care, though Americans often wait as long as, or longer than, patients in most universal-coverage countries for routine primary care appointments. Many European countries are wrestling with funding challenges as populations age and expensive new treatments become available. But they’ve solved the fundamental problem that America hasn’t: they ensure everyone has access to healthcare without the risk of financial ruin.

The Path Forward?

The debate over healthcare in America often presents false choices. We don’t have to choose between Canadian-style single-payer and our current system—there are multiple models we could adapt. We could move toward a German-style system with heavily regulated non-profit insurers. We could create a robust public option that competes with private insurance. We could expand Medicare gradually by lowering the eligibility age over time.

What’s clear from international comparisons is that the status quo is unusually expensive and produces mediocre results. We’re paying premium prices for economy outcomes. Whether single-payer is the answer depends partly on your priorities. Do you value universal coverage and cost control more than unlimited choice? Are you willing to accept potentially longer wait times for non-urgent care in exchange for lower costs and universal access? How much do you trust government to manage a program this large?

These aren’t easy questions, and reasonable people disagree. But the evidence from Europe suggests that universal coverage at reasonable cost is achievable—it just requires us to make some choices about what we value most in a healthcare system.



Happy New Year

From Reagan Conservative to Social Democrat: A Political Evolution

Political beliefs rarely change overnight. Mine certainly didn’t. My journey from Reagan-era conservatism to social democracy unfolded slowly, shaped less by ideology than by lived experience and an accumulating body of evidence about what actually works.

Morning in America

Like many Americans of my generation, my political awakening came during the Reagan years. The message was optimistic and reassuring: limited government, free markets, individual responsibility, and a strong national defense would restore American greatness. Reagan’s charisma made complex economic ideas feel like common sense. Lower taxes would spur growth. Deregulation would unleash innovation. Markets would reward effort and discipline.

That worldview was personally affirming. Success was earned. Failure reflected poor choices. Government’s role should be narrow—defense, public order, and little else. Social programs, we were told, fostered dependency rather than opportunity. It was a coherent framework, and for a time, it seemed to fit the facts.

Cracks in the Foundation

By the 1990s, inconsistencies began to surface. Economic growth continued, but inequality widened. Entire industrial communities collapsed despite residents working hard and playing by the rules. The benefits of “trickle-down” economics were not trickling very far.

Personal experiences made the abstractions impossible to ignore. Families lost health insurance because of pre-existing conditions. Medical bills pushed insured households into bankruptcy. These outcomes weren’t failures of character; they were failures of systems.

The 2008 financial crisis shattered whatever illusions remained. Financial institutions that preached personal responsibility engaged in reckless speculation, then received massive government bailouts, while homeowners were left to face foreclosure. Like millions of others, I lost nearly half of my retirement savings. The contradiction was glaring: socialism for the wealthy, harsh market discipline for everyone else. Individual responsibility meant little when systemic risk brought down the entire economy.

A Turning Point

Job loss during the Great Recession completed the lesson. Despite qualifications and work history, employment opportunities vanished. Unemployment benefits—once easy to dismiss in theory as handouts—became essential in practice. The bootstrap mythology doesn’t hold up when the floor is pulled away.

This period also exposed the fragility of employer-based healthcare and retirement systems. COBRA coverage was unaffordable. 401(k)s evaporated. The safety net that once seemed excessive suddenly looked inadequate. Meanwhile, countries with stronger social protections weathered the recession better than the United States.

Seeing Other Models

Travel and research broadened my perspective further. Nations like Germany, Denmark, France, and Sweden paired market economies with robust social programs—and consistently outperformed the U.S. on measures of health, social mobility, and life satisfaction.

These were not stagnant, overregulated societies. They were thriving capitalist democracies that simply made different choices about public investment and risk-sharing.

Writers like Joseph Stiglitz and Thomas Piketty documented how concentrated wealth undermines both democracy and long-term growth. Historical evidence showed that America’s most prosperous era—the post-World War II boom—coincided with high marginal tax rates, strong unions, and major public investment.

Healthcare Changed Everything

Healthcare ultimately crystallized my shift. The U.S. spends far more per capita than any other nation yet produces worse outcomes on many basic measures.

As a physician, I watched patients struggle with insurance denials, opaque pricing, and medical debt. Healthcare markets don’t function like normal markets. You can’t comparison shop during a heart attack. When insurers profit by denying care, the system aligns against patients. Medical bankruptcy is virtually unknown in countries with universal coverage—for a reason. We have a system where the major goal of health insurance companies is making a profit for their investors—not providing affordable healthcare to their subscribers. 

Climate and Collective Action

Climate change further exposed the limits of market fundamentalism. Individualism and laissez-faire policies have failed to account for shared environmental costs and long-term consequences. Markets alone cannot price long-term environmental harm or coordinate collective action at the necessary scale. Addressing climate risk requires regulation, public investment, and democratic planning.

What Social Democracy Is—and Isn’t

Social democracy is not the rejection of capitalism. It is regulated capitalism with guardrails—markets where they work well, public systems where markets fail. Healthcare, education, infrastructure, and basic income security perform better with strong public involvement.

This differs from democratic socialism, a distinction I’ve explored elsewhere. Social democracy embraces entrepreneurship and competition while preventing monopoly power, protecting workers, and taxing fairly to fund shared prosperity.

As sociologist Lane Kenworthy notes, the U.S. already has elements of social democracy—Social Security, Medicare, public education—we simply underfund them compared to European nations.

A Pragmatic Conclusion

My evolution wasn’t ideological betrayal; it was pragmatic learning. I adjusted my beliefs based on outcomes, not slogans. Countries with strong social democracies routinely outperform the U.S. on health, mobility, education, and even business competitiveness.

True prosperity requires both entrepreneurial freedom and collective investment. The choice isn’t markets or government—it’s how to balance them intelligently. This lesson took me decades to learn, but the evidence now feels hard to ignore.

References

  1. Federal Reserve History – The Great Recession
    Overview of causes, systemic failures, and economic consequences of the 2007–2009 financial crisis.
    https://www.federalreservehistory.org/essays/great-recession
  2. OECD – Social Protection and Economic Resilience
    Comparative data on how countries with stronger social safety nets performed during economic downturns.
    https://www.oecd.org/economy
  3. World Happiness Report (United Nations / Oxford)
    Cross-national comparisons of well-being, social trust, and economic security.
    https://worldhappiness.report
  4. Joseph Stiglitz – Inequality and Economic Growth (IMF Finance & Development)
    Analysis of how income concentration undermines long-term economic performance and democracy.
    https://www.imf.org/en/Publications/fandd/issues/2019/09/inequality-and-economic-growth-stiglitz
  5. Thomas Piketty – Capital in the Twenty-First Century (Data Companion & Summaries)
    Historical evidence on wealth concentration and taxation in advanced economies.
    https://wid.world
  6. Tax Policy Center – Historical Top Marginal Income Tax Rates
    U.S. tax rate history showing high marginal rates during the post-war economic boom.
    https://www.taxpolicycenter.org/statistics/historical-highest-marginal-income-tax-rates
  7. The Commonwealth Fund – U.S. Health Care from a Global Perspective
    Comparative analysis of health spending, outcomes, and access across developed nations.
    https://www.commonwealthfund.org/publications/issue-briefs/2023/jan/us-health-care-global-perspective-2022
  8. OECD Health Statistics
    International comparisons of healthcare costs, outcomes, and system performance.
    https://www.oecd.org/health/health-data.htm
  9. IPCC Sixth Assessment Report – Synthesis Report
    Scientific consensus on climate change risks and the need for coordinated public action.
    https://www.ipcc.ch/report/ar6/syr
  10. Lane Kenworthy – Social Democratic Capitalism
    Comparative research on social democracy, public investment, and economic performance.
    https://lanekenworthy.net

Why We Make Promises to Ourselves Every January: The History of New Year’s Resolutions

New Year’s resolutions—a practice where individuals set goals or make promises to improve their lives in the upcoming year—have a rich and varied history spanning thousands of years. While the concept of self-improvement at the start of a new year feels distinctly modern, its origins are deeply rooted in ancient civilizations and religious traditions that understood the psychological power of fresh starts.

Origins of New Year’s Resolutions

The tradition of making promises at the start of a new year can be traced back over 4,000 years to ancient Babylon. During their 12-day festival called Akitu, held in mid-March to coincide with the spring harvest and planting season, Babylonians made solemn vows to their gods. These promises typically involved practical matters like repaying debts and returning borrowed items, reflecting the agricultural society’s emphasis on community obligations and divine favor. The Babylonians believed that success in fulfilling these promises would curry favor with their deities, ensuring good harvests and prosperity in the year ahead.

The practice evolved significantly when Julius Caesar reformed the Roman calendar in 46 BCE and established January 1 as the official start of the new year. This wasn’t an arbitrary choice—January was named after Janus, the two-faced Roman god of beginnings, endings, doorways, and transitions. The symbolism was perfect: one face looking back at the year past, the other gazing forward to the future. Romans offered sacrifices to Janus and made promises of good conduct for the coming year, combining reflection on past mistakes with optimism about future improvements.

By the Middle Ages, the focus shifted dramatically toward religious observance. In early Christianity, the first day of the year became a time of prayer, spiritual reflection, and making pious resolutions aimed at becoming better Christians. One of the most colorful New Year’s traditions from this era was the “Peacock Vow,” practiced by Christian knights. At the end of the Christmas season, these knights would reaffirm their commitment to knightly virtue while feasting on roast peacock at elaborate New Year’s celebrations. The peacock, a symbol of pride and nobility, served as the centerpiece for vows promising good behavior and chivalric deeds during the coming year.

In the 17th century, Puritans brought particular intensity to the practice of New Year’s resolutions, focusing them squarely on spiritual and moral improvement. Rather than the broad promises of earlier eras, Puritan resolutions were detailed and specific. They committed to avoiding pride and vanity, practicing charity and liberality toward others, refraining from revenge even when wronged, controlling anger in daily interactions, speaking no evil of their neighbors, and living every aspect of their lives aligned with strict religious principles. Beyond these behavioral commitments, they also resolved to study scriptures diligently throughout the year, improve their religious devotion on a weekly basis, and continually renew their dedication to God. These resolutions were taken with utmost seriousness, often recorded in personal journals and reviewed regularly.

In 1740, John Wesley, the founder of Methodism, formalized this spiritual approach by creating the Covenant Renewal Service, traditionally held on New Year’s Eve or New Year’s Day. These powerful gatherings encouraged participants to reflect deeply on the past year’s failings and successes while making resolutions for spiritual growth in the year ahead. This tradition continues in many Methodist churches today.

Interestingly, the first known use of the specific phrase “New Year’s Resolution” appeared in a Boston newspaper in 1813. The article took a humorous tone, discussing how people broke their New Year’s vows almost as soon as they made them—a wry observation that suggests nothing much has changed over the last 212 years.

The Modern Evolution of New Year’s Resolutions

The secularization of New Year’s resolutions accelerated during the 19th and 20th centuries as Western societies became increasingly diverse and less uniformly religious. Self-improvement and personal growth gradually took precedence over religious vows, though the underlying psychology remained similar. The rise of print media played a crucial role in popularizing the practice beyond religious communities. Newspapers and magazines began publishing advice columns on how to set and achieve goals, turning what had been a primarily spiritual practice into a secular ritual of self-betterment.

The industrial revolution and urbanization also influenced the nature of resolutions. As more people moved to cities and took on wage labor, resolutions began to reflect modern concerns like career advancement, financial stability, and managing the stress of urban life. The self-help movement of the 20th century, spurred by books like Dale Carnegie’s “How to Win Friends and Influence People” and Norman Vincent Peale’s “The Power of Positive Thinking,” further embedded the idea that individuals could transform themselves through conscious effort and goal-setting.

By the 21st century, resolutions were firmly established in Western culture as a beloved tradition of hope and renewal, no longer tied to any particular religious framework. The internet age brought new dimensions to the practice, with social media allowing people to publicly declare their resolutions, fitness tracking apps enabling data-driven self-improvement, and online communities providing support and accountability.

Common New Year’s Resolutions

Resolutions tend to reflect both cultural priorities and universal human aspirations. When researchers survey what people resolve to change, recurring themes emerge that tell us something about areas of discontent in contemporary life. Health and fitness consistently dominate the list, with millions of people vowing to lose weight, exercise more regularly, and eat healthier foods. The popularity of these goals reflects our sedentary modern lifestyles, abundant processed foods, and the cultural premium placed on physical appearance and wellness.

Personal development goals are another major category. People promise themselves they will finally learn that new skill they’ve been putting off, read more books instead of scrolling through social media, and manage their time better to reduce stress and increase productivity. These resolutions speak to a desire for intellectual growth and a nagging sense that we’re not living up to our full potential.

Financial goals also rank high on most people’s resolution lists. Many resolve to save more money for the future, pay off debts that have been accumulating, or stick to a budget instead of impulse spending. These financial resolutions often stem from anxiety about economic security and a recognition that small daily choices compound into major financial consequences over time.

Relationship and community-focused resolutions reflect our social nature and the loneliness epidemic affecting many developed nations. People vow to spend more quality time with family and friends rather than staying busy with work and distractions. They plan to volunteer and to give back to their communities in meaningful ways. They hope to strengthen the social bonds that are crucial to happiness and longevity.

Finally, breaking bad habits remains a perennial favorite. Traditional vices like smoking and excessive alcohol consumption still top many lists, but modern resolutions also target newer concerns like limiting screen time and reducing smartphone addiction. These goals acknowledge how difficult it is to maintain healthy habits in an environment designed to encourage overconsumption and instant gratification.

The Success Rate of Resolutions

Despite their enduring popularity, New Year’s resolutions are notoriously difficult to keep. Multiple studies estimate that approximately 80% of resolutions fail by February, often crashing and burning within just a few days of January 1st. The reasons for this high failure rate are both psychological and practical. Many people set overly ambitious goals without considering the realistic constraints of their lives or the sustained effort needed for meaningful change. Others make vague resolutions like “get healthier” without specific action steps or measurable milestones.

Research in behavioral psychology suggests that setting realistic, measurable, and time-bound goals—often called SMART goals (Specific, Measurable, Achievable, Relevant, and Time-bound)—can significantly improve success rates. Rather than resolving to “exercise more,” for example, a SMART goal would be “go to the gym for 30 minutes every Monday, Wednesday, and Friday morning.” The specificity provides clear direction, and the measurability allows for tracking progress and celebrating small victories along the way.

However, it’s worth noting that most people approach their New Year’s resolutions more as a fun tradition than with serious anticipation that they will actually keep them. There’s a ritualistic, almost playful quality to the practice—we know the odds are against us, but we participate anyway, embracing the hopeful symbolism of a fresh start even if we suspect we’ll be back to our old habits before Valentine’s Day.

The Significance of Resolutions Today

New Year’s resolutions persist across centuries and cultures because they align with a fundamental human desire for self-improvement and the psychological comfort of fresh starts. The appeal of marking time with calendars and treating January 1st as somehow special—despite being astronomically arbitrary—speaks to our need for narrative structure in our lives. Whether rooted in ancient Babylonian pledges to repay debts, Roman sacrifices to Janus, Christian vows of spiritual renewal, or modern goals to lose ten pounds, resolutions represent an enduring belief in the potential for change.

The tradition reminds us that humans have always struggled with the gap between who we are and who we aspire to be, and that we’ve always believed, however naively, that marking a new beginning on the calendar might help us bridge that gap. Even if our resolutions fail more often than they succeed, the very act of making them reaffirms our agency and our hope that we can become better versions of ourselves with just a bit of conscious effort.

Sources:

History.com provides comprehensive coverage of New Year’s resolution traditions: https://www.history.com/news/the-history-of-new-years-resolutions

Britannica offers detailed information on Janus and Roman New Year traditions: https://www.britannica.com/topic/Janus-Roman-god

The Smithsonian Magazine explores New Year’s countdown traditions and their historical context: https://www.smithsonianmag.com/science-nature/why-do-we-count-down-to-the-new-year-180961433/

Anthony Aveni’s “The Book of the Year: A Brief History of Our Seasonal Holidays” provides scholarly analysis of New Year’s traditions across cultures.

Kaila Curry’s article “The Ancient History of New Year’s Resolutions” traces the practice from Babylonian times through modern era.

Joshua O’Driscoll’s research on “The Peacock Vows” documents medieval chivalric New Year’s traditions, excerpted in various historical compilations.

Study The Past If You Would Define The Future—Confucius

I particularly like this quotation. It is similar to the more modern version: Those who don’t learn from the past are doomed to repeat it. However, I much prefer the former because it seems to be more in the form of advice or instruction. The latter seems to be more of a dire warning. Though I suspect, given the current state of the world, a dire warning is in order.

But regardless of whether it comes in the form of advice or warning, people today do not seem to heed the importance of studying the past. The knowledge of history in our country is woeful. The lack of emphasis on the teaching of history in general, and of American history in particular, is shameful. While it is tempting to blame this on a lack of interest on the part of the younger generation, I find people my own age also have little appreciation of the events that shaped our nation, the world, and their lives. Without this understanding, how can we evaluate what is currently happening and understand what we must do to come together as a nation and as a world?

I have always found history to be a fascinating subject. Biographies and nonfiction historical books remain among my favorite reading. In college I always added one or two history courses every semester to raise my grade point average. Even in college I found it strange that many of my friends hated history courses and took only the minimum. At the time, I didn’t realize just how serious this lack of historical perspective was to become.

Several years ago I became aware of just how little historical knowledge most people had. At the time Jay Leno was still doing his late-night show and he had a segment called Jaywalking. During the segment he would ask people in the street questions that were somewhat esoteric and to which he could expect to get unusual and generally humorous answers. On one show, on the 4th of July, he asked people “From what country did the United States declare independence on the 4th of July?” and of course no one knew the answer.

My first thought was that he must have gone through dozens of people to find the four or five who did not know the answer to his question. The next day at work, the 5th of July, I decided to ask several people, all of whom were college graduates, the same question. I did not get a single correct answer, although one person at least admitted, “I think I should know this.” When I told my wife, a retired teacher, she wasn’t surprised. For a long time, she had been concerned about the lack of emphasis on social studies and the arts in school curriculums. I was becoming seriously concerned about the direction of education in our country.

A lot of people are probably thinking, “So what, who really cares what a bunch of dead people did 200 years ago?” But if we don’t know what they did and why they did it, how can we understand its relevance today? We have no way to judge what actions may support the best interests of society and what might ultimately be detrimental.

Failure to learn from and understand the past results in a me-centric view of everything. If you fail to understand how things have developed, then you certainly cannot understand what the best course is to go forward. Attempting to judge all people and events of the past through your own personal prejudices leads only to continued and worsening conflict.

If you study the past, you will see that there has never been general agreement on anything. There were many disagreements, debates, and even a civil war over differences of opinion. It helps us to understand that there are no perfect people who always do everything the right way and at the right time. It helps us to appreciate the good that people do while understanding the human weaknesses that led to the things we consider faults today. In other words, we cannot expect anyone to be a 100% perfect person. They may have accomplished many good and meaningful things, and those good and meaningful things should not be discarded because the person was also a human being with human flaws.

Understanding the past does not mean approving of everything that occurred, but it does mean not condemning everything that fails to fit twenty-first-century mores. Only by recognizing this, and by seeing what led to the disasters of the past, can we hope to avoid repeating the worst aspects of our history. History teaches lessons in compromise, involvement, and understanding. Failure to recognize that leads to strident argument and an unwillingness to cooperate with those who may differ in even the slightest way. Rather than creating the hoped-for perfect society, it simply leads to a new set of problems and a new group of grievances.

In sum, failure to study history is a failure to prepare for the future. We owe it to ourselves and future generations to understand where we came from and how we can best prepare our country and the world for them. They deserve nothing less than a full understanding of the past and a rational way forward. 

This was my first post after I started my blog in 2021.  I believe it is even more relevant now.

Merry Christmas from The Grumpy Doc

The Multitasking Myth

What’s Really Happening in Your Brain

You’re probably multitasking right now. Maybe you’re reading this with a podcast playing in the background, or you’ve got three browser tabs open and you’re checking your phone every few minutes. We all do it. We even brag about it on resumes: “Excellent multitasker!” But here’s the uncomfortable truth that neuroscience has been trying to tell us for years—what we call multitasking is mostly an illusion.

What People Actually Mean

When most people say they’re multitasking, they’re describing one of two scenarios. The first is doing multiple automatic activities simultaneously—like walking while talking, or listening to music while folding laundry. The second, and the one that gets more interesting, is rapidly switching attention between different demanding tasks—like answering emails while on a conference call, or texting while watching TV.

The distinction matters because our brains handle these situations very differently. Activities that have become automatic through practice don’t require much conscious attention. You can absolutely walk and chew gum at the same time because neither activity demands your prefrontal cortex’s full attention. But when both tasks require active thinking and decision-making? That’s where things get complicated.

The Brain’s Bottleneck

Here’s what neuroscience tells us: true multitasking—simultaneously processing multiple streams of complex information—is essentially impossible for the human brain. What feels like multitasking is actually rapid task-switching, and your brain pays a price every time it makes that switch.

The limitation comes from something researchers call the “response selection bottleneck.” When you’re performing tasks that require conscious thought, they all funnel through the same neural pathways in your prefrontal cortex. This region can only process one demanding task at a time, so when you think you’re doing two things at once, you’re really just toggling between them very quickly.

Studies using functional MRI brain imaging have shown what happens during this switching process. When people attempt to multitask, researchers observe reduced activity in the regions responsible for each individual task compared to when those tasks are done separately. Your brain literally can’t devote full processing power to both activities simultaneously.

The Switching Cost

Every time you switch from one task to another, there’s a cognitive cost. Your brain needs to disengage from the first task, shift attention, and then reorient to the new task. This happens so quickly—sometimes in tenths of a second—that we don’t consciously notice it. But those fractions of a second add up.

Sorry, but get ready for some doctor talk.  When people switch tasks, imaging studies show increased activation in frontoparietal control and dorsal attention networks, especially in prefrontal regions (like the inferior frontal junction) and parietal cortex (such as intraparietal sulcus). This boosted activity reflects the brain dropping one task set, loading another into working memory, and re‑orienting attention—processes that consume time and neural resources.

Over time, practice can make specific tasks more automatic, reducing average activity in these control networks and allowing smoother coordination of tasks. However, even in trained multitaskers, studies still find evidence for serial queuing of operations in the multiple‑demand frontoparietal network, reinforcing the idea that consciously doing multiple demanding things “at once” is extremely limited.

Research from Stanford University found that people who regularly engage in heavy media multitasking actually perform worse at filtering out irrelevant information and switching between tasks than people who focus on one thing at a time. Essentially, chronic multitaskers become worse at the very thing they practice most.

Even when people train extensively, studies indicate they mainly become faster at switching and coordinating, not truly doing two demanding tasks at once. Experimental work using reaction‑time paradigms shows a reliable “switch cost”: when people change tasks, responses get slower and more error‑prone compared to staying with one task.  This cost is one of the strongest signs that most human “multitasking” is serial switching under time pressure rather than genuine simultaneous processing.

The American Psychological Association reports that these mental blocks created by switching between tasks can cost up to 40% of productive time. Think about that for a minute—as much as two-fifths of your work time potentially lost to the mechanics of jumping between activities.

The Attention Residue Problem

There’s another wrinkle that makes multitasking even less efficient. When you switch away from a task before completing it, part of your attention remains stuck on the unfinished work. Researchers call this “attention residue,” and it reduces your cognitive performance on the next task.

Sophie Leroy, a business professor at the University of Washington, demonstrated this effect in a series of studies. People who switched tasks performed significantly worse on the second task than people who finished the first task before moving on. The unfinished task keeps running in your mental background, using up cognitive resources you need for the new activity.

When “Multitasking” Actually Works

There are legitimate exceptions to the no-multitasking rule, but they’re more limited than most people think. You can successfully combine activities when at least one of them is so well-practiced that it’s become automatic—essentially requiring no conscious thought. You can listen to an audiobook while jogging because your body handles the running on autopilot.

Some research also suggests that certain types of background music or ambient noise can enhance performance on creative tasks, though this seems to work best when the music is familiar and lacks lyrics that compete with language-processing tasks.

Why We Keep Trying

If multitasking is so inefficient, why do we persist? Part of the answer lies in how it feels. Task-switching triggers the release of dopamine, the brain’s reward chemical. Every time you check your phone or switch to a new browser tab, you get a little neurochemical hit. It feels productive, even when it isn’t.

There’s also a cultural element. We live in an attention economy where being constantly connected and responsive feels mandatory. Focusing on one thing can feel like you’re missing out or falling behind, even though the research consistently shows that single-tasking produces better results faster.

It’s worth noting that research consistently shows this gap between perception and performance.  People who think they are excellent multitaskers tend to be the worst at it.

The Bottom Line

The evidence is pretty clear: what we call multitasking is really task-switching, and it makes us slower and more error-prone at both activities. Your brain has a fundamental processing limitation that hasn’t changed despite our increasingly multi-screen world. The prefrontal cortex can only fully engage with one complex task at a time, and switching between tasks creates cognitive costs that add up to significant lost productivity and increased mistakes.

This doesn’t mean you should never listen to music while working or that walking while talking will melt your brain. But when you’re doing something that really matters—writing an important email, having a meaningful conversation, learning something new—giving it your full attention will always produce better results than splitting your focus.

The Freemasons and the Founding Fathers: Secret Society or Just a Really Good Book Club?

You’ve probably heard the whispers—the Freemasons secretly controlled the American Revolution, George Washington wore a special apron, and there’s a hidden pyramid on the dollar bill. It’s the kind of thing that sounds like it came straight from a Nicolas Cage movie. But like most historical legends, the real story is more interesting (and less conspiratorial) than the mythology.

So, what’s the actual deal with Freemasons and America’s founding? Let’s dig in.

What Even Is Freemasonry?

First things first: Freemasonry started out as actual stonemasons’ guilds back in medieval Europe—think guys who built cathedrals sharing trade secrets. But by the early 1700s, it had transformed into something completely different: a philosophical club where educated men gathered to discuss big ideas about morality, reason, and how to be better humans.

The secrecy? That was part of the appeal. Lodges had rituals and passwords, sure, but the core values weren’t exactly hidden. Freemasons were all about Enlightenment thinking—liberty, equality, the pursuit of knowledge. Basically, the kind of stuff that gets you excited if you’re the type who actually enjoys reading philosophy books.

In colonial America, joining a Masonic lodge was a bit like joining an elite networking group today, except instead of swapping business cards, you discussed natural rights and wore fancy aprons. Lawyers, merchants, printers—the educated professional class—flocked to lodges for both the intellectual stimulation and the social connections.

The Founding Fathers: Who Was Actually In?

Let’s separate fact from fiction when it comes to which founders were card-carrying Masons.

Definitely Masons:

George Washington became a Master Mason at 21 in 1753. He wasn’t the most active member—he didn’t attend meetings constantly—but he took it seriously enough to wear his Masonic apron when he laid the cornerstone of the U.S. Capitol in 1793. That’s a pretty public endorsement.

Benjamin Franklin was perhaps the most dedicated Mason among the founders. Initiated in 1731, he eventually became Grand Master of Pennsylvania’s Grand Lodge and helped establish lodges in France during his diplomatic stint. Franklin was basically the poster child for Enlightenment Masonry.

Paul Revere—yes, that Paul Revere—was Grand Master of Massachusetts. His midnight ride gets all the attention, but his Masonic connections were just as important to his Revolutionary activities.

John Hancock was also a Mason, a member of Boston’s St. Andrew’s Lodge. His oversized signature on the Declaration was matched by his outsized commitment to Masonic ideals.

John Marshall, the Chief Justice who shaped American constitutional law, was a dedicated Mason. So was James Monroe, the fifth president.

Here’s a fun stat: of the 56 signers of the Declaration of Independence, at least nine (about 16%) were Masons. Among the 39 who signed the Constitution, roughly thirteen (33%) belonged to the fraternity.

The Maybes:

Thomas Jefferson? Probably not a Mason, despite endless conspiracy theories. There’s no solid evidence of membership, though his Enlightenment philosophy certainly sounded Masonic. His buddy the Marquis de Lafayette was definitely in, which hasn’t helped dispel the rumors.

Alexander Hamilton? The evidence is murky. Some historians think his writings hint at Masonic sympathies, but there’s no membership record.

Definitely Not:

John Adams wasn’t a Mason and was actually skeptical of secret societies. He still believed in many of the same principles, though—virtue, republican government, that sort of thing.

Did the Masons Really Influence the Revolution?

Here’s where it gets interesting. No, the Freemasons didn’t sit around a lodge plotting revolution like some shadowy cabal. But did their ideas and networks matter? Absolutely.

Think about what Masonic lodges provided: a space where educated colonists could meet, discuss radical ideas about natural rights and self-governance, and build trust across colonial boundaries—all without British officials breathing down their necks. These lodges brought together men from different colonies, different religious backgrounds (Anglicans, Quakers, Deists), and different social classes.

The radical part? Inside a lodge, everyone met “on the level.” It didn’t matter if you were born rich or poor—merit and virtue determined your standing. That’s pretty revolutionary thinking in the 1700s when most of the world still believed some people were just born better than others. Sound familiar? “All men are created equal” has a similar ring to it.

Freemasonry also championed religious tolerance. You had to believe in some kind of Supreme Being, but that was it—no specific creed required. This ecumenical approach directly influenced the founders’ commitment to religious freedom and separation of church and state.

The Masonic motto about moving “from darkness to light” through knowledge wasn’t just ritualistic mumbo-jumbo. It reflected genuine Enlightenment belief in reason and progress—the same intellectual current that powered revolutionary thinking.

What About All That Symbolism?

Okay, let’s address the pyramid and the all-seeing eye on the dollar bill. Are they Masonic? Maybe, maybe not. The Great Seal of the United States definitely uses imagery that Masons also used—but so did lots of 18th-century groups drawing on Enlightenment and classical symbolism. The connection is debated among historians.

What’s undeniable is that Masonic culture emphasized architecture and building as metaphors for constructing a just society. When Washington laid that Capitol cornerstone in his Masonic apron, he was making a statement about building something enduring and meaningful.

The “Conspiracy” Question

Let’s be clear: there was no Masonic conspiracy to create America. The fraternity wasn’t even unified—lodges operated independently, and members included both patriots and loyalists. Officially, Masonic organizations tried to stay neutral during the Revolution, though obviously that didn’t work out perfectly when the war split families and communities.

What is true is that many of the Revolution’s most articulate, influential leaders happened to be Masons. And the fraternity’s values—liberty, equality, reason, fraternity—aligned perfectly with revolutionary ideology. Correlation, not conspiracy.

After the Revolution, Freemasonry exploded in popularity. It became associated with the Enlightenment values that had supposedly won the day. Future presidents including Andrew Jackson, James Polk, and Theodore Roosevelt were all Masons. At its 19th-century peak, an estimated one in five American men belonged to a lodge.

What’s the Bottom Line?

The Freemason influence on America’s founding is real, but it’s cultural rather than conspiratorial. The lodges provided a space where Enlightenment ideas could circulate, where colonial leaders could build networks of trust, and where egalitarian principles could be practiced in miniature.

Washington, Franklin, Hancock, and the others weren’t sitting in smoke-filled rooms with secret handshakes planning to overthrow the British crown. They were part of a broader philosophical movement that valued personal improvement, moral virtue, and human rights. The Masonic lodge was one venue—among many—where those ideas took root.

Freemasonry was one tributary feeding into the river of revolutionary thought, along with classical republicanism, British common law, various religious traditions, and plain old grievances about taxes and representation.

The real story is somehow simpler and more fascinating than the conspiracy theories: a bunch of educated colonists joined a fraternity that encouraged them to think big thoughts about human nature and just governance. Those thoughts, debated in lodges and taverns and town halls, eventually sparked a revolution.

Not because of secret symbols or mysterious rituals, but because ideas about liberty and equality—once you start taking them seriously—are genuinely revolutionary.

True confession—The Grumpy Doc is not now, nor has he ever been, a Mason.

The Enigma of Magical Thinking: From Everyday Enchantment to Political Discourse

Have you ever been talking to someone and caught yourself thinking, “How in the world can they believe that?” They may have been engaging in magical thinking. But don’t feel too superior; odds are you’ve been guilty of the same thing.

Magical thinking is one of those fascinating quirks of human psychology that shows up everywhere—from your friend who won’t talk about their job interview until it’s over, to major political movements that shape our world. At its core, it’s the belief that our thoughts, words, or actions can influence events in ways that completely ignore standard cause-and-effect logic.

What We’re Really Talking About

Magical thinking isn’t new. Our ancestors practiced animism, believing spirits lived in everything around them. They created rituals to appease these spirits or tap into their power. Fast forward to today, and despite all our scientific advances, these patterns of thinking haven’t gone anywhere—they’ve just evolved.

It’s essentially a cognitive bias where we connect events that aren’t truly linked. This typically happens when we’re facing uncertainty, stress, or situations where we feel powerless. The thinking pattern gives us a psychological safety net—a feeling that we’re in control.

How It Shows Up in Daily Life

Superstitions and Rituals

Knocking on wood to prevent bad luck is a classic example. There’s zero logical connection between rapping your knuckles on a wooden surface and your future, but people do it anyway because it feels like taking action against uncertainty.

Athletes are notorious for this. That “lucky” jersey, the pre-game meal eaten in exactly the same order, the specific warm-up routine—these rituals don’t really affect performance, but they can boost confidence and calm nerves, which indirectly helps.

Lucky Charms and Talismans

Rabbit’s feet, four-leaf clovers, special coins—lots of people carry objects they believe bring good fortune. These beliefs come from cultural traditions and personal experiences. While there’s no scientific backing for their power, the comfort they provide is genuinely real.

The Jinx Effect

Ever avoided talking about something good that might happen because you didn’t want to “jinx” it? I worked in emergency rooms for many years, and no one would ever use the word “quiet” for fear that it would summon a sudden rush of ambulances. That’s magical thinking connecting your words to external outcomes in a totally irrational way.

Health Decisions

This gets more serious when magical thinking influences medical choices. Some people strongly believe in homeopathic remedies or alternative therapies that lack scientific validation. Interestingly, the placebo effect demonstrates how powerful belief can be: people sometimes experience modest health improvements simply because they believe a treatment works, although these effects are most evident in the relief of mild to moderate pain.

Gambling Behaviors

Casinos thrive on magical thinking. Blowing on dice, wearing lucky clothes, or believing you’re “due” for a win after several losses—these are all examples of the illusion of control. Gamblers think they can influence random outcomes through specific actions, which can fuel persistent gambling even when they’re losing money.

When Magical Thinking Enters Politics

Here’s where things get more complex and consequential. Magical thinking doesn’t just affect personal decisions—it shapes entire political movements and policy debates.

Conspiracy Theories

QAnon represents one of the most striking modern examples. Followers believe a secret group of powerful figures runs a global operation, and that certain political leaders possess almost supernatural abilities to fight against it. Despite zero credible evidence, this belief system has attracted significant followings, demonstrating how magical thinking can create entire alternate realities in the political sphere.

Pandemic Responses

COVID-19 brought magical thinking into sharp relief. Some people blamed 5G technology for causing the virus, leading to actual attacks on cell towers in multiple countries. Others promoted unproven treatments as miracle cures, despite scientific evidence showing they didn’t work and in some cases were harmful. The desire for simple answers to a complicated crisis made people vulnerable to these beliefs.

Vaccine denial, another pandemic-related example of magical thinking, attributes harmful effects to vaccines despite extensive scientific evidence to the contrary, while simultaneously crediting alternative approaches (like “natural immunity” alone) with special protective powers.

Economic Policies

“Trickle-down economics”—the idea that tax cuts for wealthy individuals automatically generate economic growth and increased government revenue—often functions as magical thinking in policy debates. This theory simplifies incredibly complex economic dynamics and, according to multiple economic analyses, lacks consistent empirical support. Critics point out it ignores the nuances of fiscal policy and income distribution.

Climate Change

Despite overwhelming scientific consensus, some political movements deny climate change reality. This sometimes involves believing that natural cycles alone explain everything, or that technological solutions will magically appear without significant policy intervention. This type of thinking often protects existing economic interests or ideological positions.

Why Our Brains Do This

Pattern Recognition

Humans are pattern-seeking machines. We’re wired to spot connections, even when they don’t exist—a tendency called apophenia. This helped our ancestors survive—better to assume that rustling bush is a tiger than to ignore it—but it also leads us to form superstitions and magical beliefs.

The Comfort Factor

When life feels uncertain or stressful, magical thinking offers psychological comfort. Rituals and lucky charms reduce anxiety and give us a sense of agency, even if that control is illusory.

Cultural Transmission

Many superstitions and magical beliefs get passed down through generations, becoming woven into cultural norms. When we see others engaging in these behaviors, it reinforces them. Social acceptance is powerful.

Political Advantages

In politics specifically, magical thinking persists because it:

  • Simplifies complex issues into digestible narratives
  • Strengthens group identity and belonging
  • Can be exploited by politicians and interest groups to mobilize support
  • Provides psychological certainty in a chaotic political landscape

Finding the Balance

Here’s the thing: magical thinking isn’t purely negative. It serves real psychological functions—helping us cope with uncertainty, reducing anxiety, and giving us a sense of control when the world feels chaotic.

The problem comes when we rely on it too heavily. Avoiding medical treatment for unproven remedies can have serious health consequences. Basing policy decisions on magical thinking rather than evidence can affect millions of people. The key is balance.

The Takeaway

Magical thinking connects us to our shared human history, from ancient animistic beliefs to modern political movements. It reveals how our minds constantly work to make sense of an unpredictable world. By understanding these cognitive patterns, we can appreciate their psychological benefits while staying alert to their limitations.

Whether we’re knocking on wood or evaluating political claims, recognizing magical thinking helps us become more critical consumers of information. We can honor the comfort these beliefs provide while still grounding important decisions in evidence and rational analysis.

The enchantment isn’t going anywhere—it’s part of being human. But awareness of it? That’s our best tool for navigating between the rational and the magical in both our personal lives and our shared political reality.
