
The Price Tag Mystery: Why Nobody Really Knows What Healthcare Costs in America

Imagine walking into a store where nothing has a price tag. When you get to the register, the cashier scans your items and tells you the total—but that total is different for every customer. Your neighbor might pay $50 for the same items that cost you $200. The store won’t tell you why, and you won’t find out until after you’ve already “bought” everything.

Welcome to American healthcare, where the simple question “how much does this cost?” has no simple answer.

You might think I’m exaggerating, but the evidence suggests otherwise. Research published in late 2023 by PatientRightsAdvocate.org found that prices for the same medical procedure can vary by more than 10 times within a single hospital depending on which insurance plan you have, and by as much as 33 times across different hospitals. A knee replacement that costs around $23,170 in Baltimore might run $58,193 in New York. An emergency department visit that one facility charges $486 for might cost $3,549 at another hospital for the identical service.

The fundamental problem is that hospitals and doctors don’t have one price for their services. They have dozens, sometimes hundreds, of different prices for the exact same procedure depending on who’s paying. This bizarre system evolved because most healthcare in America isn’t a simple transaction between patient and provider—there’s a third party in the middle called an insurance company, and that changes everything.

The Fiction of Chargemaster Prices

A hospital chargemaster is essentially the hospital’s internal price list: a massive catalog that assigns a dollar amount to every service, supply, test, medication, and procedure the hospital can bill for, from an aspirin to a complex surgery. Think of these listed prices like the manufacturer’s suggested retail price on a car: technically real, but almost nobody pays them. They are set very high and function mainly as a starting point for negotiations with insurers and government programs like Medicare and Medicaid, which typically pay much lower, pre-set rates. What an individual patient ultimately pays depends on several factors layered on top of the chargemaster price.

A hospital might list an MRI at $3,000 or a blood test at $500. But then insurance companies come in. They represent thousands or millions of potential patients, which gives them serious bargaining power. They negotiate with hospitals along these lines: “We’ll send you lots of patients, but only if you give us a discount.” So, the hospital agrees to accept much less—maybe they’ll take $1,200 for that $3,000 MRI or $150 for the blood test. This discounted amount is called the “negotiated rate,” and it’s what the insurance company will really pay.

Here’s where it gets messy: every insurance company negotiates its own rates with every hospital. Blue Cross might negotiate one price, Aetna a different price, UnitedHealthcare yet another. The same exact MRI at the same hospital might be $1,200 for one insurer’s customers and $1,800 for another’s. And these negotiated rates have traditionally been kept secret—treated like confidential business information that gives each party a competitive advantage.

The Write-Off Game

What happens to that difference between the chargemaster price and the negotiated rate? The hospital “writes it off.” That’s accounting language for “we accept that we’re not getting paid this money, and we’re taking it off the books.” If the hospital charged $3,000 but agreed to accept $1,200, they write off $1,800. This isn’t lost money in the normal sense—they never expected to collect it in the first place. The chargemaster prices are inflated specifically because everyone knows discounts are coming. Some hospitals now post “discounted cash prices” that are often far below the chargemaster price and sometimes even below some negotiated rates. These are sometimes, though not always, offered to uninsured patients, generally referred to as self-pay. There can be a catch—some hospitals require lump-sum payment of the total bill to qualify for the lower price.
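If you like seeing the arithmetic spelled out, here is a minimal Python sketch of the pricing layers described above. Every dollar figure is invented for illustration; real chargemaster, negotiated, and cash prices vary by hospital and by contract.

```python
# Toy model of the pricing layers described above; all figures are hypothetical.

def contractual_write_off(chargemaster_price: float, negotiated_rate: float) -> float:
    """The gap the hospital 'writes off' when an insurer pays its negotiated rate."""
    return chargemaster_price - negotiated_rate

mri_list_price = 3_000.00   # chargemaster (list) price
mri_negotiated = 1_200.00   # rate one insurer has negotiated
mri_cash_price = 1_000.00   # hypothetical discounted cash (self-pay) price

print(contractual_write_off(mri_list_price, mri_negotiated))  # 1800.0
# A cash price can undercut some negotiated rates:
print(mri_cash_price < mri_negotiated)  # True
```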

According to the American Hospital Association, U.S. hospitals collectively plan to write off approximately $760 billion in billed charges in 2025 across all categories of write-offs. That’s not a typo—$760 billion. These write-offs happen in several different situations. The most common are contractual write-offs, where the provider has agreed to accept less than their list price from insurance companies.

Hospitals have far more write-offs than just contractual ones. They also write off money for charity care (treating patients who can’t afford to pay anything), and they write off bad debt when patients could pay but don’t. They write off small balances that aren’t worth the administrative cost of collection, and they write off amounts tied to billing errors, denied claims, and coverage disputes. Healthcare providers typically adjust about 10 to 12 percent of their gross revenue through these various write-offs and claim adjustments.

Why Such Wild Variation?

Even with all these negotiated discounts built into the system, prices still vary enormously. A 2024 study from the Baker Institute found that for emergency department visits, hospitals in the top 10% charge three to seven times more than hospitals in the bottom 10% for the identical procedure. Research published in Health Affairs Scholar in early 2025 found that even after adjusting for differences between insurers and procedures, the top 25% of prices across all states are 48 percent higher than the bottom 25% for inpatient services.
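To make statistics like these concrete, here is a rough sketch of how a “top 25% versus bottom 25%” comparison gets computed. The emergency-visit prices below are made up for illustration; the studies cited above worked from millions of real negotiated rates.

```python
# Sketch of a quartile comparison; the price list is invented.
import statistics

ed_visit_prices = [486, 640, 810, 975, 1_150, 1_400, 1_900, 2_400, 3_000, 3_549]

q1, _median, q3 = statistics.quantiles(ed_visit_prices, n=4)  # quartile cut points
print(f"25th percentile: ${q1:,.0f}   75th percentile: ${q3:,.0f}")
print(f"Top quartile runs {(q3 / q1 - 1) * 100:.0f}% above the bottom quartile")
```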

Several factors drive this variation. Hospitals in areas with less competition can charge more because insurers have fewer alternatives for negotiation. Prestigious hospitals can demand higher rates because insurers want them in their networks to attract customers. Some insurance companies have more bargaining power than others based on their market share. There’s no central authority setting prices—it’s all private negotiations, hospital by hospital, insurer by insurer, procedure by procedure.

For patients, this creates a nightmare scenario. Even if you have insurance, you usually have no idea what you’ll pay until after you’ve received care. Your out-of-pocket costs depend on your deductible (the amount you pay before insurance kicks in), your copay or coinsurance (your share after insurance starts paying), and whether the negotiated rate between your specific insurance and that specific hospital is high or low. Two people with different insurance plans getting the same procedure at the same hospital on the same day can end up with drastically different bills.
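Here is a deliberately simplified model of that out-of-pocket math. Real plans add out-of-pocket maximums, copays, and network rules, and every number below is invented; the point is only that the same procedure produces very different bills under different deductibles, coinsurance rates, and negotiated prices.

```python
# Simplified out-of-pocket model: deductible applies first, then coinsurance.
# All numbers are hypothetical.

def patient_cost(negotiated_rate: float, deductible_remaining: float,
                 coinsurance: float) -> float:
    toward_deductible = min(negotiated_rate, deductible_remaining)
    remainder = negotiated_rate - toward_deductible
    return toward_deductible + remainder * coinsurance

# Two patients, same hospital, same day, different plans:
print(patient_cost(1_200.00, deductible_remaining=1_000.00, coinsurance=0.20))  # 1040.0
print(patient_cost(1_800.00, deductible_remaining=0.00, coinsurance=0.10))      # 180.0
```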

Research using new transparency data confirms this isn’t just anecdotal. A study from early 2025 found that for something as routine as a common office visit, mean prices ranged from $82 with Aetna to $115 with UnitedHealth. Within individual insurance companies, the price of the top 25% of office visits was 20 to 50 percent higher than the bottom 25%, meaning even within one insurer’s network, where you go or where you live makes a huge difference.

The Government Steps In

The federal government finally said “enough” and started requiring transparency. Since 2021, hospitals must post their prices online, including what they’ve negotiated with each insurance company. The Centers for Medicare and Medicaid Services (CMS) strengthened these requirements in 2024, mandating standardized formats and increasing enforcement. Health insurance plans face similar requirements to disclose their negotiated rates.

The theory was straightforward: if patients could see prices ahead of time, they could shop around, which would force prices down through competition. CMS estimated this could save as much as $80 billion by 2025. The idea seemed sound—transparency works in other markets, so why not healthcare?

In practice, it’s been messy. A Government Accountability Office (GAO) report from October 2024 found that while hospitals are posting data, stakeholders like health plans and employers have raised serious concerns about data quality. They’ve encountered inconsistent file formats, extremely complex pricing structures, and data that appears to be incomplete or possibly inaccurate. Even when hospitals post the required information, it’s often so convoluted that comparing prices across facilities becomes nearly impossible for average consumers.

An Office of Inspector General report from November 2024 found that not all selected hospitals were complying with the transparency requirements in the first place. And CMS still doesn’t have robust mechanisms to verify whether the data being posted are accurate and complete. The GAO recommended that CMS assess whether hospital pricing data are sufficiently complete and accurate to be usable, and whether additional enforcement is needed.

Imagine trying to comparison shop when one store lists prices in dollars, another in euros, and a third uses a proprietary currency they invented. That’s roughly where we are with healthcare price data—technically available, but practically unusable for most people trying to make informed decisions.

In 2025, President Trump signed a new executive order aimed at strengthening enforcement of the price transparency rules and directing agencies to standardize hospital and insurer pricing information and make it more accessible; this action built on rather than reduced the earlier requirements. Hopefully this will improve patients’ ability to learn real costs, but it is my opinion that the industry will continue to resist full and open compliance.

The Limits of Shopping for Healthcare

There’s also a deeper philosophical problem: for healthcare to work like a normal market where price transparency drives competition, patients would need to be able to shop around based on price. That could work for scheduled procedures like knee replacements, colonoscopies, or elective surgeries. You have time to research, compare, and choose.

But it doesn’t work at all when you’re having a heart attack, or your child breaks their arm. You go to the nearest hospital, period. You’re not calling around asking about prices while someone’s having a medical emergency. Even for non-emergencies, choosing based on price assumes equal quality across providers, which isn’t always true and is even harder to assess than price itself.

A study on price transparency tools found mixed results on whether they truly reduce spending. Some research shows modest savings when people use price comparison tools for shoppable services like imaging and lab work. But utilization of these tools remains low, and for many healthcare encounters, price shopping simply isn’t practical or appropriate.

Who Really Knows?

So, who truly understands what things cost in this system? Hospital administrators know what different insurers pay them for specific procedures, but that knowledge is limited to their facility. They don’t necessarily know what other hospitals charge. Insurance company executives know what they’ve negotiated with various hospitals in their network, but they haven’t historically shared meaningful price information with their customers in advance. And they don’t know what their competitors have negotiated.

Patients, caught in the middle, often find out their costs only when they receive a bill weeks after treatment. By that point, the care has been delivered, and the financial damage is done. Recent surveys suggest that surprise medical bills remain a significant problem, with many patients receiving unexpected charges from out-of-network providers they didn’t choose or even know were involved in their care.

The people who are starting to get a comprehensive view are researchers and policymakers analyzing the newly available transparency data. Studies published in 2024 and 2025 using these data have given us unprecedented visibility into pricing patterns and variation. But this is aggregate, statistical knowledge—it helps us understand the system but doesn’t necessarily help individual patients figure out what they’ll pay for a specific procedure.

Where We Stand

The transparency regulations represent a genuine attempt to inject some market discipline into healthcare pricing. Making negotiated rates public breaks down the information asymmetry that has allowed prices to vary so wildly. In theory, if patients and employers can see that Hospital A charges twice what Hospital B does for the same procedure, competitive pressure should push prices toward the lower end.

There’s some early evidence this might be working. A study of children’s hospitals found that price variation for common imaging procedures decreased by about 19 percent between 2023 and 2024, though overall prices continued rising. Whether this trend will continue and expand to other types of facilities remains to be seen. I am concerned that, rather than lowering overall prices, transparency may cause hospitals at the lower end to raise their prices closer to those at the higher end.

Significant obstacles remain. The data quality issues need resolution before the information becomes truly usable. Many patients lack either the time, expertise, or practical ability to shop based on price. And the fundamental structure of American healthcare—with its complex interplay of providers, insurers, pharmacy benefit managers, and government programs—means that even perfect price transparency won’t create a simple, straightforward market.

So, to return to the original question: does anyone truly know the cost of medical care in the United States? In an aggregate sense, researchers and policymakers are starting to understand the patterns thanks to transparency requirements. The data are revealing just how variable and opaque pricing has been. But as a practical matter for individual patients trying to figure out what they’ll pay for needed care, not really. The information is becoming available but remains largely inaccessible or incomprehensible for ordinary people trying to make informed healthcare decisions.

The $760 billion in annual write-offs tells you everything you need to know: the posted prices are largely fictional, the negotiated prices vary wildly, and the system has evolved to be so complex that even the people operating within it struggle to understand the full picture. We’re making progress toward transparency, but we’re a long way from a healthcare system where patients can confidently get the answer to the simple question: “How much will this cost?”

A closing thought: all of this could be solved by a single-payer healthcare system such as the one I proposed in my previous post, America’s Healthcare Paradox: Why We Pay Double and Get Less.

Critical Ignoring: The Skill You Didn’t Know You Needed

You’ve probably spent years learning how to pay attention—reading closely, analyzing deeply, and thinking critically. But here’s something nobody taught you in school: in today’s digital world, knowing what not to pay attention to might be just as important as knowing what deserves your focus.

That’s the essence of critical ignoring, a concept developed by researchers Anastasia Kozyreva, Sam Wineburg, Stephan Lewandowsky, and Ralph Hertwig. It’s basically the skill of deliberately and strategically choosing what information to ignore so you can invest your limited attention where it truly matters. I first became aware of this concept just a few weeks ago while reading an article by Christopher Mims in the Wall Street Journal.

Why This Matters Now

Think about your typical day online. You’re bombarded with news alerts, social media posts, clickbait headlines, and outrage-inducing content designed specifically to hijack your attention. Traditional advice tells you to carefully evaluate each source, read critically, and fact-check thoroughly. But here’s the problem: if you’re investing serious mental energy evaluating sources that should have been ignored in the first place, your attention has already been stolen.

The researchers make a crucial observation about how the digital world has changed the game. In the past, information was scarce and we had to seek it out. Now we’re drowning in it, and much of it is deliberately designed to be attention-grabbing through tactics like sparking curiosity, outrage, or anger. Our attention has become the scarce resource that advertisers and content providers are constantly trying to seize and exploit.

Critical ignoring is not sticking your head in the sand or refusing to hear anything that challenges you. Apathy is “I don’t care about any of this.” Critical ignoring is “I care enough to be selective, so that I can focus on what truly matters.” Denial is “I refuse to believe or even look at uncomfortable evidence.” Critical ignoring is “I’m not going to invest my time in sources that are clearly unreliable, or in discussions that are going nowhere, so I can better examine serious evidence elsewhere.”

The key distinction is that critical ignoring always serves better judgment, not comfort at any cost.

How To Actually Do It

The researchers outline three practical strategies you can use right away:

Self-Nudging: This is about redesigning your digital environment to remove temptations before they become problems. Think of it as changing your information ecosystem. Instead of relying on willpower alone, you might unsubscribe from inflammatory newsletters, turn off news notifications that stress you out, or use browser extensions to block certain websites during work hours. The idea is to design your environment so you can implement the resolutions you’ve made.

Lateral Reading: This one’s particularly clever. Instead of reading a website from top to bottom like you’ve always done, do what professional fact-checkers do: open another browser tab and quickly research who’s behind the source. That way, you spend sixty seconds searching for information about the source rather than twenty minutes carefully reading content from a source that turns out to be backed by a lobbying group or known misinformation peddler. The researchers note this is often faster and more effective than trying to critically evaluate the content itself.

Don’t Feed the Trolls: This strategy advises you not to reward malicious actors with your attention.  When you encounter inflammatory comments, deliberately misleading posts, or content clearly designed to provoke anger, the best response is often no response at all. Engaging with trolls or bad-faith content just amplifies it and wastes your mental energy.

I’ll Add Another

Ignore the Influencers: Refuse to click on miracle-cure headlines or anecdote-driven threads when you can go directly to professional medical sources, systematic reviews, or guidelines from reliable organizations. Ignore influencers’ health claims unless they clearly cite solid evidence and expertise.

The Bigger Picture

What makes critical ignoring different from just being selective is that it’s strategic and informed. To know what to ignore, you need to understand the landscape first. It’s not about burying your head in the sand—it’s about being intentional with your attention budget.

The traditional approach of “pay careful attention to everything” made sense in a world of vetted textbooks and curated libraries. But on the unvetted internet, that approach often ends up being a colossal waste of time and energy. The admonition to “pay careful attention” is exactly what attention thieves exploit.

Making It Work For You

Start by taking inventory of your information landscape—all the apps, websites, notifications, and sources competing for your attention. Which ones consistently deliver value? Which ones leave you feeling manipulated, angry, or stressed? Practice self-nudging by removing or limiting access to the latter category.

When you encounter a new source making bold claims, resist the urge to dive deep into their content immediately. Instead, spend a minute or two doing lateral reading. Search for “who runs [site name]” or “[organization name] funding.” You’ll be amazed how quickly you can identify whether something deserves your time.

And when you see obvious rage-bait or trolling, practice the “scroll on by” technique. Your attention is valuable—don’t give it away for free to people trying to manipulate you.

Critical ignoring isn’t about being less informed. It’s about being better informed by focusing your limited cognitive resources on reliable sources and meaningful content rather than letting the algorithm’s latest outrage-of-the-day consume your mental bandwidth.

Sources:

Kozyreva, A., Wineburg, S., Lewandowsky, S., & Hertwig, R. (2023). Critical Ignoring as a Core Competence for Digital Citizens. Current Directions in Psychological Science, 32(1), 81-88. https://journals.sagepub.com/doi/10.1177/09637214221121570 (full text also at https://pmc.ncbi.nlm.nih.gov/articles/PMC7615324/; interview with the lead researcher at https://www.mpg.de/19554217/new-digital-competencies-critical-ignoring)

Mims, Christopher. “Your Key Survival Skill for 2026: Critical Ignoring.” The Wall Street Journal, January 3, 2026.

American Psychological Association, Speaking of Psychology podcast on attention spans. https://www.apa.org/news/podcasts/speaking-of-psychology/attention-spans

Lane, S., & Atchley, P. (2020). “Human Capacity in the Attention Economy.” American Psychological Association.

Assessing the Trump-Orwell Comparisons: Warning, Not Prophecy

The comparison between the Trump administration and George Orwell’s dystopian works has recently become one of the most prevalent political metaphors, one I’ve used myself. Following Trump’s second inauguration in January 2025, sales of 1984 surged once again on Amazon’s bestseller lists, just as they did during his first term.

These comparisons are rhetorically powerful, but their accuracy depends on how literally Orwell is read and how carefully distinctions are drawn between authoritarian warning signs and fully realized totalitarian systems. Let me walk you through the key parallels, the evidence supporting them, and the critical questions we should be asking.

Understanding Orwell’s Core Themes

Before diving into the comparisons, it’s worth revisiting what Orwell was actually warning us about. In 1984, published in 1949, Orwell depicted a totalitarian state where the Party manipulates reality through “Newspeak” (language control), “doublethink” (holding contradictory beliefs), the “memory hole” (historical revision), and constant surveillance by Big Brother. The novel’s famous slogans—”War is Peace, Freedom is Slavery, Ignorance is Strength”—exemplify how the Party inverts the very meaning of words.

Animal Farm, written as an allegory of the Soviet Union under Stalin, traces how a revolutionary movement devolves into dictatorship. The pigs, led by Napoleon, gradually corrupt the founding principles of equality, with Squealer serving as the regime’s propaganda minister who constantly rewrites history and justifies Napoleon’s increasingly authoritarian actions.

The Major Parallels

The most famous early comparison emerged during Trump’s first term when adviser Kellyanne Conway defended false crowd size claims with the phrase “alternative facts.” This triggered the first major 1984 sales spike in 2017. According to multiple sources, critics immediately drew connections to Orwell’s concept of manipulating language to control thought.

In the current administration, commentators have identified several Orwellian language patterns. The administration has restricted use of certain words on government websites—including “female,” “Black,” “gender,” and “sexuality”—reminiscent of how Newspeak aimed to “narrow the range of thought” by eliminating words. An executive order of January 29, 2025, titled “Ending Radical Indoctrination in K-12 Schooling,” has been criticized as doublespeak, using the language of educational freedom while actually restricting what can be taught. (“Doublespeak” is not a word Orwell himself used; it evolved later as a blend of his newspeak and doublethink.)

Perhaps the most concrete parallel involves the systematic deletion of historical content from government websites. The Organization of American Historians condemned the administration’s efforts to “reflect a glorified narrative while suppressing the voices of historically excluded groups”. Specific documented deletions include information about Harriet Tubman, the Tuskegee Airmen (later restored after public outcry), the Enola Gay airplane (accidentally caught in a purge of anything containing “gay”), and nearly 400 books removed from the U.S. Naval Academy library relating to diversity topics. The Smithsonian’s National Museum of American History also removed references to Trump’s impeachments from its “Limits of Presidential Power” exhibit, which critics including Senator Adam Schiff called “Orwellian”.

Trump’s repeated characterization of political opponents as the “enemy from within” and the media as the “enemy of the people” parallels 1984’s Emmanuel Goldstein figure and the ritualized Two Minutes Hate sessions. One analysis suggests Trump leads Americans through “a succession of Two Minute Hates—of freeloading Europeans, prevaricating Panamanians, vile Venezuelans, Black South Africans, corrupt humanitarians, illegal immigrants, and lazy Federal workers”.

Multiple sources document that new White House staff must undergo “loyalty tests” and some face polygraph examinations. Trump’s statement “I need loyalty. I expect loyalty” echoes 1984’s declaration that “There will be no loyalty, except loyalty to the Party”. Within weeks of his second inauguration, Trump dismissed dozens of inspectors general—the internal government watchdogs. According to reports from Politico and Reuters, several have filed lawsuits claiming their removal violated federal law. An executive order titled “Ensuring Accountability for All Agencies” placed previously independent agencies like the SEC and FTC under direct White House supervision.

The Animal Farm Connections

While 1984 gets more attention, Stanford literature professor Alex Woloch argues that Animal Farm might be more relevant because “it traces that sense of a ‘slippery slope'” from democracy to totalitarianism, whereas in 1984 the totalitarian system is already fully established.

There are echoes of Animal Farm in the way populist rhetoric has framed liberals, progressive institutions, and the press as enemies of “the people,” while power was being consolidated within Trump’s narrow leadership circle. Orwell’s pigs do not abandon revolutionary language; they repurpose it. The “ordinary” supporters are exhorted to endure sacrifices and to direct anger at opposing groups, while political insiders consolidate authority and wealth—echoing the pigs’ gradual move into the farmhouse and adoption of human privileges. Critics argue that Trump’s sustained use of grievance-based populism, even while wielding executive power, fits this pattern symbolically if not structurally.

Other parallels to Animal Farm include the administration’s communication strategy of inverting reality, which echoes Napoleon’s propaganda minister Squealer, and the gradual corruption of founding principles while maintaining revolutionary rhetoric like “drain the swamp”. Critics also point to the scapegoating of political opponents and immigrants, much as Napoleon blamed Snowball for all problems, and to the taking of credit for others’ achievements, just as Napoleon did with the other animals’ work. In the novel, Napoleon demands full investigations of Snowball even after discovering he had nothing to do with alleged misdeeds, much as Trump demanded investigations of Hillary Clinton, James Comey, Letitia James, and Jerome Powell while avoiding scrutiny of his own conduct.

As in Orwell’s farm, where the constant invoking of enemies keeps the animals fearful and loyal, the politics of permanent crisis and blame are being used to normalize increasingly aggressive behavior by those in power.

Critical Perspectives and Limitations

These comparisons raise several important concerns that deserve serious consideration. Orwell was writing about actual totalitarian regimes—Stalinist Russia and Nazi Germany—where millions died in purges, gulags, and genocides. The United States in 2026, despite concerning trends, still maintains functioning courts, elections, a free press, and a civil society. Some observers are warning against trivializing real authoritarian regimes by making overstated comparisons.

The Trump administration’s frequent attacks on the press, civil servants, and election administrators do resemble early warning signs Orwell would have recognized—not as proof of totalitarianism, but as a stress test on democratic norms.

Conservative commentators argue that these comparisons are exaggerated partisan attacks that misrepresent Trump’s actions. They point out that some court challenges to administration actions have succeeded, media criticism continues unabated, and political opposition remains robust—none of which would be possible in Orwell’s Oceania. The question becomes whether we’re witnessing isolated, though concerning, actions or a systematic pattern—what Professor Woloch calls the “slippery slope” question.

One opinion piece suggested Trump’s actions resemble the chaotic, rule-breaking fraternity culture of “Animal House” more than the calculated totalitarianism of Orwell’s works—emphasizing bombast and spectacle over systematic control. This view argues that the MAGA movement is more “Blutonian than Orwellian,” driven by emotional appeals and personality rather than systematic thought control.

Where the Comparisons Are Strongest and Weakest

Based on my analysis, the comparisons appear most accurate in several specific areas. The pattern of language manipulation and redefinition—calling restrictions “freedom” and censorship “transparency”—closely mirrors doublespeak. The documented systematic removal of historical content from government sources directly parallels the memory hole concept. The dismissal of senior officials such as the head of the Bureau of Labor Statistics after an unfavorable jobs report, the wholesale firing of agency inspectors general, and the signaling that neutral experts should conform to political expectations all mirror the Orwellian demand for loyalty. The assumption of control over previously independent agencies and the pressure on courts to allow the administration’s consolidation of power have parallels in total Party control. Unleashing ICE agents on the general public and excusing the murder of protesters are chillingly similar to the Thought Police and the “vaporizing” of citizens in Oceania. Perhaps most strikingly, Trump’s 2018 statement “What you’re seeing and what you’re reading is not what’s happening” nearly quotes Orwell’s line: “The party told you to reject the evidence of your eyes and ears”.

The comparisons are most strained when they overstate the current reality by suggesting America has already become Oceania; democratic institutions that were entirely absent in Oceania are still functioning in America. Unlike 1984’s Winston, Americans retain significant ability to resist and organize. There is no single state monopoly over information. State and local governments and civil society remain vigorous and are often hostile to Trump. Additionally, some comparisons conflate authoritarian-sounding rhetoric with actual totalitarian control, and the two are not equivalent.

Speculation: The Trajectory Question

The pattern of actions I’ve documented—systematic information control, loyalty purges, attacks on institutional independence, and explicit statements about seeking a third term—suggests a consistent direction rather than random actions. If these trends continue unchecked, particularly combined with further erosion of electoral integrity, increased prosecution of political opponents through mechanisms like the “Weaponization Working Group,” greater control over media and information, and weakening of judicial independence, then the slide toward authoritarianism could accelerate. As I am writing this article, Trump continues to promote what he calls the “Board of Peace,” a proposed international organization that is an attempt to create a U.S.-led alternative to the United Nations. The scholar Alfred McCoy notes that Trump appears to be pursuing what Orwell described: a world divided into three regional blocs under strongman leaders, with weakened international institutions.

However, several factors may counter this trajectory. Strong civil society and activist movements continue organizing opposition movements. Independent state governments push back against federal overreach and robust legal challenges have blocked numerous executive actions. The free press continues investigative reporting despite attacks. Congressional resistance still exists—even Senator Booker’s 25-hour speech on constitutional abuse entered the Congressional Record as a permanent historical marker.

My speculation is that the most likely outcome is neither complete Orwellian dystopia nor a comfortable return to democratic norms, but rather what political scientists call “competitive authoritarianism” or “illiberal democracy”—where democratic forms persist but are increasingly hollowed out, opposition exists but faces systematic disadvantages, and truth becomes increasingly contested. The key question isn’t whether we’ll replicate 1984 exactly, but whether enough democratic safeguards will hold to prevent sliding further into authoritarianism. One observer standing before a giant banner of Trump’s face in Washington noted that “Orwell’s world isn’t just fiction. It’s a mirror—reflecting what happens when power faces no resistance, when truth bends to loyalty, and when silence becomes the safest response”.

The Bottom Line

The Orwell comparisons aren’t perfect historical analogies, but they’re not baseless partisan rhetoric either. They identify genuine patterns of authoritarian behavior that merit serious attention—the manipulation of language to distort reality, the systematic rewriting of historical narratives, the demand for personal loyalty over institutional integrity, and the rejection of shared factual reality. I am concerned about the increasing use of Nazi-inspired phrases and themes by members of the Trump administration, most recently Kristi Noem’s use of the phrase “one of us-all of you”. While not a formal written Nazi policy, it reflects Nazi practice in dealing with partisan attacks in occupied countries and can only be viewed as a threat of violence against American citizens.

Whether these patterns represent isolated troubling actions or the beginnings of systematic democratic erosion remains the crucial—and still open—question. As Orwell himself noted, he didn’t write to predict the future but to prevent it. The value of these comparisons may ultimately lie not in their precision as historical parallels, but in their power to alert citizens to concerning trends before they become irreversible.

Key Sources

  • Organization of American Historians statements on historical revisionism
  • Politico and Reuters reporting on inspector general firings
  • The Washington Post and Axios on executive order impacts
  • Stanford Professor Alex Woloch’s analysis in The World (https://theworld.org/stories/2017/01/25/people-are-saying-trumps-government-orwellian-what-does-actually-mean)
  • World Press Institute analysis (https://worldpressinstitute.org/the-orwell-effect-how-2025-america-felt-like-198/)
  • Adam Gopnik, “Orwell’s ‘1984’ and Trump’s America,” The New Yorker, Jan. 26, 2017.
  • “Trump’s America: Rethinking 1984 and Brave New World,” Monthly Review, Sept. 7, 2025.
  • “False or misleading statements by Donald Trump,” Wikipedia (overview of documented falsehoods).
  • “Trump’s Efforts to Control Information Echo an Authoritarian Playbook,” The New York Times, Aug. 3, 2025.
  • “Trump’s 7 most authoritarian moves so far,” CNN Politics, Aug. 13, 2025.
  • “The Orwellian echoes in Trump’s push for ‘Americanism’ at the Smithsonian,” The Conversation, Aug. 20, 2025.
  • “Everything Is Content for the ‘Clicktatorship’,” WIRED, Jan. 13, 2026.
  • “’Animal Farm’ Perfectly Describes Life in the Era of Donald Trump,” Observer, May 8, 2017.
  • “Ditch the ‘Animal Farm’ Mentality in Resisting Trump Policies,” YES! Magazine, May 8, 2017.

Full disclosure: I recently bought a hat that says “Make Orwell Fiction Again”.

The Strange Tale of Spontaneous Human Combustion

Did you ever run into an idea so strange that you can’t quite understand how anyone ever took it seriously? Recently, while reading about historical curiosities in Pseudoscience by Kang and Pederson, I was reminded of one of the most enduring examples: spontaneous human combustion.

The classic image is always the same. Someone enters a room and finds a small pile of ashes where a person once sat. The body is nearly destroyed, yet the chair beneath it is barely scorched and the rest of the room looks strangely untouched. For centuries, this baffling scene was explained by a dramatic idea—that a person could suddenly burst into flames from the inside, without any external fire at all.

It sounds like something lifted straight from a gothic novel, but belief in spontaneous human combustion stretches back to at least the seventeenth century and reached its peak in the Victorian era. To understand why it gained such traction, it helps to look at the social attitudes of the time, the cases that convinced people it was real, and what modern forensic science eventually uncovered.

Much of the early belief rested on moral judgment rather than evidence. In the nineteenth century, spontaneous human combustion was widely accepted as a kind of divine punishment. Many of the alleged victims were described as heavy drinkers, often elderly, overweight, or socially isolated, and women were frequently overrepresented in the reports. To Victorian minds, this pattern felt meaningful. Alcohol was flammable, after all, and it seemed reasonable—at least then—to assume that a body saturated with spirits might somehow ignite. Sensational newspaper reporting amplified the mystery, presenting lurid details while glossing over inconvenient facts.

The idea gained intellectual credibility in 1746 when Paul Rolli, a Fellow of the Royal Society, formally used the term “spontaneous human combustion” while describing the death of Countess Cornelia Zangari Bandi. The involvement of a respected scientific figure gave the concept legitimacy that lingered for generations.

Several cases became canonical. Countess Bandi’s death in 1731 was described as leaving little more than ashes and partially intact legs, still clothed in stockings. In 1966, John Irving Bentley of Pennsylvania was found almost completely burned except for one leg, with his pipe discovered intact nearby. Mary Reeser, known as the “Cinder Woman,” died in Florida in 1951, leaving behind melted fat embedded in the rug near where she had been sitting. As recently as 2010, an Irish coroner ruled that spontaneous human combustion caused the death of Michael Faherty, whose body was found badly burned near a fireplace in a room that showed little fire damage. Over roughly three centuries, about two hundred such cases have been cited worldwide.

Believers proposed explanations that ranged from the scientific-sounding to the overtly theological. Alcoholism was the most popular theory, with some physicians genuinely arguing that chronic drinking made the human body combustible. Earlier medical thinking leaned on imbalances of bodily humors, while later writers speculated about unknown chemical reactions producing internal heat. Religious interpretations framed these deaths as punishment for sin. Even in modern times, a few proponents have suggested that acetone buildup in people with alcoholism, diabetes, or extreme diets could somehow trigger combustion.

The idea was so culturally embedded that Charles Dickens famously killed off the alcoholic character Mr. Krook by spontaneous combustion in Bleak House. When critics objected, Dickens defended the plot choice by citing what he believed were credible historical and medical sources.

The illusion of the supernatural persisted because the circumstances were almost perfectly misleading. Victims were typically alone, elderly, or physically impaired, unable to respond quickly to a smoldering fire. The localized damage looked impossible to the untrained eye. Potential ignition sources were often destroyed in the fire itself. And dramatic storytelling filled in the gaps left by incomplete investigations.

What actually happens in these cases is far less mystical and far more unsettling. Modern forensic science points to an explanation known as the “wick effect.” In this scenario, there is always an external ignition source—often a cigarette, candle, lamp, or fireplace ember. Once clothing catches fire, heat melts the person’s body fat. That liquefied fat soaks into the clothing, which then behaves like a candle wick. The fire burns slowly and steadily, sometimes for hours, consuming much of the body while leaving nearby objects relatively unscathed.

This effect has been demonstrated experimentally. In the 1960s, researchers at Leeds University showed that cloth soaked in human fat could sustain a slow burn for extended periods once ignited. In 1998, forensic scientist John de Haan famously replicated the effect for the BBC by burning a pig carcass wrapped in a blanket. The result closely matched classic spontaneous combustion scenes: severe destruction of the body, with extremities left behind and limited damage to the surrounding room.

The reason these fires don’t usually engulf the entire space is simple physics. Flames rise more easily than they spread sideways, and the heat output of a wick-effect fire is relatively localized. It’s similar to standing near a campfire—you can be close without catching fire yourself.

Investigators Joe Nickell and John F. Fischer examined dozens of historical cases and found that every one involved a plausible ignition source, details that earlier accounts often ignored or downplayed. When these factors are restored to the narrative, the mystery largely disappears.

As science writer Benjamin Radford has pointed out, if spontaneous human combustion were truly spontaneous, we would expect it to occur randomly and frequently, in public places as well as private ones. Instead, it consistently appears in situations involving isolation and an external heat source.

The bottom line is straightforward. There is no credible scientific evidence that humans can burst into flames without an external ignition source. What has been labeled spontaneous human combustion is better understood as a tragic combination of accidental fire and the wick effect. The myth endured because it blended moral judgment, fear, and incomplete science into a compelling story. Today, forensic investigation has replaced superstition with explanation, even if the results remain unsettling.

Spontaneous human combustion survives as a reminder of how easily mystery fills the space where evidence is thin—and how patiently applied science eventually closes that gap.


Sources and Further Reading

Peer-reviewed forensic and medical analyses are available through the National Center for Biotechnology Information, including “So-called Spontaneous Human Combustion” in the Journal of Forensic Sciences (https://pubmed.ncbi.nlm.nih.gov/21392004/) and Koljonen and Kluger’s 2012 review, “Spontaneous human combustion in the light of the 21st century,” published in the Journal of Burn Care & Research (https://pubmed.ncbi.nlm.nih.gov/22269823/).

General scientific and historical overviews can be found in Encyclopædia Britannica’s article “Is Spontaneous Human Combustion Real?” (https://www.britannica.com/story/is-spontaneous-human-combustion-real), Scientific American’s discussion of the wick effect (https://www.scientificamerican.com/blog/cocktail-party-physics/burn-baby-burn-understanding-the-wick-effect/), and Live Science’s summary of facts and theories (https://www.livescience.com/42080-spontaneous-human-combustion.html).

Accessible explanatory pieces are also available from HowStuffWorks (https://science.howstuffworks.com/science-vs-myth/unexplained-phenomena/shc.htm), History.com (https://www.history.com/articles/is-spontaneous-human-combustion-real), Mental Floss (https://www.mentalfloss.com/article/22236/quick-7-seven-cases-spontaneous-human-combustion), and All That’s Interesting (https://allthatsinteresting.com/spontaneous-human-combustion). Wikipedia’s entries on spontaneous human combustion and the wick effect provide comprehensive background and references at https://en.wikipedia.org/wiki/Spontaneous_human_combustion and https://en.wikipedia.org/wiki/Wick_effect.

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.”  To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” where status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term has become a cultural flashpoint. Here’s the thing: real progress requires acknowledging that both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, and being empathetic and charitable, then call me WOKE.

Bull Markets, Bear Markets and the Story Behind Wall Street’s Most Famous Animals

If you’ve ever caught a business news segment, you’ve probably heard anchors throwing around terms like “bull market” and “bear market” as if everyone just naturally knows what they mean. But beyond the basic idea that one’s good and one’s bad, the real mechanics of these market conditions—and how they got their animal nicknames—are pretty interesting.

How the Stock Market Works (The Quick Version)

Before we dive into bulls and bears, let’s cover the basics. The stock market is essentially a place where people buy and sell ownership stakes in companies. When you buy a share of stock, you’re purchasing a tiny piece of that company. The price of that share goes up or down based on how many people want to buy it versus how many want to sell it—classic supply and demand.

Companies sell shares to raise money for growth, and investors buy them hoping the company will do well and the stock price will increase. The overall “market” is tracked through indexes like the S&P 500 or Dow Jones Industrial Average, which measure how a group of major companies are performing. When most stocks are rising, we say the market is up; when most are falling, the market is down.
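For the curious, here is a toy sketch of how a capitalization-weighted index (the general approach of the S&P 500) boils many stock prices down to one number. The companies and figures are invented, and real index math adds divisors and float adjustments that this ignores.

```python
# Toy cap-weighted index: weight each stock by market value (price * shares).
# Companies and numbers are invented.

stocks = {
    # name: (share_price, shares_outstanding)
    "MegaCorp": (150.00, 2_000_000),
    "MidCo":    (80.00, 500_000),
    "SmallBiz": (25.00, 100_000),
}

total_market_cap = sum(price * shares for price, shares in stocks.values())
print(f"Index level (scaled): {total_market_cap / 1_000_000:.1f}")
# If most constituents' prices rise, the total rises: "the market is up."
```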

What Bull and Bear Markets Actually Mean

A bull market refers to a period when stock prices are rising, typically defined as a 20% or more increase from recent lows. During bull markets, investors are optimistic, companies are generally doing well, and people are more willing to take risks with their money. Bull markets usually are driven by a strong economy with low inflation and optimistic investors. Think of the economic boom of the late 1990s or the recovery after the 2008 financial crisis—those were classic bull markets where prices kept climbing for years.

A bear market is essentially the opposite: a general decline in the stock market over time, usually defined as a 20% or more price decline sustained over at least a two-month period. During bear markets, investors get nervous, sell off their holdings, and pessimism spreads. A decline of 10% to 20% is classified as a correction; every bear market passes through correction territory first, but not every correction deepens into a bear market. The Great Depression, the 2008 financial crisis, and the COVID-19 pandemic’s initial impact all triggered bear markets.
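Because these labels reduce to percentage thresholds, they are easy to encode. Here is a minimal sketch; real-world classification also weighs how long a move lasts and which index you measure, which this ignores.

```python
# Rough encoding of the threshold definitions above (percentages only).

def classify_market(fractional_change: float) -> str:
    """fractional_change: e.g. -0.25 for a 25% drop from the recent peak,
    +0.30 for a 30% rise from the recent trough."""
    if fractional_change >= 0.20:
        return "bull market"
    if fractional_change <= -0.20:
        return "bear market"
    if fractional_change <= -0.10:
        return "correction"
    return "ordinary fluctuation"

print(classify_market(-0.15))  # correction
print(classify_market(-0.34))  # bear market
print(classify_market(0.28))   # bull market
```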

The Colorful History Behind the Terms

Now here’s where things get interesting. These terms didn’t come from some modern marketing genius—they trace back to 18th century London, and the story involves everything from old proverbs to violent animal fights to one of history’s biggest financial scandals.

The “bear” term came first. Etymologists point to an old proverb warning that it is not wise “to sell the bear’s skin before one has caught the bear”. This saying was about the foolishness of counting on something before you actually have it. By the early 1700s, traders who engaged in short selling (betting that prices would fall) were called “bear-skin jobbers” because they sold a bear’s skin—the shares—before catching the bear. The term eventually got shortened to just “bears.”

The real watershed moment came with the South Sea Bubble of 1720. The South Sea Company was a British joint-stock company founded by an act of Parliament in 1711; in 1720 it assumed most of the British national debt and convinced investors to trade their state annuities for company stock sold at a very high premium. When everything collapsed, share prices dropped dramatically, starting a “bear market,” and the story became the subject of many literary works and went down in history as an infamous cautionary tale.

As for “bull,” the origins are a bit murkier. The first known instance of the market term “bull” popped up in 1714, shortly after the “bear” term emerged. Most historians think it arose as a natural counterpoint to “bear,” possibly influenced by bull-baiting and bear-baiting, two animal fighting sports popular at the time—though I should note that’s somewhat speculative.

There’s also a popular explanation about how the animals attack: bears swipe downward with their paws while bulls thrust upward with their horns, which nicely mirrors market movements. While that’s a helpful memory device, it’s probably more of a convenient coincidence or a retroactive description than the actual origin. The term “bull” originally meant a speculative purchase made in the expectation that stock prices would rise, and was later applied to the person making such purchases.

Why This Still Matters

These metaphors have stuck around for three centuries because they work. They’re visceral and easy to remember—you can picture a charging bull or a hibernating bear without needing an economics degree. They also capture something real about market psychology: the aggressive optimism of bulls pushing prices up versus the defensive pessimism of bears hunkering down.

Understanding these terms helps you follow financial news and, more importantly, recognize when markets are shifting. Knowing you’re in a bull market might make you less surprised by rising prices, while recognizing a bear market can help you avoid panic-selling when things look grim.

Bull and bear markets are among those terms I’d heard for years without ever knowing their origin. This article is my attempt to explain them to myself.


Supply-Side Economics and Trickle-Down: What Actually Happened?

The Basic Question

You’ve probably heard politicians arguing about tax cuts—some promising they’ll supercharge the economy, others dismissing them as giveaways to the rich. These debates usually involve two terms that get thrown around like political footballs: “supply-side economics” and “trickle-down economics.” But what do these terms actually mean, and more importantly, do they work? After four decades of real-world experiments, we finally have enough data to answer that question.

Understanding Supply-Side Economics

Supply-side economics is a legitimate economic theory that emerged in the 1970s when the U.S. economy was struggling with both high inflation and high unemployment—a combination that traditional economic theories said shouldn’t happen. The core idea is straightforward: economic growth comes from producing more goods and services (the “supply” side), not just from boosting consumer demand.

The theory rests on three main pillars. First, lower taxes—the thinking is that if people and businesses keep more of their money, they’ll work harder, invest more, and create jobs. According to economist Arthur Laffer’s famous curve, there’s supposedly a sweet spot where lower tax rates can actually generate more government revenue because the economy grows so much. Second, less regulation removes government restrictions so businesses can innovate and operate more efficiently. Third, smart monetary policy keeps inflation in check while maintaining enough money in the economy to fuel growth.
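The Laffer curve’s core claim is easy to sketch in a toy model. In the minimal Python below (all numbers invented), revenue is the tax rate times a taxable base that shrinks as rates rise; revenue is zero at both 0% and 100% and peaks somewhere in between. The shape is the theory; where the peak actually sits in the real world is the contested part.

```python
# A toy Laffer-style revenue model with invented numbers.
# Revenue = rate x base, where the base shrinks as the rate rises
# (less work, less investment, more avoidance).

def revenue(rate, base=100.0, sensitivity=1.0):
    """Tax revenue under a simple assumed behavioral response."""
    return rate * base * (1 - rate) ** sensitivity

for r in (0.0, 0.25, 0.50, 0.75, 1.0):
    print(f"rate {r:.0%}: revenue {revenue(r):5.1f}")
# rate 0%:   revenue 0.0  -- nothing is collected
# rate 50%:  revenue 25.0 -- the peak of this particular toy curve
# rate 100%: revenue 0.0  -- the base has vanished entirely
```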

All of this sounds reasonable in theory. After all, who wouldn’t work harder if they kept more of their paycheck?

The Political Rebranding: Enter “Trickle-Down”

Here’s where economic theory meets political messaging. “Trickle-down economics” isn’t an academic term—it’s essentially a catchphrase, and not a complimentary one. Critics use it to describe supply-side policies when those policies mainly benefit wealthy people and corporations. The idea behind the name: give tax breaks to rich people and big companies, and the benefits will eventually “trickle down” to everyone else through job creation, higher wages, and economic growth.

Here’s the interesting part: no economist actually calls their theory “trickle-down economics.” Even David Stockman, President Reagan’s own budget director, later admitted that “supply-side” was basically a rebranding of “trickle-down” to make tax cuts for the wealthy easier to sell politically. So while they’re not identical concepts, they’re two sides of the same coin.

The Reagan Revolution: Testing the Theory

Ronald Reagan became president in 1981 and implemented the biggest supply-side experiment in U.S. history. He slashed the top tax rate from 70% down to 50%, and eventually to just 28%, arguing this would unleash economic growth that would lift all boats.

The results were genuinely mixed. On one hand, the economy created about 20 million jobs during Reagan’s presidency, unemployment fell from 7.6% to 5.5%, and the economy grew by 26% over eight years. Those aren’t small achievements.

But the picture gets more complicated when you look deeper. The tax cuts didn’t pay for themselves as promised—they reduced government revenue by about 9% initially. Reagan had to backtrack and raise taxes multiple times, in 1982, 1983, 1984, and 1987, to address the mounting deficit problem. Income inequality increased significantly during this period, and the poverty rate at the end of Reagan’s term was essentially the same as when he started. Perhaps most telling, government debt nearly tripled in nominal terms and rose sharply as a share of the economy.

There’s another wrinkle worth mentioning: much of the economic recovery happened because Federal Reserve Chairman Paul Volcker had already broken the back of inflation through tight monetary policy before Reagan’s tax cuts took effect. Disentangling how much credit Reagan’s policies deserve versus Volcker’s groundwork is genuinely difficult.

The Pattern Repeats

The story didn’t end with Reagan. George W. Bush enacted major tax cuts in 2001 and 2003, especially benefiting wealthy Americans. The result? Economic growth remained sluggish, deficits ballooned, and income inequality continued its upward march.

Then there’s Bill Clinton—the plot twist in this story. In 1993, Clinton actually raised taxes on the wealthy, pushing the top rate from 31% back up to 39.6%. Conservative economists predicted economic disaster. Instead, the economy boomed with what was then the longest sustained growth period in U.S. history, creating 22.7 million jobs. Even more remarkably, the government ran a budget surplus for the first time in decades.

Donald Trump’s 2017 tax cuts, focused heavily on corporations, showed minimal wage growth for workers while generating significant stock buybacks that primarily benefited shareholders—and yes, larger deficits. Trump’s subsequent economic policies in his second term have been characterized by such volatility that reasonable long-term assessments remain difficult.

The Kansas Experiment: A Modern Test Case

At the state level, Kansas Governor Sam Brownback implemented one of the boldest modern experiments in supply-side policy between 2012 and 2017, dramatically slashing income taxes especially for businesses. Proponents called it a “real live experiment” that would demonstrate supply-side principles in action.

Instead of unleashing growth, Kansas faced severe budget shortfalls that forced cuts to education and infrastructure. Economic growth actually lagged behind neighboring states that didn’t implement such aggressive cuts, and the state legislature eventually reversed many of the tax reductions. This case has become a frequently cited cautionary tale for critics of supply-side policies.

What Does Half a Century of Data Show?

After 50 years of real-world experiments, researchers finally have enough data to move beyond political rhetoric. A comprehensive study analyzed tax policy changes across 18 developed countries over five decades, looking at what actually happened after major tax cuts for the wealthy.

The findings are remarkably consistent. Tax cuts for the rich reliably increase income inequality—no surprise there. But they show no significant effect on overall economic growth rates and no significant effect on unemployment. Perhaps most damaging to the theory, they don’t “pay for themselves” through increased growth. At best, about one-third of lost revenue gets recovered through expanded economic activity.

In simpler terms: when you cut taxes for wealthy people, wealthy people get wealthier. The promised broader benefits largely fail to materialize. The 2022 World Inequality Report reinforced these conclusions, finding that the world’s richest 10% continue capturing the vast majority of all economic gains, while the bottom half of the population holds just 2% of all wealth.

Why the Theory Doesn’t Match Reality

When you think about it logically, the disconnect makes sense. If you give a tax cut to someone who’s already wealthy, they’ll probably save or invest most of it—they were already buying what they wanted and needed. Their daily spending habits don’t change much. But if you give money to someone who’s struggling to pay bills or afford necessities, they’ll spend it immediately, directly stimulating economic activity.

Economists call this concept “marginal propensity to consume,” and it explains why giving tax breaks to working and middle-class people actually does more to boost the economy than supply-side cuts focused on the wealthy. A dollar in the hands of someone who needs to spend it has more immediate economic impact than a dollar added to an already-substantial investment portfolio.
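A back-of-the-envelope sketch of that multiplier effect, using textbook-style assumed numbers: each dollar received is partly re-spent, the re-spent portion becomes someone else’s income, and so on.

```python
# Summing successive rounds of spending from an initial one-dollar
# injection. The MPC (marginal propensity to consume) values are
# illustrative assumptions, not measured figures.

def total_spending(initial, mpc, rounds=100):
    """Total economic activity generated as spending circulates."""
    total, injection = 0.0, initial
    for _ in range(rounds):
        total += injection
        injection *= mpc  # fraction re-spent in the next round
    return total

# Approaches the textbook multiplier 1 / (1 - MPC):
print(total_spending(1.0, mpc=0.9))  # ~10.0: a struggling household re-spends most of it
print(total_spending(1.0, mpc=0.3))  # ~1.43: a wealthy saver re-spends little
```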

The Bottom Line

After 40-plus years of repeated experiments, the pattern is clear. Supply-side policies and trickle-down approaches consistently increase deficits, widen inequality, and fail to significantly boost overall economic growth or create more jobs than alternative policies. Meanwhile, periods with higher taxes on the wealthy, like the Clinton years, saw strong growth, robust job creation, and balanced budgets.

The Nuance Worth Keeping

None of this means all tax cuts are bad or that high taxes are always good—economics is rarely that simple. The critical questions are: who receives the tax cuts, and what outcomes do you realistically expect? Targeted tax cuts for working families, small businesses, or specific industries facing genuine challenges can serve as effective policy tools. Child tax credits, research and development incentives, or relief for struggling sectors might accomplish specific goals.

But the evidence accumulated over four decades is clear: broad tax cuts focused primarily on the wealthy and large corporations don’t deliver the promised economic benefits for everyone else. The benefits don’t trickle down in any meaningful way.

You’ll keep hearing these arguments for years to come. Politicians will continue promising that tax cuts for businesses and the wealthy will boost the entire economy. Now you know what the actual evidence shows, and you can judge those promises accordingly.



America’s Healthcare Paradox: Why We Pay Double and Get Less

The healthcare debate in America often circles back to a fundamental question: should we move toward a single-payer system, or is our current mixed public-private model the better path forward? It’s a conversation that gets heated quickly, but when you strip away the politics and look at how different systems actually function around the world, some interesting patterns emerge.

What We Mean by Single-Payer

A single-payer healthcare system means that one entity—usually the government or a government-related organization—pays for all covered healthcare services. Doctors and hospitals can still be private (and usually are), but instead of dealing with dozens of different insurance companies, they bill one source. It’s a lot like Medicare, which is why proponents often call it “Medicare-for-all.”

The key thing to understand is that single-payer isn’t necessarily the same as socialized medicine. In Canada’s system, for instance, the government pays the bills, but doctors are largely in the private sector and hospitals are controlled by private boards or regional health authorities rather than being part of the national government. Compare that to the UK’s National Health Service, where many hospitals and clinics are government-owned and many doctors are government employees.

America’s Current Patchwork

The United States operates what might charitably be called a “creative” approach to healthcare—a complex mix of employer-sponsored private insurance, government programs like Medicare, Medicaid, and the VA system, individual marketplace plans, and direct out-of-pocket payments. Government already pays roughly half of total US health spending, but benefits, cost-sharing, and networks vary widely between plans, with little overall coordination. In 2023, private health insurance accounted for 30 percent of total national health expenditures, Medicare covered 21 percent, and Medicaid covered 18 percent. The remainder was paid out of pocket by patients, covered by other public and private programs, or written off by providers as uncollectible.

Here’s where it gets expensive. U.S. health care spending grew 7.5 percent in 2023, reaching $4.9 trillion—about $14,570 per person and 17.6 percent of the nation’s GDP. National health spending for 2024 is expected to have exceeded $5.3 trillion, or roughly 18 percent of GDP, and is projected to reach 20.3 percent of GDP by 2033.
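As a quick sanity check, those 2023 figures hang together arithmetically. The snippet below just re-derives the implied population and GDP from the spending numbers (the inputs are rounded, so the outputs are approximate):

```python
# Cross-checking the 2023 figures quoted above.
total = 4.9e12          # national health spending, dollars
per_person = 14_570     # dollars per person
gdp_share = 0.176       # fraction of GDP

implied_population = total / per_person
implied_gdp = total / gdp_share
print(f"implied population: {implied_population / 1e6:.0f} million")  # ~336 million
print(f"implied GDP: ${implied_gdp / 1e12:.1f} trillion")             # ~$27.8 trillion
```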

For a typical American family, the costs are real and rising. In 2024, the estimated cost of healthcare for a family of four in an employer-sponsored health plan was $32,066.

The European Landscape

Europe doesn’t have one healthcare model—it has several, and they’re all quite different from what we have in the States. Most European countries achieve universal coverage through publicly financed systems, but the details vary considerably.

Countries like the UK, Sweden, and Norway operate what are essentially single-payer systems: the government pays for healthcare out of tax revenue, directly owns most facilities, and employs most clinical and related staff. Then you have countries like Germany and Belgium that use “sickness funds”—non-profit, quasi-public funds that don’t market themselves aggressively, cherry-pick patients, set their own premiums or provider rates, determine benefits, or earn profits for investors. They’re not private insurance companies as we know them in America. Some systems, such as the Netherlands or Switzerland, rely on mandatory, individually purchased private insurance with tight regulation and subsidies, achieving universal coverage through a structured, competitive market.

The French System

France is particularly noted for a successful universal, government-run health insurance system, usually described as single-payer with supplements. All legal residents are automatically covered through the national health insurance program, which is funded by payroll taxes and general taxation.

Most physicians and hospitals are private or nonprofit, not government employees or facilities. Patients generally have free choice of doctors and specialists, though coordinating through a primary care physician improves access and reimbursement. The national insurer pays a large portion of medical costs (often 70–80%), while voluntary private supplemental insurance covers most remaining out-of-pocket expenses such as copays and deductibles.
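As a hypothetical worked example of that split (the 70% statutory rate comes from the range above; the bill amount and the supplemental plan’s coverage rate are assumptions for illustration):

```python
# Hypothetical French-style cost sharing for a routine visit.
bill = 100.00                    # doctor's fee in euros (assumed)
statutory = 0.70 * bill          # paid by the national insurer (rate from the text)
remainder = bill - statutory     # 30.00 left over
supplemental = 0.90 * remainder  # assumed "mutuelle" covers most of the rest
out_of_pocket = remainder - supplemental
print(f"patient pays {out_of_pocket:.2f} of {bill:.2f} euros")  # 3.00 of 100.00
```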

France is known for spending significantly less per capita than the United States. Cost controls come from nationally negotiated fee schedules and drug pricing rather than limits on access.

In 2019, US healthcare spending reached $11,072 per person—over double the $5,505 average across wealthy European nations. Yet despite spending roughly twice as much per person, American health outcomes often lag behind.

The Outcomes Question

This is where the comparison gets uncomfortable for American exceptionalism. The U.S. has the lowest life expectancy at birth among comparable wealthy nations, the highest death rates for avoidable or treatable conditions, and the highest maternal and infant mortality.

In 2023, life expectancy in comparable countries was 82.5 years, which is 4.1 years longer than in the U.S. Japan manages this with healthcare spending at just $5,300 per capita, while Americans spend more than double that amount.

Now, it’s important to note that healthcare systems don’t operate in a vacuum. Life expectancy is influenced by many factors beyond medical care—diet, exercise, smoking, gun violence, drug overdoses, and social determinants of health all play roles. But when you’re spending twice as much and getting worse results, it suggests the system itself might be part of the problem.

Advantages of Single-Payer Systems

The case for single-payer rests on several compelling points. First, administrative simplicity translates to real cost savings. One study found that administrative burden accounted for 27 percent of all U.S. national health expenditures, and estimated the excess administrative cost of the private-insurer system, compared with a single-payer system like Canada’s, at about $471 billion in 2012. That’s more than a quarter of total healthcare spending going to paperwork, billing disputes, and insurance company profit and overhead before any patient receives care.

Universal coverage is another major advantage. In a properly functioning single-payer system, nobody goes bankrupt from medical bills, nobody delays care because they can’t afford it, and nobody loses coverage when they lose their job. The peace of mind that comes with knowing you’re covered regardless of employment status or pre-existing conditions is difficult to quantify but enormously valuable.

Single-payer systems also have significant negotiating power. When one entity is buying drugs and services for an entire nation, pharmaceutical companies and medical device manufacturers have much less leverage to charge whatever they want. This helps explain why prescription drug prices in other countries are often a fraction of prices in the U.S.

Disadvantages and Trade-offs

The critics of single-payer systems aren’t wrong about everything. Wait times are a genuine concern in some systems. When prices and overall budgets are tightly controlled, some countries experience longer waits for selected elective surgeries, imaging, or specialty visits, especially if investment lags demand.

In 2024, Canadian patients experienced a median wait time of 30 weeks between specialty referral and first treatment, up from 27.2 weeks in 2023, with rural areas facing even longer delays. For procedures like elective orthopedic surgery, patients wait an average of 39 weeks in Canada.

However, it’s crucial to understand that long waits are less a product of single-payer financing itself than of how a system is managed and funded; wait times vary significantly across different single-payer and social-insurance systems. Many European countries with universal coverage don’t experience the wait-time problems that plague Canada.

The transition costs are also substantial. Moving from our current system to single-payer would disrupt a massive industry. Over fifteen percent of our economy is related to health care, with half spent by the private sector. Around 160 million Americans currently have insurance through their employers, and transitioning all of them to a government-run plan would be an enormous administrative and political challenge.

A large national payer can be slower to change benefit designs or adopt new payment models; shifting political majorities can affect funding levels and benefit generosity.

Taxes would need to increase significantly to fund such a system, though proponents argue this would be offset by the elimination of insurance premiums, deductibles, and co-pays. It’s essentially a question of whether you’d rather pay through taxes or through premiums—the money has to come from somewhere.

Advantages of America’s Mixed System

Our current system does have some genuine strengths. Innovation thrives in the American healthcare market. The profit motive, for all its flaws, does drive pharmaceutical research and medical device development. American medical schools and research institutions lead the world in many areas of medicine. Academic medical centers and specialty hospitals deliver advanced procedures and complex care that attract patients internationally.

The system also offers more choice for those who can afford it. If you have good insurance, you typically face shorter wait times for elective procedures and can often see specialists without lengthy delays. Americans with high-quality employer-sponsored coverage give their plans relatively high ratings.

Competition between providers can theoretically drive quality improvements, though this effect is often undermined by the complexity of the market and the difficulty consumers face in shopping for healthcare.

Disadvantages of the Current U.S. System

The most glaring problem is simple: the United States remains the only developed country without universal healthcare. Roughly 30 million Americans remain uninsured despite gains under the Affordable Care Act, and many of those gains will soon be lost. Being uninsured in America isn’t just an inconvenience—it can be deadly. People delay care, skip medications, and avoid preventive screenings because of cost concerns.

The administrative complexity is staggering. Doctors spend enormous amounts of time dealing with insurance companies, prior authorizations, and billing disputes. Hospitals employ armies of billing specialists just to navigate the maze of different insurance plans, each with its own rules, formularies, and coverage determinations. U.S. administrative costs account for roughly 25% of all healthcare spending, among the highest in the world.

Medical bankruptcy is uniquely American. Even people with insurance can find themselves financially devastated by serious illness. High deductibles, surprise bills, and out-of-network charges create a minefield of potential financial catastrophe. Studies of U.S. bankruptcy filings over the past two decades have consistently found that medical bills and medical problems are a major factor in a large share of consumer bankruptcies. Recent summaries suggest that roughly two-thirds of US personal bankruptcies involve medical expenses or illness-related income loss, and around 17% of adults with healthcare debt report declaring bankruptcy or losing a home because of that debt.

The system is also profoundly inequitable. Quality of care often depends more on your job, your income, and your zip code than on your medical needs. Out-of-pocket costs per capita have risen compared with previous decades, and the burden falls disproportionately on those least able to afford it.

What Europe Shows Us

The European experience demonstrates that there isn’t one “right” way to achieve universal coverage. The UK’s NHS, Germany’s sickness funds, and France’s hybrid system all manage to cover everyone at roughly half the per-capita cost of American healthcare. Universal Health Coverage exists in all European countries, with healthcare financing almost universally government managed, either directly through taxation or semi-directly through mandated and government-subsidized social health insurance.

They’ve accomplished this through various combinations of centralized negotiation of drug prices, global budgets for hospitals, strong primary care systems that serve as gatekeepers to more expensive specialist care, emphasis on preventive services, and regulation that prevents insurance companies from cherry-picking healthy patients.

Are these systems perfect? No. One common complaint about centralized healthcare systems is long wait lists for non-urgent care, though Americans often wait as long as, or longer than, patients in most universal-coverage countries for routine primary care appointments. Many European countries are wrestling with funding challenges as populations age and expensive new treatments become available. But they’ve solved the fundamental problem that America hasn’t: they ensure everyone has access to healthcare without the risk of financial ruin.

The Path Forward?

The debate over healthcare in America often presents false choices. We don’t have to choose between Canadian-style single-payer and our current system—there are multiple models we could adapt. We could move toward a German-style system with heavily regulated non-profit insurers. We could create a robust public option that competes with private insurance. We could expand Medicare gradually by lowering the eligibility age over time.

What’s clear from international comparisons is that the status quo is unusually expensive and produces mediocre results. We’re paying premium prices for economy outcomes. Whether single-payer is the answer depends partly on your priorities. Do you value universal coverage and cost control more than unlimited choice? Are you willing to accept potentially longer wait times for non-urgent care in exchange for lower costs and universal access? How much do you trust government to manage a program this large?

These aren’t easy questions, and reasonable people disagree. But the evidence from Europe suggests that universal coverage at reasonable cost is achievable—it just requires us to make some choices about what we value most in a healthcare system.



Happy New Year

