Grumpy opinions about everything.



When They Knew: How the Fossil Fuel Industry Buried Its Own Climate Science

The story begins not with climate deniers casting doubt on new science, but with something far more troubling: companies conducting rigorous research, understanding exactly what their products would do to the planet, and then spending decades lying to the public. They treated science as an internal planning tool and then deployed public relations, front groups, and “manufactured doubt” to delay regulation and protect profits.

The Oil Industry’s Own Scientists Saw It Coming

In 1977, a scientist named James Black stood before Exxon’s management committee with an uncomfortable message. According to internal documents later uncovered by investigative journalists, Black told executives that burning fossil fuels was increasing atmospheric carbon dioxide, and that continually rising CO2 levels would increase global temperatures by two to three degrees Celsius—a projection still consistent with today’s scientific consensus. He warned that there was a window of just five to ten years before “hard decisions regarding changes in energy strategies might become critical.”

What happened next is remarkable for its precision. Throughout the late 1970s and 1980s, Exxon assembled what one scientist called “a credible scientific team” to investigate the climate question. They launched ambitious projects, including outfitting a supertanker with custom instruments to measure how oceans absorbed CO2—one of the most pressing scientific questions of the era. A 2023 Harvard study analyzing Exxon’s internal climate projections from 1977 to 2003 found they predicted global warming with what researchers called “shocking skill and accuracy.” Specifically, the company projected 0.20 degrees Celsius of warming per decade, with a margin of error of just 0.04 degrees—a forecast that has proven largely correct.

Exxon wasn’t alone. Shell produced a confidential 1988 report titled “The Greenhouse Effect” that warned of climate changes “larger than any that have occurred over the last 12,000 years,” including destructive floods and mass migrations. The report revealed Shell had been running an internal climate science program since 1981. In one striking document from 1986, Shell predicted that fossil fuel emissions could drive changes “the greatest in recorded history.”

Even industry groups understood what was coming. In 1980, the American Petroleum Institute (API) invited Stanford scientist John Laurmann to brief oil company representatives at its secret “CO2 and Climate Task Force”. His presentation, now public, warned that the effects of continued fossil fuel use would be “barely noticeable” by 2005 but would bring “globally catastrophic effects” by the 2060s. That same year, the API called on governments to triple coal production worldwide, publicly insisting there would be no negative consequences.

The Coal Industry Knew Even Earlier

If anything, the coal industry understood the problem first. A 1966 article in the trade publication Mining Congress Journal by James Garvey, president of Bituminous Coal Research Inc., explicitly discussed how continued coal consumption would increase atmospheric temperatures and cause “vast changes in the climates of the earth.” A combustion engineer from Peabody Coal, now the world’s largest coal company, acknowledged in the same publication that the industry was “buying time” before air pollution regulations would force action.

This 1966 evidence is particularly damning because it predates widespread public awareness by decades. The coal industry didn’t stumble into climate denial—they entered it with full knowledge of what they were obscuring.

Major coal interests also had early awareness that carbon emissions posed regulatory and market risks, particularly for coal‑fired electricity, and they participated in joint industry research and strategy discussions about climate change in the 1980s and 1990s. At the same time, coal associations helped create public campaigns such as the Information Council for the Environment (ICE—even then a disturbing acronym), whose internal planning documents explicitly set an objective to “reposition global warming as theory (not fact)” and to target specific demographic groups with tailored doubt‑based messages.

According to a report from the Union of Concerned Scientists, these efforts often relied on “grassroots” fronts, advertising, and even forged constituent letters to legislators to undermine support for climate policy and to counter the conclusions of mainstream climate science, which even the companies’ own experts did not refute.

What They Said Publicly

The contrast between private knowledge and public statements is stark. While Exxon scientists were building sophisticated climate models internally, the company’s public messaging emphasized uncertainty. In a 1997 speech at the World Petroleum Congress, Exxon CEO Lee Raymond told his audience: “Let’s agree there’s a lot we really don’t know about how climate will change in the 21st century and beyond.” The company’s messaging emphasized uncertainty, framed global warming as just a “theory,” and highlighted supposed flaws in climate models, even as its own scientists were using those models to make precise projections. Exxon and allied trade associations also supported think tanks and advocacy groups that questioned whether human activity was responsible for warming and opposed binding limits on emissions, producing a sharp discrepancy between internal scientific knowledge and external communication.

In 1989, Exxon helped create the Global Climate Coalition—despite its environmental-sounding name, the organization worked to cast doubt on climate science and block clean energy legislation throughout the 1990s. Electric utilities and coal-linked organizations joined this coalition to systematically attack climate scientists and lobby to weaken or stall international agreements like the Kyoto Protocol, despite internal recognition that greenhouse gases were driving warming.

Internal API documents from a 1998 meeting reveal an explicit strategy to “ensure that a majority of the American public… recognizes that significant uncertainties exist in climate science”.

In 1991, Shell produced a film, “Climate of Concern,” which stated that human-driven climate change was happening “at a rate faster than at any time since the end of the ice age” and warned of extreme weather, flooding, famine, and climate refugees. They understood the science but tried to shift the blame.

According to a 2013 Drexel University study, between 2003 and 2010 alone, approximately $558 million was distributed to about 100 climate change denial organizations. Greenpeace reports that Exxon alone spent more than $30 million on think tanks promoting climate denial.

The Tobacco Playbook

The parallels to Big Tobacco’s strategy are not coincidental—they’re intentional. Research by the Center for International Environmental Law uncovered more than 100 documents from the Tobacco Industry Archives showing that oil and tobacco companies not only used the same PR firms and research institutes, but often the same individual researchers. The connection goes back to at least the 1950s.  A report published in Scientific American suggests the oil and tobacco industries both hired the PR firm Hill & Knowlton Inc. as early as 1956.

A 1969 internal memo from R.J. Reynolds Tobacco Company stated plainly: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public.” This became the template. Create uncertainty. Emphasize what isn’t known rather than what is. Fund research that casts doubt. Attack the credibility of independent scientists. Form organizations with scientific-sounding names that exist primarily to muddy the waters.

In one particularly brazen example, a 2015 presentation by Cloud Peak Energy executive Richard Reavey titled “Survival Is Victory: Lessons From the Tobacco Wars,” explicitly coached coal executives on how to apply tobacco industry tactics.

What makes the fossil fuel case particularly egregious is the temporal dimension. These weren’t companies caught off-guard by emerging science. They funded the research. They understood the findings. Their own scientists urged action. A 1978 Exxon memo noted this could be “the kind of opportunity we are looking for to have Exxon technology, management and leadership resources put into the context of a project aimed at benefitting mankind”.

Instead, when oil prices collapsed in the mid-1980s, Exxon pivoted from conducting climate research to funding climate denial. By the late 1980s, according to reporting by InsideClimate News, Exxon “curtailed its carbon dioxide research” and “worked instead at the forefront of climate denial”.

Where We Stand Now

Across the oil, gas, and coal industries, there is not a genuine scientific dispute inside companies but a divergence between what in‑house experts knew and what corporate leaders chose to communicate to the public and policymakers. This divergence mirrors the tobacco industry’s long‑running use of organized doubt. In both arenas, industry actors treated early recognition of harm as a legal and political threat and responded by investing in campaigns to confuse, delay, and reframe the science rather than addressing the risks their own research had identified.

The evidence trail has led to legal action. More than 20 cities, counties, and states have filed lawsuits against fossil fuel companies for damages caused by climate change, arguing the industry knowingly deceived the public. The European Parliament held hearings in 2019 on climate denial by ExxonMobil and other actors. The hashtags #ExxonKnew, #ShellKnew, and #TotalKnew have become rallying cries for accountability.

Senator Sheldon Whitehouse has explicitly compared the fossil fuel industry’s actions to the tobacco racketeering case that ultimately held cigarette makers accountable. As he noted in a Senate speech, the elements of a civil racketeering case are straightforward: defendants conducted an enterprise with a pattern of racketeering activity.

The difference between the tobacco and fossil fuel cases may be one of scale. As researchers Naomi Oreskes and Erik Conway documented in their book Merchants of Doubt, both industries worked to obscure truth for profit. But while tobacco kills individuals, climate change threatens entire ecosystems and future generations.  The time to act is now.

Sources:

Scientific American – “Exxon Knew about Climate Change Almost 40 Years Ago”
https://www.scientificamerican.com/article/exxon-knew-about-climate-change-almost-40-years-ago/
 
Harvard Gazette – Harvard-led analysis finds ExxonMobil internal research accurately predicted climate change
https://news.harvard.edu/gazette/story/2023/01/harvard-led-analysis-finds-exxonmobil-internal-research-accurately-predicted-climate-change/
 
InsideClimate News – Exxon’s Own Research Confirmed Fossil Fuels’ Role in Global Warming Decades Ago
https://insideclimatenews.org/news/02052024/from-the-archive-exxon-research-global-warming/
 
PBS Frontline – Investigation Finds Exxon Ignored Its Own Early Climate Change Warnings
https://www.pbs.org/wgbh/frontline/article/investigation-finds-exxon-ignored-its-own-early-climate-change-warnings/
 
NPR – Exxon climate predictions were accurate decades ago. Still it sowed doubt
https://www.npr.org/2023/01/12/1148376084/exxon-climate-predictions-were-accurate-decades-ago-still-it-sowed-doubt
 
Science (journal) – Assessing ExxonMobil’s global warming projections
https://www.science.org/doi/10.1126/science.abk0063
 
Climate Investigations Center – Shell Climate Documents
https://climateinvestigations.org/shell-oil-climate-documents/
 
The Conversation – What Big Oil knew about climate change, in its own words
https://theconversation.com/what-big-oil-knew-about-climate-change-in-its-own-words-170642
 
ScienceAlert – The Coal Industry Was Well Aware of Climate Change Predictions Over 50 Years Ago
https://www.sciencealert.com/coal-industry-knew-about-climate-change-in-the-60s-damning-revelations-show
 
The Intercept – A Major Coal Company Went Bust. Its Bankruptcy Filing Shows That It Was Funding Climate Change Denialism
https://theintercept.com/2019/05/16/coal-industry-climate-change-denial-cloud-peak-energy/
 
Center for International Environmental Law – Big Oil Denial Playbook Revealed by New Documents
https://www.ciel.org/news/oil-tobacco-denial-playbook/
 
Wikipedia – Tobacco industry playbook
https://en.wikipedia.org/wiki/Tobacco_industry_playbook
 
Scientific American – Tobacco and Oil Industries Used Same Researchers to Sway Public
https://www.scientificamerican.com/article/tobacco-and-oil-industries-used-same-researchers-to-sway-public1/
 
Environmental Health (journal) – The science of spin: targeted strategies to manufacture doubt with detrimental effects on environmental and public health
https://link.springer.com/article/10.1186/s12940-021-00723-0
 
Senator Sheldon Whitehouse – Time to Wake Up: Climate Denial Recalls Tobacco Racketeering
https://www.whitehouse.senate.gov/news/speeches/time-to-wake-up-climate-denial-recalls-tobacco-racketeering/
 
VICE News – Meet the ‘Merchants of Doubt’ Who Sow Confusion about Tobacco Smoke and Climate Change
https://www.vice.com/en/article/meet-the-merchants-of-doubt-who-sow-confusion-about-tobacco-smoke-and-climate-change/
 
Union of Concerned Scientists – The Climate Deception Dossiers
https://www.ucs.org/sites/default/files/attach/2015/07/The-Climate-Deception-Dossiers.pdf
 
 
Illustration generated by author using ChatGPT.

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as it was a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request for a leave of absence from the presidency of Congress in 1777, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship continued to deteriorate to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. But Washington’s command was anything but universally admired, and as the war dragged on with mixed results, critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and whether his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm. Rather than lobbying Congress or forming a counter-faction, he leaned on reputation and restraint, continuing to communicate respectfully with Congress and emphasizing the army’s needs rather than defending his own position. He issued no denunciations or public accusations, handling the matter largely behind the scenes. When he learned that Conway had written a letter to Gates disparaging his leadership, Washington calmly informed Conway that he was aware of it, quoting the offending passage back to him.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court, a method that infuriated Adams, who equated such “nuances” with evasiveness and preferred direct argument, formal memorandums, and hard-edged ultimatums. Sharing lodgings outside Paris only intensified Adams’s resentment as he watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business. Adams complained that nothing would ever get done unless he did it himself, while Franklin privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.”

Their French ally, Foreign Minister Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin and effectively sidelining Adams in formal diplomacy. This deepened Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf. Yet despite their mutual loss of respect, the two ultimately cooperated—often uneasily—in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris, a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom, calling Burr dangerous and untrustworthy at a dinner party. When Burr read accounts of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to withhold his fire; his shot went high, whether by intention or accident. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South. Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations with France during the undeclared “Quasi-War,” Hamilton wanted war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

SOURCES

1. National Archives – Founders Online
https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters
https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams
https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson
https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal
https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel
https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson
https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia
https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers
https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – “Founding Brothers: The Revolutionary Generation”
https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

Truth at a Crossroads: How Trust, Identity, and Information Shape What We Believe

When Oxford Dictionaries declared “post-truth” its word of the year in 2016, it crystallized something many people had been feeling: that we’d entered a strange new era where objective facts seemed less influential in shaping public opinion than appeals to emotion and personal belief. The term exploded in usage that year, becoming shorthand for a troubling shift in how we process information. But have we really entered uncharted territory, or is this just the latest chapter in a very old story?

The short answer is: it’s complicated. The phenomenon itself isn’t new, but the scale and speed at which misinformation spreads certainly is. We are in a new world where the boundary between truth and untruth is blurred, institutions that once arbitrated facts are losing authority, and politics are running on “truthiness” and spectacle more than evidence.

The Psychology of Believing What We Want to Believe

To understand why people increasingly seem to choose sources over facts, we need to dive into how our minds actually work. People now seem to routinely sort themselves into information camps, each with its own “truth,” trusted voices, and shared worldview. But why is this and why does it seem to be getting worse?

Psychologists have spent decades studying something called confirmation bias—essentially, the tendency to seek out information that supports our existing beliefs while avoiding or dismissing information that contradicts them. This isn’t just about being stubborn. Research shows we actively sample more information from sources that align with what we already believe, and the higher our confidence in our initial beliefs, the more biased our information gathering becomes.

But there’s something even more powerful at play called motivated reasoning. While confirmation bias is about seeking information that confirms our beliefs, motivated reasoning is about protecting ideological beliefs by selectively crediting or discrediting facts to fit our identity-defining group’s position. In other words, we don’t just want to be right—we want to belong.

This matters because humans are fundamentally tribal creatures. When we form attachments to groups like political parties or ideological movements, we develop strong motivations to advance the group’s relative status and experience emotions like pride, shame, and anger on behalf of the group. Information processing becomes less about truth-seeking and more about identity protection.

Why Source Trumps Fact

So why do people trust a source they identify with over objective facts that contradict their worldview? Research points to several interconnected reasons.

First, there’s the practical matter of cognitive shortcuts. We’re bombarded with information daily, and people judge the reliability of evidence by using mental shortcuts called heuristics, such as how readily a particular idea comes to mind. If someone we trust says something, that’s an easier mental pathway than laboriously fact-checking every claim. This reliance becomes problematic when “trusted” means ideologically comfortable rather than factually reliable.

Analysts of the post-truth phenomenon also highlight declining trust in traditional “truth tellers” such as mainstream media, scientific institutions, and government agencies. As these institutions lose authority, counter-elites or influencers can present alternative narratives that followers treat as at least as plausible as established facts.

Second, and more importantly, is the issue of identity. When individuals engage in identity-protective thinking, their processing of information more likely guides them to positions that are congruent with their membership in ideologically or culturally defined groups than to ones that reflect the best available scientific evidence. Being wrong about a fact might sting for a moment, but being cast out of your social group could have real consequences for your emotional support, social standing, and sense of self.

Third, there’s a feedback loop at work. In social media, confirmation bias is amplified by filter bubbles and algorithmic editing, which display to individuals only information they’re likely to agree with while excluding opposing views. The more we’re exposed only to sources that confirm our beliefs, the more alien and untrustworthy contradictory information appears.

Interestingly, being smarter doesn’t necessarily protect you from these biases. Some research suggests that people who are adept at using effortful, analytical modes of information processing may actually be even better at fitting their beliefs to their group identities, using their intelligence to construct more sophisticated justifications for what they already want to believe.

The Historical Echo Chamber

Despite the way it feels, this isn’t the first time truth has had competition. History is full of eras when myth, rumor, propaganda, and identity overshadowed facts.

During the Reformation of the 1500s, misinformation spread on both sides of the Catholic-Protestant divide. Pamphlets—many of them highly distorted or outright fabricated—spread rapidly thanks to the printing press. Propaganda became a political weapon. Ordinary people suddenly had access to arguments they weren’t equipped to verify. People were ostracized, and some even executed, based on little more than rumors or lies. We might have hoped for better from religious leaders.

The French Revolution (1780s–1790s) was awash in claims and counterclaims, many of them—if not most—with little basis in fact. Competing newspapers told wildly different stories about the same events. Rumors fueled paranoia, purges, and violence. Truth became secondary to whichever faction controlled the narrative.

Following the Civil War and Reconstruction, the “Lost Cause” narrative became a powerful example of source-driven myth making. Despite historical evidence, generations accepted a version of events shaped by postwar Southern elites, not by facts. Echoes of it still reverberate today, driving much of the opposition to the civil rights movement.

Fast forward to the 1890s, and we see something remarkably familiar. Yellow journalism, characterized by sensationalism and manipulated facts, emerged from the circulation war between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal. These papers used exaggerated and misleading headlines, unverified claims, faked interviews, and pseudoscience to boost sales.

As early as 1898, a publication for the newspaper industry wrote that “the public is becoming heartily sick of fake news and fake extras”—sound familiar?

The 20th-century propaganda states, typified by fascist and communist regimes, perfected source-based truth. The leader or the party defined reality, and disagreement was literally dangerous. In these systems, truth wasn’t debated—it was assigned.

What Makes Now Different?

While the psychological mechanisms and even the tactics aren’t new, several factors make our current moment distinct. The speed and scale of information spread is unprecedented. A false claim can circle the globe in hours. Studies show that people are bombarded by fake information online, leading the distinction between facts and fiction to become increasingly blurred as blogs, social media, and citizen journalism are awarded similar or greater credibility than other information sources.

We’re also experiencing a fragmentation of trusted authorities. Where once a handful of major newspapers and broadcast networks served as gatekeepers, that centralized system has splintered, fundamentally altering how we seek information, what counts as shared authority, and how much we trust institutions.

So Are We in a Post-Truth Era?

Yes and no. The term “post-truth” captures something real about our current moment—the scale, speed, and sophistication of misinformation is unprecedented. But calling it “post-truth” suggests we’ve crossed some bright line into entirely new territory. I’d argue we’re not quite there—but we are navigating a world where truth is sometimes lost in the collision of ancient human tendencies and modern technology.

The data clearly show that confirmation bias, motivated reasoning, and identity-protective cognition are real and powerful forces. Historical evidence demonstrates that propaganda, misinformation, and the choice of tribal loyalty over objective fact have been with us for millennia. What’s changed is our information ecosystem, driven by technology that allows false information to spread faster than ever and by the fragmentation of the shared sources of authority that once helped create common ground.

Perhaps a better framing would be that we’re in an era of “turbo-charged tribal epistemology”—where our very human tendency to trust our tribe’s narrative over contradicting evidence has been supercharged by algorithms that feed us what we want to hear and isolate us from alternative perspectives.  (I wish I could take credit for the term turbo-charged tribal epistemology. I really like it, but I read it somewhere, I just can’t remember where.) 

The question isn’t really whether we’re in a post-truth society. The question is whether we can develop the individual and collective skills to navigate an information environment that exploits every cognitive bias we have. The environment has changed, but the task remains the same: finding ways to establish shared facts despite our deep-seated tendency to believe what we want to believe.

Sources:

The Price Tag Mystery: Why Nobody Really Knows What Healthcare Costs in America

Imagine walking into a store where nothing has a price tag. When you get to the register, the cashier scans your items and tells you the total—but that total is different for every customer. Your neighbor might pay $50 for the same items that cost you $200. The store won’t tell you why, and you won’t find out until after you’ve already “bought” everything.

Welcome to American healthcare, where the simple question “how much does this cost?” has no simple answer.

You might think I’m exaggerating, but the evidence suggests otherwise. Research published in late 2023 by PatientRightsAdvocate.org found that prices for the same medical procedure can vary by more than 10 times within a single hospital depending on which insurance plan you have, and by as much as 33 times across different hospitals. A knee replacement that costs around $23,170 in Baltimore might run $58,193 in New York. An emergency department visit that one facility charges $486 for might cost $3,549 at another hospital for the identical service.

The fundamental problem is that hospitals and doctors don’t have one price for their services. They have dozens, sometimes hundreds, of different prices for the exact same procedure depending on who’s paying. This bizarre system evolved because most healthcare in America isn’t a simple transaction between patient and provider—there’s a third party in the middle called an insurance company, and that changes everything.

The Fiction of Chargemaster Prices

A hospital chargemaster is essentially the hospital’s internal price list—a massive catalog that assigns a dollar amount to every service, supply, test, medication, and procedure the hospital can bill for, from an aspirin to a complex surgery. These listed prices are usually very high and are not what most patients actually pay; instead, the chargemaster functions as a starting point for negotiations with insurers and government programs like Medicare and Medicaid, which typically pay much lower, pre-set rates. What an individual patient ultimately pays depends on several factors layered on top of the chargemaster price. Think of them like the manufacturer’s suggested retail price on a car: technically real, but nobody pays them.

A hospital might list an MRI at $3,000 or a blood test at $500. But then insurance companies come in. They represent thousands or millions of potential patients, which gives them serious bargaining power. They negotiate with hospitals along these lines: “We’ll send you lots of patients, but only if you give us a discount.” So, the hospital agrees to accept much less—maybe they’ll take $1,200 for that $3,000 MRI or $150 for the blood test. This discounted amount is called the “negotiated rate,” and it’s what the insurance company will really pay.

Here’s where it gets messy: every insurance company negotiates its own rates with every hospital. Blue Cross might negotiate one price, Aetna a different price, UnitedHealthcare yet another. The same exact MRI at the same hospital might be $1,200 for one insurer’s customers and $1,800 for another’s. And these negotiated rates have traditionally been kept secret—treated like confidential business information that gives each party a competitive advantage.

The Write-Off Game

What happens to that difference between the chargemaster price and the negotiated rate? The hospital “writes it off.” That’s accounting language for “we accept that we’re not getting paid this money, and we’re taking it off the books.” If the hospital charged $3,000 but agreed to accept $1,200, they write off $1,800. This isn’t lost money in the normal sense—they never expected to collect it in the first place. The chargemaster prices are inflated specifically because everyone knows discounts are coming. Some hospitals now post “discounted cash prices” that are often far below chargemaster and sometimes even below some negotiated rates. These are sometimes, though not always, offered to uninsured patients, generally referred to as self-pay. There can be a catch—some hospitals require lump-sum payment of the total bill to qualify for the lower price.

According to the American Hospital Association, U.S. hospitals collectively plan to write off approximately $760 billion in billed charges in 2025 across all categories of write-offs. That’s not a typo—$760 billion. These write-offs happen in several different situations. The most common are contractual write-offs, where the provider has agreed to accept less than their list price from insurance companies.

Hospitals have far more write-offs than just contractual. They also write off money for charity care (treating patients who can’t afford to pay anything), and they write off bad debt when patients could pay but don’t. They write off small balances that aren’t worth the administrative cost of collection, and they write off amounts related to various billing errors, denied claims, and coverage disputes. Healthcare providers typically adjust about 10 to 12 percent of their gross revenue due to these various write-offs and claim adjustments.

Why Such Wild Variation?

Even with all these negotiated discounts built into the system, the prices still vary enormously. A 2024 study from the Baker Institute found that for emergency department visits, the price charged by hospitals in the top 10% can be three to seven times higher than the hospitals in the bottom 10% for the identical procedure. Research published in Health Affairs Scholar in early 2025 found that even after adjusting for differences between insurers and procedures, the top 25% of prices across all states is 48 percent higher than the bottom 25% of prices for inpatient services.

Several factors drive this variation. Hospitals in areas with less competition can charge more because insurers have fewer alternatives for negotiation. Prestigious hospitals can demand higher rates because insurers want them in their networks to attract customers. Some insurance companies have more bargaining power than others based on their market share. There’s no central authority setting prices—it’s all private negotiations, hospital by hospital, insurer by insurer, procedure by procedure.

For patients, this creates a nightmare scenario. Even if you have insurance, you usually have no idea what you’ll pay until after you’ve received care. Your out-of-pocket costs depend on your deductible (the amount you pay before insurance kicks in), your copay or coinsurance (your share after insurance starts paying), and whether the negotiated rate between your specific insurance and that specific hospital is high or low. Two people with different insurance plans getting the same procedure at the same hospital on the same day can end up with drastically different bills.
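To make the arithmetic concrete, here is a minimal sketch, using entirely hypothetical numbers rather than real rates, of how the same procedure at the same hospital can yield very different out-of-pocket bills depending on the negotiated rate, the remaining deductible, and the coinsurance share (it ignores copays, out-of-pocket maximums, and other plan details):

def out_of_pocket(negotiated_rate, deductible_remaining, coinsurance):
    # Patient pays toward the deductible first, then a coinsurance share of the rest.
    toward_deductible = min(negotiated_rate, deductible_remaining)
    remainder = negotiated_rate - toward_deductible
    return toward_deductible + remainder * coinsurance

# Plan A (hypothetical): lower negotiated rate, most of the deductible already met.
plan_a = out_of_pocket(negotiated_rate=1200, deductible_remaining=200, coinsurance=0.20)

# Plan B (hypothetical): higher negotiated rate, large deductible still remaining.
plan_b = out_of_pocket(negotiated_rate=1800, deductible_remaining=1500, coinsurance=0.30)

print(f"Plan A patient pays ${plan_a:,.2f}")  # $400.00
print(f"Plan B patient pays ${plan_b:,.2f}")  # $1,590.00

Same hospital, same procedure, and the second patient pays nearly four times as much, which is exactly why two people in the same waiting room can walk away with wildly different bills.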

Research using new transparency data confirms this isn’t just anecdotal. A study from early 2025 found that for something as routine as a common office visit, mean prices ranged from $82 with Aetna to $115 with UnitedHealth. Within individual insurance companies, the price of the top 25% of office visits was 20 to 50 percent higher than the bottom 25%, meaning even within one insurer’s network, where you go or where you live makes a huge difference.

The Government Steps In

The federal government finally said “enough” and started requiring transparency. Since 2021, hospitals must post their prices online, including what they’ve negotiated with each insurance company. The Centers for Medicare and Medicaid Services (CMS) strengthened these requirements in 2024, mandating standardized formats and increasing enforcement. Health insurance plans face similar requirements to disclose their negotiated rates.

The theory was straightforward: if patients could see prices ahead of time, they could shop around, which would force prices down through competition. CMS estimated this could save as much as $80 billion by 2025. The idea seemed sound—transparency works in other markets, so why not healthcare?

In practice, it’s been messy. A Government Accountability Office (GAO) report from October 2024 found that while hospitals are posting data, stakeholders like health plans and employers have raised serious concerns about data quality. They’ve encountered inconsistent file formats, extremely complex pricing structures, and data that appears to be incomplete or possibly inaccurate. Even when hospitals post the required information, it’s often so convoluted that comparing prices across facilities becomes nearly impossible for average consumers.

An Office of Inspector General report from November 2024 found that not all selected hospitals were complying with the transparency requirements in the first place. And CMS still doesn’t have robust mechanisms to verify whether the data being posted is accurate and complete. The GAO recommended that CMS assess whether hospital pricing data are sufficiently complete and accurate to be usable, and whether additional enforcement is needed.

Imagine trying to comparison shop when one store lists prices in dollars, another in euros, and a third uses a proprietary currency they invented. That’s roughly where we are with healthcare price data—technically available, but practically unusable for most people trying to make informed decisions.

In 2025, President Trump signed a new executive order aimed at strengthening enforcement of price transparency rules and directing agencies to standardize hospital and insurer pricing information and make it more accessible; this action built on, rather than reduced, the earlier requirements. Hopefully this will improve patients’ ability to learn real costs in advance, but it is my opinion that the industry will continue to resist full and open compliance.

The Limits of Shopping for Healthcare

There’s also a deeper philosophical problem: for healthcare to work like a normal market where price transparency drives competition, patients would need to be able to shop around based on price. That could work for scheduled procedures like knee replacements, colonoscopies, or elective surgeries. You have time to research, compare, and choose.

But it doesn’t work at all when you’re having a heart attack, or your child breaks their arm. You go to the nearest hospital, period. You’re not calling around asking about prices while someone’s having a medical emergency. Even for non-emergencies, choosing based on price assumes equal quality across providers, which isn’t always true and is even harder to assess than price itself.

A study on price transparency tools found mixed results on whether they truly reduce spending. Some research shows modest savings when people use price comparison tools for shoppable services like imaging and lab work. But utilization of these tools remains low, and for many healthcare encounters, price shopping simply isn’t practical or appropriate.

Who Really Knows?

So, who truly understands what things cost in this system? Hospital administrators know what different insurers pay them for specific procedures, but that knowledge is limited to their facility. They don’t necessarily know what other hospitals charge. Insurance company executives know what they’ve negotiated with various hospitals in their network, but they haven’t historically shared meaningful price information with their customers in advance. And they don’t know what their competitors have negotiated.

Patients, caught in the middle, often find out their costs only when they receive a bill weeks after treatment. By that point, the care has been delivered, and the financial damage is done. Recent surveys suggest that surprise medical bills remain a significant problem, with many patients receiving unexpected charges from out-of-network providers they didn’t choose or even know were involved in their care.

The people who are starting to get a comprehensive view are researchers and policymakers analyzing the newly available transparency data. Studies published in 2024 and 2025 using these data have given us unprecedented visibility into pricing patterns and variation. But this is aggregate, statistical knowledge—it helps us understand the system but doesn’t necessarily help individual patients figure out what they’ll pay for a specific procedure.

Where We Stand

The transparency regulations represent a genuine attempt to inject some market discipline into healthcare pricing. Making negotiated rates public breaks down the information asymmetry that has allowed prices to vary so wildly. In theory, if patients and employers can see that Hospital A charges twice what Hospital B does for the same procedure, competitive pressure should push prices toward the lower end.

There’s some early evidence this might be working. A study of children’s hospitals found that price variation for common imaging procedures decreased by about 19 percent between 2023 and 2024, though overall prices continued rising. Whether this trend will continue and expand to other types of facilities remains to be seen.  I am concerned that rather than lowering overall prices it may cause hospitals at the lower end to raise their prices closer to those at the higher end.

Significant obstacles remain. The data quality issues need resolution before the information becomes truly usable. Many patients lack either the time, expertise, or practical ability to shop based on price. And the fundamental structure of American healthcare—with its complex interplay of providers, insurers, pharmacy benefit managers, and government programs—means that even perfect price transparency won’t create a simple, straightforward market.

So, to return to the original question: does anyone truly know the cost of medical care in the United States? In an aggregate sense, researchers and policymakers are starting to understand the patterns thanks to transparency requirements. The data are revealing just how variable and opaque pricing has been. But as a practical matter for individual patients trying to figure out what they’ll pay for needed care, not really. The information is becoming available but remains largely inaccessible or incomprehensible for ordinary people trying to make informed healthcare decisions.

The $760 billion in annual write-offs tells you everything you need to know: the posted prices are largely fictional, the negotiated prices vary wildly, and the system has evolved to be so complex that even the people operating within it struggle to understand the full picture. We’re making progress toward transparency, but we’re a long way from a healthcare system where patients can confidently get the answer to the simple question: “How much will this cost?”

A closing thought: All of this could be solved by development of a single-payer healthcare system such as I proposed in my previous post America’s Healthcare Paradox: Why We Pay Double and Get Less.

Critical Ignoring: The Skill You Didn’t Know You Needed

You’ve probably spent years learning how to pay attention—reading closely, analyzing deeply, and thinking critically. But here’s something nobody taught you in school: in today’s digital world, knowing what not to pay attention to might be just as important as knowing what deserves your focus.

That’s the essence of critical ignoring, a concept developed by researchers Anastasia Kozyreva, Sam Wineburg, Stephan Lewandowsky, and Ralph Hertwig. It’s basically the skill of deliberately and strategically choosing what information to ignore so you can invest your limited attention where it truly matters. I first became aware of this concept just a few weeks ago while reading an article by Christopher Mims in the Wall Street Journal.

Why This Matters Now

Think about your typical day online. You’re bombarded with news alerts, social media posts, clickbait headlines, and outrage-inducing content designed specifically to hijack your attention. Traditional advice tells you to carefully evaluate each source, read critically, and fact-check thoroughly. But here’s the problem: if you’re investing serious mental energy evaluating sources that should have been ignored in the first place, your attention has already been stolen.

The researchers make a crucial observation about how the digital world has changed the game. In the past, information was scarce and we had to seek it out. Now we’re drowning in it, and much of it is deliberately designed to be attention-grabbing through tactics like sparking curiosity, outrage, or anger. Our attention has become the scarce resource that advertisers and content providers are constantly trying to seize and exploit.

Critical ignoring is not sticking your head in the sand or refusing to hear anything that challenges you. Apathy is “I don’t care about any of this.”  Critical ignoring is “I care enough to be selective, so that I can focus on what truly matters.”  Denial is “I refuse to believe or even look at uncomfortable evidence.” Critical ignoring is “I’m not going to invest my time in sources that are clearly unreliable, or in discussions that are going nowhere, so I can better examine serious evidence elsewhere.”

The key distinction is that critical ignoring always serves better judgment, not comfort at any cost.

How To Actually Do It

The researchers outline three practical strategies you can use right away:

Self-Nudging: This is about redesigning your digital environment to remove temptations before they become problems. Think of it as changing your information ecosystem. Instead of relying on willpower alone, you might unsubscribe from inflammatory newsletters, turn off news notifications that stress you out, or use browser extensions to block certain websites during work hours. The idea is to design your environment so you can implement the resolutions you’ve made.

Lateral Reading: This one’s particularly clever. Instead of reading a website from top to bottom the way most of us do, professional fact-checkers open another browser tab and quickly research who’s behind the source. That way, you spend sixty seconds searching for information about the source rather than twenty minutes carefully reading content from a source that turns out to be backed by a lobbying group or known misinformation peddler. The researchers note this is often faster and more effective than trying to critically evaluate the content itself.

Don’t Feed the Trolls: This strategy advises you not to reward malicious actors with your attention.  When you encounter inflammatory comments, deliberately misleading posts, or content clearly designed to provoke anger, the best response is often no response at all. Engaging with trolls or bad-faith content just amplifies it and wastes your mental energy.

I’ll Add Another

Ignore the Influencers: Refuse to click on miracle‑cure headlines or anecdote‑driven threads when you can go directly to professional medical sources, systematic reviews, or guidelines from reliable sources.  Ignore influencers’ health claims unless they clearly cite solid evidence and expertise.

The Bigger Picture

What makes critical ignoring different from just being selective is that it’s strategic and informed. To know what to ignore, you need to understand the landscape first. It’s not about burying your head in the sand—it’s about being intentional with your attention budget.

The traditional approach of “pay careful attention to everything” made sense in a world of vetted textbooks and curated libraries. But on the unvetted internet, that approach often ends up being a colossal waste of time and energy. The admonition to “pay careful attention” is exactly what attention thieves exploit.

Making It Work For You

Start by taking inventory of your information landscape—all the apps, websites, notifications, and sources competing for your attention. Which ones consistently deliver value? Which ones leave you feeling manipulated, angry, or stressed? Practice self-nudging by removing or limiting access to the latter category.

When you encounter a new source making bold claims, resist the urge to dive deep into their content immediately. Instead, spend a minute or two doing lateral reading. Search for “who runs [site name]” or “[organization name] funding.” You’ll be amazed how quickly you can identify whether something deserves your time.

And when you see obvious rage-bait or trolling, practice the “scroll on by” technique. Your attention is valuable—don’t give it away for free to people trying to manipulate you.

Critical ignoring isn’t about being less informed. It’s about being better informed by focusing your limited cognitive resources on reliable sources and meaningful content rather than letting the algorithm’s latest outrage-of-the-day consume your mental bandwidth.

Sources:         

Kozyreva, A., Wineburg, S., Lewandowsky, S., & Hertwig, R. (2023). Critical Ignoring as a Core Competence for Digital Citizens. Current Directions in Psychological Science, 32(1), 81-88. https://journals.sagepub.com/doi/10.1177/09637214221121570

  • Full text also available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7615324/

  • Interview with lead researcher: https://www.mpg.de/19554217/new-digital-competencies-critical-ignoring

Mims, Christopher. “Your Key Survival Skill for 2026: Critical Ignoring.” The Wall Street Journal, January 3, 2026.

American Psychological Association.  https://www.apa.org/news/podcasts/speaking-of-psychology/attention-spans

Lane, S. & Atchley, P. “Human Capacity in the Attention Economy”, American Psychological Association, 2020.

Assessing the Trump-Orwell Comparisons: Warning, Not Prophecy

The comparison between the Trump administration and George Orwell’s dystopian works has recently become one of the most prevalent political metaphors, and one I’ve used myself. Following Trump’s second inauguration in January 2025, sales of 1984 surged once again on Amazon’s bestseller lists, just as they did during his first term.

These comparisons are rhetorically powerful, but their accuracy depends on how literally Orwell is read and how carefully distinctions are drawn between authoritarian warning signs and fully realized totalitarian systems. So how well do they hold up? Let me walk you through the key parallels, the evidence supporting them, and the critical questions we should be asking.

Understanding Orwell’s Core Themes

Before diving into the comparisons, it’s worth revisiting what Orwell was actually warning us about. In 1984, published in 1949, Orwell depicted a totalitarian state where the Party manipulates reality through “Newspeak” (language control), “doublethink” (holding contradictory beliefs), the “memory hole” (historical revision), and constant surveillance by Big Brother. The novel’s famous slogans—”War is Peace, Freedom is Slavery, Ignorance is Strength”—exemplify how the Party inverts the very meaning of words.

Animal Farm, written as an allegory of the Soviet Union under Stalin, traces how a revolutionary movement devolves into dictatorship. The pigs, led by Napoleon, gradually corrupt the founding principles of equality, with Squealer serving as the regime’s propaganda minister who constantly rewrites history and justifies Napoleon’s increasingly authoritarian actions.

The Major Parallels

The most famous early comparison emerged during Trump’s first term when adviser Kellyanne Conway defended false crowd size claims with the phrase “alternative facts.” This triggered the first major 1984 sales spike in 2017. According to multiple sources, critics immediately drew connections to Orwell’s concept of manipulating language to control thought.

In the current administration, commentators have identified several Orwellian language patterns. The administration has restricted use of certain words on government websites—including “female,” “Black,” “gender,” and “sexuality”—reminiscent of how Newspeak aimed to “narrow the range of thought” by eliminating words. An executive order signed on January 29, 2025, titled “Ending Radical Indoctrination in K-12 Schooling,” has been criticized as doublespeak, using the language of educational freedom while actually restricting what can be taught. (“Doublespeak” is not a word Orwell himself used; it emerged later as a blend of his Newspeak and doublethink.)

Perhaps the most concrete parallel involves the systematic deletion of historical content from government websites. The Organization of American Historians condemned the administration’s efforts to “reflect a glorified narrative while suppressing the voices of historically excluded groups”. Specific documented deletions include information about Harriet Tubman, the Tuskegee Airmen (later restored after public outcry), the Enola Gay airplane (accidentally caught in a purge of anything containing “gay”), and nearly 400 books removed from the U.S. Naval Academy library relating to diversity topics. The Smithsonian’s National Museum of American History also removed references to Trump’s impeachments from its “Limits of Presidential Power” exhibit, which critics including Senator Adam Schiff called “Orwellian”.

Trump’s repeated characterization of political opponents as the “enemy from within” and the media as the “enemy of the people” parallels 1984’s Emmanuel Goldstein figure and the ritualized Two Minutes Hate sessions. One analysis suggests Trump leads Americans through “a succession of Two Minute Hates—of freeloading Europeans, prevaricating Panamanians, vile Venezuelans, Black South Africans, corrupt humanitarians, illegal immigrants, and lazy Federal workers”.

Multiple sources document that new White House staff must undergo “loyalty tests” and that some face polygraph examinations. Trump’s statement “I need loyalty. I expect loyalty” echoes 1984’s declaration that “There will be no loyalty, except loyalty to the Party”. Within weeks of his second inauguration, Trump dismissed more than a dozen inspectors general—the internal government watchdogs. According to reports from Politico and Reuters, several have filed lawsuits claiming their removal violated federal law. An executive order titled “Ensuring Accountability for All Agencies” placed previously independent agencies like the SEC and FTC under direct White House supervision.

The Animal Farm Connections

While 1984 gets more attention, Stanford literature professor Alex Woloch argues that Animal Farm might be more relevant because “it traces that sense of a ‘slippery slope'” from democracy to totalitarianism, whereas in 1984 the totalitarian system is already fully established.

There are echoes of Animal Farm in the way populist rhetoric has framed liberals, progressive institutions, and the press as enemies of “the people,” even as power has been consolidated within Trump’s narrow leadership circle. Orwell’s pigs do not abandon revolutionary language; they repurpose it. The “ordinary” supporters are exhorted to endure sacrifices and to direct anger at opposing groups, while political insiders consolidate authority and wealth—echoing the pigs’ gradual move into the farmhouse and adoption of human privileges. Critics argue that Trump’s sustained use of grievance-based populism, even while wielding executive power, fits this pattern symbolically if not structurally.

Other parallels drawn to Animal Farm include the administration’s communication strategy of inverting reality, much as Napoleon’s propaganda minister Squealer did, and the gradual corruption of founding principles beneath revolutionary rhetoric like “drain the swamp”. Critics also point to the scapegoating of political opponents and immigrants, just as Napoleon blamed Snowball for every problem, and to the habit of taking credit for others’ achievements, just as Napoleon did with the other animals’ work. In the novel, Napoleon demands full investigations of Snowball even after discovering he had nothing to do with alleged misdeeds, much as Trump demanded investigations of Hillary Clinton, James Comey, Letitia James, and Jerome Powell while avoiding scrutiny of his own conduct.

As in Orwell’s farm, where the constant invoking of enemies keeps the animals fearful and loyal, the politics of permanent crisis and blame are being used to normalize increasingly aggressive behavior by those in power.

Critical Perspectives and Limitations

These comparisons raise several important concerns that deserve serious consideration. Orwell was writing about actual totalitarian regimes—Stalinist Russia and Nazi Germany—where millions died in purges, gulags, and genocides. The United States in 2026, despite concerning trends, still maintains functioning courts, elections, a free press, and a civil society. Some observers are warning against trivializing real authoritarian regimes by making overstated comparisons.

The Trump administration’s frequent attacks on the press, civil servants, and election administrators do resemble early warning signs Orwell would have recognized—not as proof of totalitarianism, but as a stress test on democratic norms.

Conservative commentators argue that these comparisons are exaggerated partisan attacks that misrepresent Trump’s actions. They point out that some court challenges to administration actions have succeeded, media criticism continues unabated, and political opposition remains robust—none of which would be possible in Orwell’s Oceania. The question becomes whether we’re witnessing isolated, though concerning, actions or a systematic pattern—what Professor Woloch calls the “slippery slope” question.

One opinion piece suggested Trump’s actions resemble the chaotic, rule-breaking fraternity culture of “Animal House” more than the calculated totalitarianism of Orwell’s works—emphasizing bombast and spectacle over systematic control. This view argues that the MAGA movement is more “Blutonian than Orwellian,” driven by emotional appeals and personality rather than systematic thought control.

Where the Comparisons Are Strongest and Weakest

Based on my analysis, the comparisons appear most accurate in several specific areas. The pattern of language manipulation and redefinition—calling restrictions “freedom” and censorship “transparency”—closely mirrors doublespeak. The documented systematic removal of historical content from government sources directly parallels the memory hole. The dismissal of senior officials such as the head of the Bureau of Labor Statistics after an unfavorable jobs report, the wholesale firing of agency inspectors general, and the signal that neutral experts should conform to political expectations all mirror the Orwellian demand for loyalty. The assumption of control over previously independent agencies, and the pressure on courts to allow the administration’s consolidation of power, have parallels in the Party’s total control. Unleashing ICE agents on the general public and excusing the murder of protesters are chillingly similar to the Thought Police and the “vaporizing” of citizens in Oceania. Perhaps most strikingly, Trump’s 2018 statement “What you’re seeing and what you’re reading is not what’s happening” nearly quotes Orwell’s line: “The party told you to reject the evidence of your eyes and ears”.

The comparisons are most strained when they overstate the current reality by suggesting America has already become Oceania, when in fact democratic institutions that were entirely absent in Oceania are still functioning in America. Unlike 1984’s Winston, Americans retain significant ability to resist and organize. There is no single state monopoly over information. State and local governments and civil society remain vigorous and are often hostile to Trump. Additionally, some comparisons conflate authoritarian-sounding rhetoric with actual totalitarian control, which are not equivalent.

Speculation: The Trajectory Question

The pattern of actions I’ve documented—systematic information control, loyalty purges, attacks on institutional independence, and explicit statements about seeking a third term—suggests a consistent direction rather than random actions. If these trends continue unchecked, particularly combined with further erosion of electoral integrity, increased prosecution of political opponents through mechanisms like the “Weaponization Working Group,” greater control over media and information, and weakening of judicial independence, then the slide toward authoritarianism could accelerate. As I write this article, Trump continues to promote what he calls the “Board of Peace,” a proposed international body intended as a U.S.-led alternative to the United Nations. The scholar Alfred McCoy notes that Trump appears to be pursuing what Orwell described: a world divided into three regional blocs under strongman leaders, with weakened international institutions.

However, several factors may counter this trajectory. A strong civil society and activist networks continue to organize opposition. Independent state governments push back against federal overreach, and robust legal challenges have blocked numerous executive actions. The free press continues investigative reporting despite attacks. Congressional resistance still exists—even Senator Booker’s 25-hour speech on constitutional abuse entered the Congressional Record as a permanent historical marker.

My speculation is that the most likely outcome is neither complete Orwellian dystopia nor a comfortable return to democratic norms, but rather what political scientists call “competitive authoritarianism” or “illiberal democracy”—where democratic forms persist but are increasingly hollowed out, opposition exists but faces systematic disadvantages, and truth becomes increasingly contested. The key question isn’t whether we’ll replicate 1984 exactly, but whether enough democratic safeguards will hold to prevent sliding further into authoritarianism. One observer standing before a giant banner of Trump’s face in Washington noted that “Orwell’s world isn’t just fiction. It’s a mirror—reflecting what happens when power faces no resistance, when truth bends to loyalty, and when silence becomes the safest response”.

The Bottom Line

The Orwell comparisons aren’t perfect historical analogies, but they’re not baseless partisan rhetoric either. They identify genuine patterns of authoritarian behavior that merit serious attention—the manipulation of language to distort reality, the systematic rewriting of historical narratives, the demand for personal loyalty over institutional integrity, and the rejection of shared factual reality. I am also concerned about the increasing use of Nazi-inspired phrases and themes by members of the Trump administration, most recently Kristi Noem’s use of the phrase “one of us-all of you”. While not a formal written Nazi policy, it reflects the Nazis’ practice when dealing with partisan attacks in occupied countries and can only be viewed as a threat of violence against American citizens.

Whether these patterns represent isolated troubling actions or the beginnings of systematic democratic erosion remains the crucial—and still open—question. As Orwell himself noted, he didn’t write to predict the future but to prevent it. The value of these comparisons may ultimately lie not in their precision as historical parallels, but in their power to alert citizens to concerning trends before they become irreversible.

Key Sources

  • Organization of American Historians statements on historical revisionism
  • Politico and Reuters reporting on inspector general firings
  • The Washington Post and Axios on executive order impacts
  • Stanford Professor Alex Woloch’s analysis in The World (https://theworld.org/stories/2017/01/25/people-are-saying-trumps-government-orwellian-what-does-actually-mean)
  • World Press Institute analysis (https://worldpressinstitute.org/the-orwell-effect-how-2025-america-felt-like-198/)
  • Adam Gopnik, “Orwell’s ‘1984’ and Trump’s America,” The New Yorker, Jan. 26, 2017.
  • “Trump’s America: Rethinking 1984 and Brave New World,” Monthly Review, Sept. 7, 2025.
  • “False or misleading statements by Donald Trump,” Wikipedia (overview of documented falsehoods).
  • “Trump’s Efforts to Control Information Echo an Authoritarian Playbook,” The New York Times, Aug. 3, 2025.
  • “Trump’s 7 most authoritarian moves so far,” CNN Politics, Aug. 13, 2025.
  • “The Orwellian echoes in Trump’s push for ‘Americanism’ at the Smithsonian,” The Conversation, Aug. 20, 2025.
  • “Everything Is Content for the ‘Clicktatorship’,” WIRED, Jan. 13, 2026.
  • “’Animal Farm’ Perfectly Describes Life in the Era of Donald Trump,” Observer, May 8, 2017.
  • “Ditch the ‘Animal Farm’ Mentality in Resisting Trump Policies,” YES! Magazine, May 8, 2017.

Full disclosure: I recently bought a hat that says “Make Orwell Fiction Again”.

The Strange Tale of Spontaneous Human Combustion

Did you ever run into an idea so strange that you can’t quite understand how anyone ever took it seriously? Recently, while reading about historical curiosities in Pseudoscience by Kang and Pedersen, I was reminded of one of the most enduring examples: spontaneous human combustion.

The classic image is always the same. Someone enters a room and finds a small pile of ashes where a person once sat. The body is nearly destroyed, yet the chair beneath it is barely scorched and the rest of the room looks strangely untouched. For centuries, this baffling scene was explained by a dramatic idea—that a person could suddenly burst into flames from the inside, without any external fire at all.

It sounds like something lifted straight from a gothic novel, but belief in spontaneous human combustion stretches back to at least the seventeenth century and reached its peak in the Victorian era. To understand why it gained such traction, it helps to look at the social attitudes of the time, the cases that convinced people it was real, and what modern forensic science eventually uncovered.

Much of the early belief rested on moral judgment rather than evidence. In the nineteenth century, spontaneous human combustion was widely accepted as a kind of divine punishment. Many of the alleged victims were described as heavy drinkers, often elderly, overweight, or socially isolated, and women were frequently overrepresented in the reports. To Victorian minds, this pattern felt meaningful. Alcohol was flammable, after all, and it seemed reasonable—at least then—to assume that a body saturated with spirits might somehow ignite. Sensational newspaper reporting amplified the mystery, presenting lurid details while glossing over inconvenient facts.

The idea gained intellectual credibility in 1746 when Paul Rolli, a Fellow of the Royal Society, formally used the term “spontaneous human combustion” while describing the death of Countess Cornelia Zangari Bandi. The involvement of a respected scientific figure gave the concept legitimacy that lingered for generations.

Several cases became canonical. Countess Bandi’s death in 1731 was described as leaving little more than ashes and partially intact legs, still clothed in stockings. In 1966, John Irving Bentley of Pennsylvania was found almost completely burned except for one leg, with his pipe discovered intact nearby. Mary Reeser, known as the “Cinder Woman,” died in Florida in 1951, leaving behind melted fat embedded in the rug near where she had been sitting. As recently as 2010, an Irish coroner ruled that spontaneous human combustion caused the death of Michael Faherty, whose body was found badly burned near a fireplace in a room that showed little fire damage. Over roughly three centuries, about two hundred such cases have been cited worldwide.

Believers proposed explanations that ranged from the scientific-sounding to the overtly theological. Alcoholism was the most popular theory, with some physicians genuinely arguing that chronic drinking made the human body combustible. Earlier medical thinking leaned on imbalances of bodily humors, while later writers speculated about unknown chemical reactions producing internal heat. Religious interpretations framed these deaths as punishment for sin. Even in modern times, a few proponents have suggested that acetone buildup in people with alcoholism, diabetes, or extreme diets could somehow trigger combustion.

The idea was so culturally embedded that Charles Dickens famously killed off the alcoholic character Mr. Krook by spontaneous combustion in Bleak House. When critics objected, Dickens defended the plot choice by citing what he believed were credible historical and medical sources.

The illusion of the supernatural persisted because the circumstances were almost perfectly misleading. Victims were typically alone, elderly, or physically impaired, unable to respond quickly to a smoldering fire. The localized damage looked impossible to the untrained eye. Potential ignition sources were often destroyed in the fire itself. And dramatic storytelling filled in the gaps left by incomplete investigations.

What actually happens in these cases is far less mystical and far more unsettling. Modern forensic science points to an explanation known as the “wick effect.” In this scenario, there is always an external ignition source—often a cigarette, candle, lamp, or fireplace ember. Once clothing catches fire, heat melts the person’s body fat. That liquefied fat soaks into the clothing, which then behaves like a candle wick. The fire burns slowly and steadily, sometimes for hours, consuming much of the body while leaving nearby objects relatively unscathed.

This effect has been demonstrated experimentally. In the 1960s, researchers at Leeds University showed that cloth soaked in human fat could sustain a slow burn for extended periods once ignited. In 1998, forensic scientist John de Haan famously replicated the effect for the BBC by burning a pig carcass wrapped in a blanket. The result closely matched classic spontaneous combustion scenes: severe destruction of the body, with extremities left behind and limited damage to the surrounding room.

The reason these fires don’t usually engulf the entire space is simple physics. Flames rise more easily than they spread sideways, and the heat output of a wick-effect fire is relatively localized. It’s similar to standing near a campfire—you can be close without catching fire yourself.

Investigators Joe Nickell and John F. Fischer examined dozens of historical cases and found that every one involved a plausible ignition source, details that earlier accounts often ignored or downplayed. When these factors are restored to the narrative, the mystery largely disappears.

As science writer Benjamin Radford has pointed out, if spontaneous human combustion were truly spontaneous, we would expect it to occur randomly and frequently, in public places as well as private ones. Instead, it consistently appears in situations involving isolation and an external heat source.

The bottom line is straightforward. There is no credible scientific evidence that humans can burst into flames without an external ignition source. What has been labeled spontaneous human combustion is better understood as a tragic combination of accidental fire and the wick effect. The myth endured because it blended moral judgment, fear, and incomplete science into a compelling story. Today, forensic investigation has replaced superstition with explanation, even if the results remain unsettling.

Spontaneous human combustion survives as a reminder of how easily mystery fills the space where evidence is thin—and how patiently applied science eventually closes that gap.


Sources and Further Reading

Peer-reviewed forensic and medical analyses are available through the National Center for Biotechnology Information, including “So-called Spontaneous Human Combustion” in the Journal of Forensic Sciences (https://pubmed.ncbi.nlm.nih.gov/21392004/) and Koljonen and Kluger’s 2012 review, “Spontaneous human combustion in the light of the 21st century,” published in the Journal of Burn Care & Research (https://pubmed.ncbi.nlm.nih.gov/22269823/).

General scientific and historical overviews can be found in Encyclopædia Britannica’s article “Is Spontaneous Human Combustion Real?” (https://www.britannica.com/story/is-spontaneous-human-combustion-real), Scientific American’s discussion of the wick effect (https://www.scientificamerican.com/blog/cocktail-party-physics/burn-baby-burn-understanding-the-wick-effect/), and Live Science’s summary of facts and theories (https://www.livescience.com/42080-spontaneous-human-combustion.html).

Accessible explanatory pieces are also available from HowStuffWorks (https://science.howstuffworks.com/science-vs-myth/unexplained-phenomena/shc.htm), History.com (https://www.history.com/articles/is-spontaneous-human-combustion-real), Mental Floss (https://www.mentalfloss.com/article/22236/quick-7-seven-cases-spontaneous-human-combustion), and All That’s Interesting (https://allthatsinteresting.com/spontaneous-human-combustion). Wikipedia’s entries on spontaneous human combustion and the wick effect provide comprehensive background and references at https://en.wikipedia.org/wiki/Spontaneous_human_combustion and https://en.wikipedia.org/wiki/Wick_effect.

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.”  To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” in which status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term has become a cultural flashpoint. Here’s the thing: real progress requires acknowledging that both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

 If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.

Bull Markets, Bear Markets and the Story Behind Wall Street’s Most Famous Animals

If you’ve ever caught a business news segment, you’ve probably heard anchors throwing around terms like “bull market” and “bear market” as if everyone just naturally knows what they mean. But beyond the basic idea that one’s good and one’s bad, the real mechanics of these market conditions—and how they got their animal nicknames—are pretty interesting.

How the Stock Market Works (The Quick Version)

Before we dive into bulls and bears, let’s cover the basics. The stock market is essentially a place where people buy and sell ownership stakes in companies. When you buy a share of stock, you’re purchasing a tiny piece of that company. The price of that share goes up or down based on how many people want to buy it versus how many want to sell it—classic supply and demand.

Companies sell shares to raise money for growth, and investors buy them hoping the company will do well and the stock price will increase. The overall “market” is tracked through indexes like the S&P 500 or Dow Jones Industrial Average, which measure how a group of major companies are performing. When most stocks are rising, we say the market is up; when most are falling, the market is down.

What Bull and Bear Markets Actually Mean

A bull market refers to a period when stock prices are rising, typically defined as a 20% or more increase from recent lows. During bull markets, investors are optimistic, companies are generally doing well, and people are more willing to take risks with their money. Bull markets usually are driven by a strong economy with low inflation and optimistic investors. Think of the economic boom of the late 1990s or the recovery after the 2008 financial crisis—those were classic bull markets where prices kept climbing for years.

A bear market is essentially the opposite: a general decline in the stock market over time, usually defined as a price drop of 20% or more sustained over at least a two-month period. During bear markets, investors get nervous, sell off their holdings, and pessimism spreads. A decline of 10% to 20% is classified as a correction, and since every bear market has to pass through that range on the way down, a correction always precedes a bear market. The Great Depression, the 2008 financial crisis, and the COVID-19 pandemic’s initial impact all triggered bear markets.
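Because these labels are just arithmetic on price levels, the rule of thumb is easy to sketch in code. The short Python example below is my own illustration, not an industry-standard algorithm: the 10% and 20% thresholds come from the definitions above, but the way I track running highs and lows, and the made-up sample prices, are simplifying assumptions (real classifications also consider duration and date regimes from the prior peak or trough).

# Classify a price series into rough market regimes using the rule-of-thumb thresholds:
# a rise of 20%+ off a recent low = bull, a drop of 20%+ from a recent high = bear,
# a drop of 10-20% = correction. Thresholds per the text; everything else is illustrative.

def classify(prices: list[float]) -> list[str]:
    labels = []
    peak = trough = prices[0]
    for price in prices:
        peak = max(peak, price)
        trough = min(trough, price)
        drawdown = (peak - price) / peak        # fall from the running high
        recovery = (price - trough) / trough    # rise off the running low
        if drawdown >= 0.20:
            labels.append("bear")
            trough = price   # reset the low once we are in bear territory
        elif drawdown >= 0.10:
            labels.append("correction")
        elif recovery >= 0.20:
            labels.append("bull")
            peak = price     # reset the high once a new bull leg starts
        else:
            labels.append("neutral")
    return labels

if __name__ == "__main__":
    sample = [100, 112, 125, 118, 106, 99, 96, 104, 118, 130]  # made-up index closes
    for close, label in zip(sample, classify(sample)):
        print(f"{close:>6.1f}  {label}")

Feed it a list of index closes and it tags each one as bull, bear, correction, or neutral; the point is just to see the 10% and 20% rules in action, not to trade on it.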

The Colorful History Behind the Terms

Now here’s where things get interesting. These terms didn’t come from some modern marketing genius—they trace back to 18th century London, and the story involves everything from old proverbs to violent animal fights to one of history’s biggest financial scandals.

The “bear” term came first. Etymologists point to an old proverb, warning that it is not wise “to sell the bear’s skin before one has caught the bear”. This saying was about the foolishness of counting on something before you actually have it. By the early 1700s, traders who engaged in short selling (betting that prices would fall) were called “bear-skin jobbers” because they sold a bear’s skin—the shares—before catching the bear. The term eventually got shortened to just “bears.”

The real watershed moment came with the South Sea Bubble of 1720. The South Sea Company was a British joint stock company founded by an act of Parliament in 1711, and in 1720, the company assumed most of the British national debt and convinced investors to give up state annuities for company stock sold at a very high premium. When everything collapsed, share prices dropped dramatically, starting a “bear market,” and the story became the topic of many literary works and went down in history as an infamous metaphor.

As for “bull,” the origins are a bit murkier. The first known instance of the market term “bull” popped up in 1714, shortly after the “bear” term emerged. Most historians think it arose as a natural counterpoint to “bear,” possibly influenced by bull-baiting and bear-baiting, two animal fighting sports popular at the time—though I should note that’s somewhat speculative.

There’s also a popular explanation about how the animals attack: bears swipe downward with their paws while bulls thrust upward with their horns, which nicely mirrors market movements. While that’s a helpful memory device, it’s probably more of a convenient coincidence or a retroactive description than the actual origin. The term “bull” originally meant a speculative purchase in the expectation that stock prices would rise, and was later applied to the person making such purchases.

Why This Still Matters

These metaphors have stuck around for three centuries because they work. They’re visceral and easy to remember—you can picture a charging bull or a hibernating bear without needing an economics degree. They also capture something real about market psychology: the aggressive optimism of bulls pushing prices up versus the defensive pessimism of bears hunkering down.

Understanding these terms helps you follow financial news and, more importantly, recognize when markets are shifting. Knowing you’re in a bull market might make you less surprised by rising prices, while recognizing a bear market can help you avoid panic-selling when things look grim.

Bull and bear markets are among those terms I’ve heard for years without ever knowing where they came from. This article is my attempt to explain it to myself.

