Grumpy opinions about everything.


Understanding Critical Race Theory: What It Is—and Why It Divides America

When I first started hearing debates about Critical Race Theory, I thought, “These people can’t possibly be talking about the same thing.” There seemed to be no common ground—even the words they were using seemed to have different meanings.

Critical Race Theory (CRT) has become one of the most contested intellectual concepts in contemporary American culture. Originally developed in law schools during the 1970s and 1980s, CRT has evolved into a broad analytical method of examining how race and racism operate in society. Understanding its origins, core principles, and the political debates surrounding it requires examining both its academic foundations and its journey into public consciousness.

Origins and Early Development

Legal scholars who were dissatisfied with the slow pace of racial progress following the Civil Rights Movement laid the groundwork for CRT. The early figures included Derrick Bell, often considered the father of CRT, along with Alan Freeman, Richard Delgado, Kimberlé Crenshaw, and Cheryl Harris. These scholars were frustrated that despite landmark legislation like the Civil Rights Act of 1964 and the Voting Rights Act of 1965, racial inequality persisted across American institutions.

The intellectual roots of CRT can be traced to Critical Legal Studies, a movement that challenged traditional legal scholarship’s claims of objectivity and neutrality. However, CRT scholars felt that Critical Legal Studies failed to adequately address race and racism. They drew inspiration from various sources, including the work of civil rights lawyers like Charles Hamilton Houston, sociological insights about institutional racism, and postmodern critiques of knowledge and power.

Derrick Bell’s groundbreaking work in the 1970s laid a crucial foundation. His “interest convergence” theory, presented in his analysis of Brown v. Board of Education, argued that advances in civil rights occur only when they align with white interests. This insight became central to CRT’s understanding of how racial progress unfolds in American society.

Core Elements and Principles

Critical Race Theory encompasses several key tenets that distinguish it from other approaches to studying race and racism.

First, CRT posits that race is not biologically real but a social invention used to justify unequal treatment. It also holds that racism is not merely individual prejudice but a systemic feature of American society, embedded in legal, political, and social institutions. This “structural racism” perspective emphasizes how seemingly neutral policies and practices can perpetuate racial inequality.

Second, CRT challenges the traditional civil rights approach that emphasizes color-blindness and incremental reform. Instead, CRT scholars argue that color-blind approaches often mask and perpetuate racial inequities. They advocate for race-conscious policies and a more aggressive approach to dismantling systemic racism.

Third, CRT emphasizes the importance of lived experience in the form of storytelling and narrative. Scholars use personal narratives, historical accounts, and counter-stories to challenge dominant narratives about race and racism. This methodological approach reflects CRT’s belief that experiential knowledge from communities of color provides crucial insights often overlooked by traditional scholarship.

Fourth, CRT introduces the concept of intersectionality, a term coined by legal scholar Kimberlé Crenshaw. This framework examines how multiple forms of identity and oppression—including race, gender, class, and sexuality—intersect and compound each other’s effects.

Finally, CRT is explicitly activist-oriented with a goal of creating new norms of interracial interaction. Unlike purely descriptive academic theories, CRT aims to understand racism in order to eliminate it. This commitment to social transformation distinguishes CRT from more traditional academic approaches.

Evolution and Expansion

Since its origins in legal studies, CRT has expanded into numerous disciplines including education, sociology, political science, and ethnic studies. In education, scholars like Gloria Ladson-Billings and William Tate applied CRT frameworks to understand racial disparities in schooling. This educational application of CRT examines how school policies, curriculum, and practices contribute to achievement gaps and educational inequality.

Conservative Perspectives

Conservative critics of CRT raise several concerns about the theory and its applications. They argue that CRT’s emphasis on systemic racism is overly deterministic and fails to account for individual differences and the significant progress made in racial equality since the Civil Rights era. Many conservatives contend that CRT promotes a victim mentality that undermines personal responsibility and achievement.

From this perspective, CRT’s race-conscious approach is seen as divisive and potentially counterproductive. Critics argue that emphasizing racial differences rather than common humanity perpetuates division and resentment. They often prefer color-blind approaches that treat all individuals equally regardless of race.

Conservative critics also express concern about CRT’s application in educational settings, arguing that it introduces inappropriate political content into classrooms and may cause students to feel guilt or shame based on their racial identity. Some argue that CRT-influenced curricula amount to indoctrination rather than education.

Additionally, some conservatives view CRT as fundamentally un-American, arguing that its critique of American institutions and emphasis on systemic oppression undermines national unity and patriotism. They contend that CRT presents an overly negative view of American history and society.

Some conservatives go further, calling CRT a form of “anti-American radicalism.” They believe it rejects Enlightenment values—reason, objectivity, and universal rights—in favor of ideology and emotion. Others criticize CRT’s reliance on narrative and lived experience, arguing that it substitutes storytelling for empirical evidence.

Liberal Perspectives

Supporters of CRT argue that it provides essential tools for understanding persistent racial inequalities that other approaches fail to explain adequately. They contend that CRT’s focus on systemic racism accurately describes how racial disparities continue despite formal legal equality.

To them, CRT isn’t about blaming individuals; it’s about recognizing how systems work. Advocates say that color-blind policies often perpetuate inequality because they ignore how race has historically shaped opportunity. They see CRT as empowering marginalized communities to tell their stories and as pushing America closer to its own ideals of justice and equality.

Liberal and progressive thinkers see CRT as a reality check—a necessary tool for understanding and dismantling systemic racism. They argue that laws and policies that seem neutral can still produce racially unequal outcomes—for example, disparities in school funding, or redlining in housing (the practice of denying loans or insurance based on neighborhood rather than individual qualifications).

From this perspective, CRT’s race-conscious approach is necessary because color-blind policies have proven insufficient to address entrenched racial inequities. Supporters argue that acknowledging and directly confronting racism is more effective than pretending race doesn’t matter.

Liberal defenders of CRT emphasize its scholarly rigor and empirical grounding, arguing that criticism often mischaracterizes or oversimplifies the theory. They point out that CRT is primarily an analytical framework used by scholars and graduate students, not a curriculum taught to elementary school children, as some critics suggest. Progressive educators also note that much of what critics call “CRT in schools” is really teaching about historical facts—slavery, segregation, civil-rights struggles—not law-school theory. They argue that banning CRT is less about protecting students and more about suppressing uncomfortable conversations about race and history.

Supporters also argue that CRT’s emphasis on storytelling and lived experience provides valuable perspectives that have been historically marginalized in academic discourse. They see this as democratizing knowledge production rather than abandoning scholarly standards.

Furthermore, many on the left argue that attacks on CRT represent attempts to silence discussions of racism and maintain the status quo. They view criticism of CRT as part of a broader backlash against racial justice efforts.

Why It Matters

You don’t have to buy every part of CRT to see why it struck a nerve. It forces us to ask uncomfortable but important questions: Why do some inequalities persist even after laws change? How do institutions carry the weight of history?

Whether you agree or disagree with CRT, it’s hard to deny that it has shaped how Americans talk about race. The theory challenges us to look beyond personal prejudice and ask how systems distribute power and privilege. Its critics, in turn, remind us that any theory of justice must preserve individual rights and shared civic values.

The real challenge may be learning to hold both ideas at once: that racism can be systemic, and that individuals should still be treated as individuals. CRT’s greatest value—and its greatest controversy—comes from forcing that tension into the open.

Sources:

JSTOR Daily. “What Is Critical Race Theory?” https://daily.jstor.org/what-is-critical-race-theory/ (Accessed December 3, 2025)

Harvard Law Review Blog. “Derrick Bell’s Interest Convergence and the Permanence of Racism: A Reflection on Resistance.” https://harvardlawreview.org/blog/2020/08/derrick-bells-interest-convergence-and-the-permanence-of-racism-a-reflection-on-resistance/ (March 24, 2023)

Bell, Derrick A., Jr. “Brown v. Board of Education and the Interest-Convergence Dilemma.” Harvard Law Review, Vol. 93, No. 3 (January 1980), pp. 518-533.

Columbia Law School. “Kimberlé Crenshaw on Intersectionality, More than Two Decades Later.” https://www.law.columbia.edu/news/archive/kimberle-crenshaw-intersectionality-more-two-decades-later

Crenshaw, Kimberlé. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics.” 1989.

Britannica. “Richard Delgado | American legal scholar.” https://www.britannica.com/biography/Richard-Delgado

Wikipedia. “Critical Race Theory.” https://en.wikipedia.org/wiki/Critical_race_theory (Updated December 31, 2025)

MTSU First Amendment Encyclopedia. “Critical Race Theory.” https://www.mtsu.edu/first-amendment/article/1254/critical-race-theory (July 10, 2024)

Delgado, Richard and Jean Stefancic. “Critical Race Theory: An Introduction.” New York University Press, 2001 (2nd edition 2012, 3rd edition 2018).

Teachers College Press. “Critical Race Theory in Education.” https://www.tcpress.com/critical-race-theory-in-education-9780807765838

American Bar Association. “A Lesson on Critical Race Theory.” https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/civil-rights-reimagining-policing/a-lesson-on-critical-race-theory/

NAACP Legal Defense and Educational Fund. “What is Critical Race Theory, Anyway? | FAQs.” https://www.naacpldf.org/critical-race-theory-faq/ (May 6, 2025)

The illustration was generated by the author using Midjourney.

Who Will Cover City Hall Now? Democracy in the Age of News Deserts

Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter. But I should mean that every man should receive those papers and be capable of reading them. —Thomas Jefferson


I originally posted this article about a year and a half ago. I was concerned about the future of newspapers then and I’m even more concerned now. I’ve updated my original post to reflect recent losses of newspapers.
When I was growing up in Charleston, WV, in the 1950s and early 1960s, we had two daily newspapers. The Gazette was delivered in the morning and the Daily Mail was delivered in the afternoon. One of my first jobs as a boy was delivering The Gazette. It worked out to be about 50 cents an hour, but I was glad to have the job. (It was good money at the time.)
Ostensibly, the Gazette was a Democratic newspaper, and the Daily Mail was a Republican one. However, given the politics of the day there was not a significant difference between the two, and most people subscribed to both.
There weren’t a lot of options for news at the time. Of course, there were no 24-hour news channels. National news on the three networks ran about 30 minutes an evening, with local news at about 15 minutes. By the late 1960s national news had increased to 60 minutes and most local news to about 30 minutes. Even so, given the limited airtime on local stations, most of the broadcast was taken up with weather, sports, and human interest stories, leaving little time to expand on hard news.
We depended on our newspapers for news of our cities, counties, and states. And the newspapers delivered the news we needed. Almost everyone subscribed to and read the local papers. They kept us informed about our local politicians and government and provided local insight on national events. They were also our source for information about births, deaths, marriages, high school graduations and everything we wanted to know about our community.
In the 21st century there are many more supposed news options. There are 24-hour news networks as I’ve talked about in a previous post.  And of course, there are Instagram, Facebook, X and the other online entities that claim to provide news.
There has been one positive development in television news. Local news, at least in Charleston, has expanded to two hours most evenings. There is some repetition between the first and second hour, and it is still heavily weighted to sports, weather, and human interest, but there is some increased coverage of local hard news. However, this is somewhat akin to reading only the headline and first paragraph of a newspaper story. It doesn’t provide in-depth coverage, but it is an improvement over what is otherwise available to those who don’t watch a dedicated news show. Hopefully, it motivates people to find out more about events that concern them.
The situation has become dire in recent months. The crisis that was building when I first wrote about newspapers has now reached catastrophic proportions. On December 31, 2025, the Atlanta Journal-Constitution published its last print edition after 157 years, making Atlanta the largest U.S. metro area without a printed daily newspaper. Think about that—a major American city, home to over six million people in its metro area, now has no physical newspaper you can hold in your hands.
In February 2025, the Newark Star-Ledger, New Jersey’s largest newspaper, stopped printing after nearly 200 years. The Jersey Journal, which had served Hudson County for 157 years, closed entirely. These weren’t small-town weeklies—these were major metropolitan dailies that once served millions of readers. The Pittsburgh Post-Gazette, founded in 1786, has announced that it will cease publication effective May 3, 2026.
Even more alarming is what just happened at the Washington Post. Just days ago, in early February 2026, owner Jeff Bezos ordered the elimination of roughly one-third of the newspaper’s workforce—approximately 300 journalists. The Post closed its entire sports section, shuttered its books department, gutted its foreign bureaus and metro desk, and canceled its flagship daily podcast. This is the same newspaper that brought down a presidency with its Watergate coverage and has won dozens of Pulitzer Prizes. The Post’s metro desk, which once had 40 reporters covering the nation’s capital, now has just a dozen. All the paper’s photojournalists were laid off. The entire Middle East team was eliminated.
Former Washington Post executive editor Martin Baron, who led the paper from 2013 to 2021, called the cuts devastating and blamed poor management decisions, including Bezos’s decision to spike the newspaper’s presidential endorsement in 2024, which led to the cancellation of hundreds of thousands of subscriptions. The Post lost an estimated $100 million in 2024.
The numbers tell a grim story. Since 2005, more than 3,200 newspapers have closed in the United States—that’s over one-third of all the newspapers that existed just twenty years ago. Newspapers continue to disappear at a rate of more than two per week. In the past year alone, 136 newspapers shut their doors.
Fewer than 5,600 newspapers now remain in America, and fewer than 1,000 of those are dailies. Even among those “dailies,” more than 80 percent print fewer than seven days a week. We now have 213 counties that are complete “news deserts”—places with no local news source at all. Another 1,524 counties have only one remaining news source, usually a struggling weekly newspaper. Taken together, about 50 million Americans now have limited or no access to local news.
Will TV news be able to provide the details about our community? The format of the newspaper allows for more detailed presentations and for a larger variety of stories. The reader can pick which stories to read, when to read them and how much of each to read. The very nature of broadcast news doesn’t allow these options.
I beg everyone to please subscribe to your local newspapers if you still have one. Though I still prefer the hands-on, physical newspaper, I understand many people want to keep up with the digital age. If you do, please subscribe to the digital editions of your local newspaper and don’t pretend that the other online sources, such as social media, will provide you with local news. More likely, you’ll just get gossip, or worse.
If we lose our local news, we are in danger of losing our freedom of information, and if we lose that, we’re in danger of losing our country. For those of you who think I’m fear mongering, consider that countries that have succumbed to dictatorship lost their free press first.
I believe that broadcast news will never be the free press that print journalism is. The broadcast is an ethereal thing. You hear it and it’s gone. Of course, it is always possible to record it and play it back, but most people don’t. If you have a newspaper, you can read it, think about it, and read it again. There are times when on my second or third reading of an editorial or an op-ed article, I’ve changed my opinion about either the subject or the writer of the piece. I don’t think a news broadcast lends itself to this type of reflection. In fact, when listening to the broadcast news I often find my mind wandering as something that the broadcaster said sends me in a different direction.
In my opinion, broadcast news is controlled by advertising dollars and viewer ratings. News seems to be treated like any entertainment program, catering to what generates ratings rather than facts. I recognize that this can be the case with newspapers as well, but it seems to me that it’s much easier to detect bias in the written word than in the spoken word. Too often we can get caught up in the emotions of the presenter or in the graphics that accompany the story.
With that in mind, I recommend that if you want unbiased journalism, please support your local newspapers before we lose them. Once they are gone, we will never get them back and we will all be much the poorer as a result.
I will leave you with one last quote.
A free press is the unsleeping guardian of every other right that free men prize; it is the most dangerous foe of tyranny. —Winston Churchill
The only way to preserve freedom is to preserve the free press. Do your part! Subscribe!
And you can quote The Grumpy Doc on that!!!!

Sources
Fortune (August 29, 2025): “Atlanta becomes largest U.S. metro without a printed daily newspaper as Journal-Constitution goes digital”
https://fortune.com/2025/08/29/atlanta-largest-metro-without-printed-newpsaper-digital-journal-constitution/
 
Northwestern University Medill School (2025): “News deserts hit new high and 50 million have limited access to local news, study finds”
https://www.medill.northwestern.edu/news/2025/news-deserts-hit-new-high-and-50-million-have-limited-access-to-local-news-study-finds.html
 
NBC News (February 2026): “Washington Post lays off one-third of its newsroom”
https://www.nbcnews.com/business/media/washington-post-layoffs-sports-rcna257354
 
CNN Business (February 4, 2026): “Jeff Bezos-owned Washington Post conducts widespread layoffs, gutting a third of its staff”
https://www.cnn.com/2026/02/04/media/washington-post-layoffs
 
Northwestern University Medill Local News Initiative (2024): “The State of Local News Report 2024”
https://localnewsinitiative.northwestern.edu/projects/state-of-local-news/2024/report/
 

Russell Vought and the War on the Environment

Recently, there’s been a lot of attention given to RFK Jr. and his war on vaccines. Potentially even more devastating is Russell Vought and his war on environmental science.
Russell Vought hasn’t exactly been working in the shadows. As the director of the Office of Management and Budget since February 2025, he’s been methodically implementing what he outlined years earlier in Project 2025—a blueprint that treats climate science not as settled fact, but as what he calls “climate fanaticism.” The result is undeniably the most aggressive dismantling of environmental protections in American history.
The Man Behind the Plan
Vought’s resume tells you everything you need to know about his approach. He served as OMB director during Trump’s first term, wrote a key chapter of Project 2025 focusing on consolidating presidential power, and has openly stated his goal is to make federal bureaucrats feel “traumatized” when they come to work. His philosophy on climate policy specifically? He’s called climate change a side effect of building the modern world—something to manage through deregulation rather than prevention.
Attacking the Foundation: The Endangerment Finding
The centerpiece of Vought’s climate strategy targets what EPA Administrator Lee Zeldin has called “the holy grail of the climate change religion”—the 2009 Endangerment Finding. This Obama-era scientific determination concluded that six greenhouse gases (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride) endanger public health and welfare. It sounds technical, but it’s the legal foundation for virtually every federal climate regulation enacted over the past fifteen years.
Just last week, EPA Administrator Zeldin announced that the Trump administration has repealed this finding. This action strips EPA’s authority to regulate greenhouse gas emissions under the Clean Air Act—meaning no more federal limits on power plant emissions, no vehicle fuel economy standards tied to climate concerns, and no requirement for industries to measure or report their emissions. White House press secretary Karoline Leavitt said this action “will be the largest deregulatory action in American history.”
More than 1,000 scientists warned Zeldin not to take this step, and the Environmental Protection Network cautioned last year that repealing the finding would cause “tens of thousands of additional premature deaths due to pollution exposure” and would spark “accelerated climate destabilization.” Abigail Dillen, president of the nonprofit law firm Earthjustice, said “there is no way to reconcile EPA’s decision with the law, the science and the reality of the disasters that are hitting us harder every year.” She added that they expect to see the Trump administration in court. Obviously, the science is less important to Trump, Zeldin and Vought than the politics.
The Thirty-One Targets
In March 2025, Zeldin announced what he proudly called “the greatest day of deregulation in American history”—a plan to roll back or reconsider 31 key environmental rules covering everything from clean air to water quality. The list reads like a regulatory hit parade, including vehicle emission standards (designed to encourage electric vehicles), power plant pollution limits, methane regulations for oil and gas operations, and even particulate matter standards that protect against respiratory disease.
The vehicle standards are particularly revealing. The transportation sector is America’s largest source of greenhouse gas emissions, and the Biden-era rules were crafted to nudge automakers toward producing more electric vehicles. At Vought’s direction, the EPA is now reconsidering these, with Zeldin arguing they “regulate out of existence” segments of the economy and cost Americans “a lot of money.”
Gutting the Science Infrastructure
Vought’s agenda extends beyond specific regulations to the institutions that produce climate science itself. In Project 2025, he proposed abolishing the Office of Domestic Climate Policy and suggested the president should refuse to accept federal scientific research like the U.S. National Climate Assessment (NCA). The NCA, published every few years, involves hundreds of scientists examining how climate change is transforming the United States—research that informs everything from building codes to insurance policies.
According to reporting from E&E News in January, Vought wants the White House to exert tighter control over the next NCA, potentially elevating perspectives from climate deniers and industry representatives while excluding contributions made during the Biden administration.  This is a plan that has been in the works for years. Vought reportedly participated in a White House meeting during Trump’s first term where officials discussed firing the scientists working on the assessment.
The National Oceanic and Atmospheric Administration (NOAA) has also been targeted. In February 2025, about 800 NOAA employees—responsible for weather forecasting, climate monitoring, fisheries management, and marine research—were fired. Project 2025 had proposed breaking up NOAA entirely, and concerned staff members have already begun a scramble to preserve massive amounts of climate data in case the agency is dismantled.
Budget Cuts as Policy
Vought’s Center for Renewing America has proposed eliminating the Department of Energy’s Office of Energy Efficiency and Renewable Energy, the EPA’s environmental justice fund, and the Low Income Home Energy Assistance Program. During the first Trump administration, Vought oversaw budgets proposing EPA cuts as steep as 31%—reducing the agency to funding levels not seen in decades. In a 2023 speech, he explained the logic bluntly: “We want their funding to be shut down so that the EPA can’t do all of the rules against our energy industry because they have no bandwidth financially to do so.”
This isn’t just about climate; it’s also about fairness, since pollution and environmental harm fall predominantly on low-income areas. EPA has cancelled 400 environmental justice grants, closed environmental justice offices at all 10 regional offices, and put the director of the $27 billion Greenhouse Gas Reduction Fund on administrative leave. The fund had been financing local economic development projects aimed at lowering energy prices and reducing emissions.
Eliminating Climate Considerations from Government
Perhaps more insidious than the high-profile rollbacks are the procedural changes that make climate considerations disappear from federal decision-making. In February, Jeffrey Clark—acting administrator of the Office of Information and Regulatory Affairs (OIRA) under Vought’s OMB—directed federal agencies to stop using the “social cost of carbon” in their analyses. This metric puts a dollar value on the damage caused by one ton of carbon pollution, allowing agencies to assess whether regulations produce net benefits or net costs for society.
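To see why removing this one number matters, here is a toy cost-benefit sketch. Every figure in it is hypothetical and for illustration only (the $190-per-ton value is roughly in line with EPA's 2023 estimate, but no actual rulemaking is being modeled):

```python
# Toy regulatory cost-benefit sketch (illustrative numbers only).
# The social cost of carbon (SCC) prices each ton of CO2 avoided,
# so emission reductions show up as monetized benefits.

SCC_PER_TON = 190.0  # hypothetical $/ton of CO2

def net_benefit(tons_co2_avoided: float, compliance_cost: float) -> float:
    """Monetized climate benefit minus industry compliance cost."""
    return tons_co2_avoided * SCC_PER_TON - compliance_cost

# A hypothetical rule avoiding 10 million tons of CO2 at a $1 billion cost
# shows a large net benefit with the SCC in place...
print(net_benefit(10_000_000, 1_000_000_000))   # 900,000,000.0

# ...but with the SCC struck from the analysis, the same rule shows
# nothing but costs, and looks like a rule that should be scrapped.
print(10_000_000 * 0.0 - 1_000_000_000)         # -1,000,000,000.0
```

The point of the sketch: the rule itself never changes, only the accounting does, which is why eliminating the metric quietly tilts every future analysis against regulation.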
Vought has also directed agencies to establish sunset dates for environmental regulations—essentially automatic expiration dates after which rules stop being enforced unless renewed. For existing regulations, the sunset comes after one year; for new ones, within five years. The stated goal is forcing agencies to continuously justify their rules, but the practical effect is creating a perpetual cycle of regulatory uncertainty.
The Real-World Stakes
The timing of these rollbacks offers a grim irony. As Vought was pushing to weaken the National Climate Assessment in January 2025, the Eaton and Palisades fires were devastating Los Angeles—exactly the type of climate-intensified disaster the assessment is designed to help communities prepare for. The administration’s response? Energy Secretary Chris Wright described climate change as “a side effect of building the modern world” at an industry conference.
An analysis by Energy Innovation, a nonpartisan think tank, found that Project 2025’s proposals to gut federal policies encouraging renewable electricity and electric vehicles would increase U.S. household spending on fuel and utilities by about $240 per year over the next five years. That’s before accounting for the health costs of increased air pollution or the economic damage from unmitigated climate change.
Environmental groups have vowed to challenge these changes in court, and the legal battles will likely stretch on for years. The D.C. Circuit Court of Appeals will hear many cases initially, though the Supreme Court will probably issue final decisions. Legal experts note that while Trump’s EPA moved with unprecedented speed on proposals in 2025, finalizing these rules through the required regulatory process will take much longer. As of December, none of the major climate rule repeals had been submitted to OMB for final review, partly due to the 43-day government shutdown (which EPA blamed on Democrats, a characterization that is widely disputed).
What Makes This Different
Previous administrations have certainly rolled back environmental regulations, but Vought’s approach differs in both scope and philosophy. Rather than tweaking specific rules or relaxing enforcement, he’s systematically attacking the scientific and legal foundations that make climate regulation possible. It’s the difference between turning down the thermostat and ripping out the entire heating system.
The Environmental Defense Fund, which rarely comments on political appointees, strongly opposed Vought’s confirmation, with Executive Director Amanda Leland stating: “Russ Vought has made clear his contempt for the people working every day to ensure their fellow Americans have clean air, clean water and a safer climate.”
Looking Forward
Whether Vought’s vision becomes permanent depends largely on how courts rule on these changes. The 2007 Supreme Court decision in Massachusetts v. EPA established that the agency has authority to regulate greenhouse gases as air pollutants under the Clean Air Act—the very authority Vought is now trying to eliminate. Overturning established precedent is difficult, though the current Supreme Court’s composition makes the outcome possible, if not likely.
What we’re witnessing is essentially a test of whether one administration can permanently disable the federal government’s capacity to address climate change, or if these changes represent a temporary setback that future administrations can reverse. The stakes couldn’t be higher: atmospheric CO2 concentrations continue rising, global temperatures are breaking records, and climate-related disasters are becoming more frequent and severe. Nothing less than the future of our way of life is at stake. We must take action now.
 
Full disclosure: my undergraduate degree is in meteorology, but I would never call myself a meteorologist since I have never worked in the field. But I still maintain an interest, from both a meteorological and a medical perspective. The Grump Doc is never lacking in opinions.
 
Illustration generated by author using Midjourney.
 
Sources:
Lisa Friedman and Maxine Joselow, “Trump Allies Near ‘Total Victory’ in Wiping Out U.S. Climate Regulation,” New York Times, Feb. 9, 2026.
Lisa Friedman, “The Conservative Activists Behind One of Trump’s Biggest Climate Moves,” New York Times, Feb. 10, 2026.
Bob Sussman, “The Anti-Climate Fanaticism of the Second Trump Term (Part 1: The Purge of Climate from All Federal Programs),” Environmental Law Institute, May 7, 2025.
U.S. Environmental Protection Agency, “Trump EPA Kicks Off Formal Reconsideration of Endangerment Finding,” EPA News Release, Mar. 13, 2025.
Trump’s Climate and Clean Energy Rollback Tracker, Act On Climate/NRDC coalition, updated Jan. 11, 2026.
“Trump to Repeal Landmark Climate Finding in Huge Regulatory Rollback,” Wall Street Journal, Feb. 9, 2026.
Valerie Volcovici, “Trump Set to Repeal Landmark Climate Finding in Huge Regulatory Rollback,” Reuters, Feb. 9, 2026.
Alex Guillén, “Trump EPA to Take Its Biggest Swing Yet Against Climate Change Rules,” Politico, Feb. 10, 2026.
“EPA Urges White House to Strike Down Landmark Climate Finding,” Washington Post, Feb. 26, 2025.
“Trump Allies Near ‘Total Victory’ in Wiping Out U.S. Climate Regulation,” Seattle Times reprint, Feb. 10, 2026.
“Trump Wants to Dismantle Key Climate Research Hub in Colorado,” Earth.org, Dec. 17, 2025.
“Vought Says National Science Foundation to Break Up Federal Climate Research Center,” The Hill, Dec. 17, 2025.
Rachel Cleetus, “One Year of the Trump Administration’s All-Out Assault on Climate and Clean Energy,” Union of Concerned Scientists, Jan. 13, 2026.
Environmental Protection Network, “Environmental Protection Network Speaks Out Against Vought Cabinet Consideration,” Nov. 20, 2024.
“From Disavowal to Delivery: The Trump Administration’s Rapid Implementation of Project 2025 on Public Lands,” Center for Western Priorities, Jan. 28, 2026.
“Russ Vought Nominated for Office of Management and Budget Director,” Environmental Defense Fund statement, Mar. 6, 2025.
“Project 2025,” Heritage Foundation/Project 2025 backgrounder (as summarized in the Project 2025 Wikipedia entry).
Matthew Daly, “EPA to Repeal Finding That Serves as Basis for Climate Change,” Associated Press.
https://vitalsigns.edf.org/story/trump-nominee-and-project-2025-architect-russell-vought-has-drastic-plans-reshape-america
https://en.wikipedia.org/wiki/Russell_Vought
https://www.commondreams.org/news/warnings-of-permanent-damage-to-people-and-planet-as-trump-epa-set-to-repeal-key-climate-rule
https://www.eenews.net/articles/trump-team-takes-aim-at-crown-jewel-of-us-climate-research/
https://www.epa.gov/newsreleases/epa-launches-biggest-deregulatory-action-us-history
https://www.pbs.org/newshour/nation/trump-administration-moves-to-repeal-epa-rule-that-allows-climate-regulation
https://www.scientificamerican.com/article/trump-epa-unveils-aggressive-plans-to-dismantle-climate-regulation/
https://www.bloomberg.com/news/articles/2026-02-10/trump-s-epa-to-scrap-landmark-emissions-policy-in-major-rollback
 
 
 
 

When They Knew: How the Fossil Fuel Industry Buried Its Own Climate Science

The story begins not with climate deniers casting doubt on new science, but with something far more troubling: companies conducting rigorous research, understanding exactly what their products would do to the planet, and then spending decades lying to the public. They treated science as an internal planning tool and then deployed public relations, front groups, and “manufactured doubt” to delay regulation and protect profits.

The Oil Industry’s Own Scientists Saw It Coming

In 1977, a scientist named James Black stood before Exxon’s management committee with an uncomfortable message. According to internal documents later uncovered by investigative journalists, Black told executives that burning fossil fuels was increasing atmospheric carbon dioxide, and that continually rising CO2 levels would increase global temperatures by two to three degrees Celsius—a projection still consistent with today’s scientific consensus. He warned that there was a window of just five to ten years before “hard decisions regarding changes in energy strategies might become critical.”

What happened next is remarkable for its precision. Throughout the late 1970s and 1980s, Exxon assembled what one scientist called “a credible scientific team” to investigate the climate question. They launched ambitious projects, including outfitting a supertanker with custom instruments to measure how oceans absorbed CO2—one of the most pressing scientific questions of the era. A 2023 Harvard study analyzing Exxon’s internal climate projections from 1977 to 2003 found they predicted global warming with what researchers called “shocking skill and accuracy.” Specifically, the company projected 0.20 degrees Celsius of warming per decade, with a margin of error of just 0.04 degrees—a forecast that has proven largely correct.

Exxon wasn’t alone. Shell produced a confidential 1988 report titled “The Greenhouse Effect” that warned of climate changes “larger than any that have occurred over the last 12,000 years,” including destructive floods and mass migrations. The report revealed Shell had been running an internal climate science program since 1981. In one striking document from 1986, Shell predicted that fossil fuel emissions would cause changes that would be “the greatest in recorded history.”

Even industry groups understood what was coming. In 1980, the American Petroleum Institute (API) invited Stanford scientist John Laurmann to brief oil company representatives at its secret “CO2 and Climate Task Force.” His presentation, now public, warned that the effects of continued fossil fuel use would be “barely noticeable” by 2005 but would have “globally catastrophic effects” by the 2060s. That same year, the API called on governments to triple coal production worldwide, publicly insisting there would be no negative consequences.

The Coal Industry Knew Even Earlier

If anything, the coal industry understood the problem first. A 1966 article in the trade publication Mining Congress Journal by James Garvey, president of Bituminous Coal Research Inc., explicitly discussed how continued coal consumption would increase atmospheric temperatures and cause “vast changes in the climates of the earth.” A combustion engineer from Peabody Coal, today the largest private-sector coal company in the world, acknowledged in the same publication that the industry was “buying time” before air pollution regulations would force action.

This 1966 evidence is particularly damning because it predates widespread public awareness by decades. The coal industry didn’t stumble into climate denial—they entered it with full knowledge of what they were obscuring.

Major coal interests also had early awareness that carbon emissions posed regulatory and market risks, particularly for coal‑fired electricity, and they participated in joint industry research and strategy discussions about climate change in the 1980s and 1990s. At the same time, coal associations helped create public campaigns such as the Information Council for the Environment (ICE—even then a disturbing acronym), whose internal planning documents explicitly set an objective to “reposition global warming as theory (not fact)” and to target specific demographic groups with tailored doubt‑based messages.

According to a report from the Union of Concerned Scientists, these efforts often relied on “grassroots” fronts, advertising, and even forged constituent letters to legislators to undermine support for climate policy and to counter the conclusions of mainstream climate science, which even the companies’ own experts did not refute.

What They Said Publicly

The contrast between private knowledge and public statements is stark. While Exxon scientists were building sophisticated climate models internally, the company’s public messaging emphasized uncertainty. In a 1997 speech at the World Petroleum Congress, Exxon CEO Lee Raymond told his audience: “Let’s agree there’s a lot we really don’t know about how climate will change in the 21st century and beyond.” The company spread messaging that emphasized uncertainty, framed global warming as just a “theory,” and highlighted supposed flaws in climate models, even as its own scientists were using those models to make precise projections. Exxon and allied trade associations also supported think tanks and advocacy groups that questioned whether human activity was responsible for warming and opposed binding limits on emissions, producing a stark discrepancy between internal scientific knowledge and external communication.

In 1989, Exxon helped create the Global Climate Coalition—despite its environmental-sounding name, the organization worked to cast doubt on climate science and block clean energy legislation throughout the 1990s. Electric utilities and coal-linked organizations joined the coalition to systematically attack climate scientists and lobby to weaken or stall international agreements like the Kyoto Protocol, despite internal recognition that greenhouse gases were driving warming.

Internal API documents from a 1998 meeting reveal an explicit strategy to “ensure that a majority of the American public… recognizes that significant uncertainties exist in climate science.”

In 1991, Shell produced a film, “Climate of Concern,” which stated that human-driven climate change was happening “at a rate faster than at any time since the end of the ice age” and warned of extreme weather, flooding, famine, and climate refugees. They understood the science, yet in later years tried to shift the blame.

According to a 2013 Drexel University study, between 2003 and 2010 alone, approximately $558 million was distributed to about 100 climate change denial organizations. Greenpeace reports that Exxon alone spent more than $30 million on think tanks promoting climate denial.

The Tobacco Playbook

The parallels to Big Tobacco’s strategy are not coincidental—they’re intentional. Research by the Center for International Environmental Law uncovered more than 100 documents from the Tobacco Industry Archives showing that oil and tobacco companies not only used the same PR firms and research institutes, but often the same individual researchers. The connection goes back to at least the 1950s.  A report published in Scientific American suggests the oil and tobacco industries both hired the PR firm Hill & Knowlton Inc. as early as 1956.

A 1969 internal memo from the Brown & Williamson tobacco company stated plainly: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public.” This became the template. Create uncertainty. Emphasize what isn’t known rather than what is. Fund research that casts doubt. Attack the credibility of independent scientists. Form organizations with scientific-sounding names that exist primarily to muddy the waters.

In one particularly brazen example, a 2015 presentation by Cloud Peak Energy executive Richard Reavey titled “Survival Is Victory: Lessons From the Tobacco Wars,” explicitly coached coal executives on how to apply tobacco industry tactics.

What makes the fossil fuel case particularly egregious is the temporal dimension. These weren’t companies caught off-guard by emerging science. They funded the research. They understood the findings. Their own scientists urged action. A 1978 Exxon memo noted that climate research could be “the kind of opportunity we are looking for to have Exxon technology, management and leadership resources put into the context of a project aimed at benefitting mankind.”

Instead, when oil prices collapsed in the mid-1980s, Exxon pivoted from conducting climate research to funding climate denial. By the late 1980s, according to reporting by InsideClimate News, Exxon “curtailed its carbon dioxide research” and “worked instead at the forefront of climate denial”.

Where We Stand Now

Across the oil, gas, and coal industries, there is not a genuine scientific dispute inside companies but a divergence between what in‑house experts knew and what corporate leaders chose to communicate to the public and policymakers. This divergence mirrors the tobacco industry’s long‑running use of organized doubt. In both arenas, industry actors treated early recognition of harm as a legal and political threat and responded by investing in campaigns to confuse, delay, and reframe the science rather than addressing the risks their own research had identified.

The evidence trail has led to legal action. More than 20 cities, counties, and states have filed lawsuits against fossil fuel companies for damages caused by climate change, arguing the industry knowingly deceived the public. The European Parliament held hearings in 2019 on climate denial by ExxonMobil and other actors. The hashtags #ExxonKnew, #ShellKnew, and #TotalKnew have become rallying cries for accountability.

Senator Sheldon Whitehouse has explicitly compared the fossil fuel industry’s actions to the tobacco racketeering case that ultimately held cigarette makers accountable. As he noted in a Senate speech, the elements of a civil racketeering case are straightforward: defendants conducted an enterprise with a pattern of racketeering activity.

The difference between the tobacco and fossil fuel cases may be one of scale. As researchers Naomi Oreskes and Erik Conway documented in their book Merchants of Doubt, both industries worked to obscure truth for profit. But while tobacco kills individuals, climate change threatens entire ecosystems and future generations.  The time to act is now.

Sources:

Scientific American – “Exxon Knew about Climate Change Almost 40 Years Ago”
https://www.scientificamerican.com/article/exxon-knew-about-climate-change-almost-40-years-ago/
 
Harvard Gazette – Harvard-led analysis finds ExxonMobil internal research accurately predicted climate change
https://news.harvard.edu/gazette/story/2023/01/harvard-led-analysis-finds-exxonmobil-internal-research-accurately-predicted-climate-change/
 
InsideClimate News – Exxon’s Own Research Confirmed Fossil Fuels’ Role in Global Warming Decades Ago
https://insideclimatenews.org/news/02052024/from-the-archive-exxon-research-global-warming/
 
PBS Frontline – Investigation Finds Exxon Ignored Its Own Early Climate Change Warnings
https://www.pbs.org/wgbh/frontline/article/investigation-finds-exxon-ignored-its-own-early-climate-change-warnings/
 
NPR – Exxon climate predictions were accurate decades ago. Still it sowed doubt
https://www.npr.org/2023/01/12/1148376084/exxon-climate-predictions-were-accurate-decades-ago-still-it-sowed-doubt
 
Science (journal) – Assessing ExxonMobil’s global warming projections
https://www.science.org/doi/10.1126/science.abk0063
 
Climate Investigations Center – Shell Climate Documents
https://climateinvestigations.org/shell-oil-climate-documents/
 
The Conversation – What Big Oil knew about climate change, in its own words
https://theconversation.com/what-big-oil-knew-about-climate-change-in-its-own-words-170642
 
ScienceAlert – The Coal Industry Was Well Aware of Climate Change Predictions Over 50 Years Ago
https://www.sciencealert.com/coal-industry-knew-about-climate-change-in-the-60s-damning-revelations-show
 
The Intercept – A Major Coal Company Went Bust. Its Bankruptcy Filing Shows That It Was Funding Climate Change Denialism
https://theintercept.com/2019/05/16/coal-industry-climate-change-denial-cloud-peak-energy/
 
Center for International Environmental Law – Big Oil Denial Playbook Revealed by New Documents
https://www.ciel.org/news/oil-tobacco-denial-playbook/
 
Wikipedia – Tobacco industry playbook
https://en.wikipedia.org/wiki/Tobacco_industry_playbook
 
Scientific American – Tobacco and Oil Industries Used Same Researchers to Sway Public
https://www.scientificamerican.com/article/tobacco-and-oil-industries-used-same-researchers-to-sway-public1/
 
Environmental Health (journal) – The science of spin: targeted strategies to manufacture doubt with detrimental effects on environmental and public health
https://link.springer.com/article/10.1186/s12940-021-00723-0
 
Senator Sheldon Whitehouse – Time to Wake Up: Climate Denial Recalls Tobacco Racketeering
https://www.whitehouse.senate.gov/news/speeches/time-to-wake-up-climate-denial-recalls-tobacco-racketeering/
 
VICE News – Meet the ‘Merchants of Doubt’ Who Sow Confusion about Tobacco Smoke and Climate Change
https://www.vice.com/en/article/meet-the-merchants-of-doubt-who-sow-confusion-about-tobacco-smoke-and-climate-change/
 
Union of Concerned Scientists – The Climate Deception Dossiers
https://www.ucs.org/sites/default/files/attach/2015/07/The-Climate-Deception-Dossiers.pdf
 
 
Illustration generated by author using ChatGPT.
 
 
 
 
 
 

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request for a leave of absence from the presidency of Congress in 1777, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship deteriorated to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. But Washington’s command was anything but universally admired, and as the war dragged on with mixed results, many critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and whether his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm. Rather than lobbying Congress or forming a counter-faction, he leaned on reputation and restraint, continuing to communicate respectfully with Congress and emphasizing the army’s needs rather than defending his own position. There were no denunciations or public accusations; Washington handled the situation largely behind the scenes. When he learned that Conway had written to Gates disparaging his generalship, Washington calmly informed Conway that he was aware of the letter—quoting the offending passage verbatim.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court—a method that infuriated Adams, who equated such “nuances” with evasiveness and preferred direct argument, formal memorandums, and hard-edged ultimatums. Sharing lodgings outside Paris only intensified Adams’s resentment as he watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business. Adams complained that nothing would ever get done unless he did it himself, while Franklin privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.” France’s foreign minister, the Comte de Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin, effectively sidelining Adams in formal diplomacy and deepening Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf. Yet despite their mutual loss of respect, the two ultimately cooperated—often uneasily—in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris, a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom; at a dinner party he called Burr dangerous and untrustworthy. When Burr read an account of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to withhold his fire; whether he deliberately fired into the air or simply missed remains debated. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South. Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations with France while the Quasi-War raged, Hamilton wanted full-scale war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary, since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

Sources:

1. National Archives – Founders Online
https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters
https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams
https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson
https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal
https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel
https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson
https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia
https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers
https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – “Founding Brothers: The Revolutionary Generation”
https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

The Price Tag Mystery: Why Nobody Really Knows What Healthcare Costs in America

Imagine walking into a store where nothing has a price tag. When you get to the register, the cashier scans your items and tells you the total—but that total is different for every customer. Your neighbor might pay $50 for the same items that cost you $200. The store won’t tell you why, and you won’t find out until after you’ve already “bought” everything.

Welcome to American healthcare, where the simple question “how much does this cost?” has no simple answer.

You might think I’m exaggerating, but the evidence suggests otherwise. Research published in late 2023 by PatientRightsAdvocate.org found that prices for the same medical procedure can vary by more than 10 times within a single hospital depending on which insurance plan you have, and by as much as 33 times across different hospitals. A knee replacement that costs around $23,170 in Baltimore might run $58,193 in New York. An emergency department visit that one facility charges $486 for might cost $3,549 at another hospital for the identical service.

The fundamental problem is that hospitals and doctors don’t have one price for their services. They have dozens, sometimes hundreds, of different prices for the exact same procedure depending on who’s paying. This bizarre system evolved because most healthcare in America isn’t a simple transaction between patient and provider—there’s a third party in the middle called an insurance company, and that changes everything.

The Fiction of Chargemaster Prices

A hospital chargemaster is essentially the hospital’s internal price list—a massive catalog that assigns a dollar amount to every service, supply, test, medication, and procedure the hospital can bill for, from an aspirin to a complex surgery. These listed prices are usually very high and are not what most patients actually pay. Think of them like the manufacturer’s suggested retail price on a car: technically real, but nobody pays it. Instead, the chargemaster functions as a starting point for negotiations with insurers and government programs like Medicare and Medicaid, which typically pay much lower, pre-set rates. What an individual patient ultimately pays depends on several factors layered on top of the chargemaster price.

A hospital might list an MRI at $3,000 or a blood test at $500. But then insurance companies come in. They represent thousands or millions of potential patients, which gives them serious bargaining power. They negotiate with hospitals along these lines: “We’ll send you lots of patients, but only if you give us a discount.” So, the hospital agrees to accept much less—maybe they’ll take $1,200 for that $3,000 MRI or $150 for the blood test. This discounted amount is called the “negotiated rate,” and it’s what the insurance company will really pay.

Here’s where it gets messy: every insurance company negotiates its own rates with every hospital. Blue Cross might negotiate one price, Aetna a different price, UnitedHealthcare yet another. The same exact MRI at the same hospital might be $1,200 for one insurer’s customers and $1,800 for another’s. And these negotiated rates have traditionally been kept secret—treated like confidential business information that gives each party a competitive advantage.

The Write-Off Game

What happens to that difference between the chargemaster price and the negotiated rate? The hospital “writes it off.” That’s accounting language for “we accept that we’re not getting paid this money, and we’re taking it off the books.” If the hospital charged $3,000 but agreed to accept $1,200, they write off $1,800. This isn’t lost money in the normal sense—they never expected to collect it in the first place. The chargemaster prices are inflated specifically because everyone knows discounts are coming. Some hospitals now post “discounted cash prices” that are often far below chargemaster and sometimes even below some negotiated rates. These are sometimes, though not always, offered to uninsured patients, generally referred to as self-pay. There can be a catch—some hospitals require lump-sum payment of the total bill to qualify for the lower price.

According to the American Hospital Association, U.S. hospitals collectively plan to write off approximately $760 billion in billed charges in 2025 across all categories of write-offs. That’s not a typo—$760 billion. These write-offs happen in several different situations. The most common are contractual write-offs, where the provider has agreed to accept less than their list price from insurance companies.

Contractual write-offs are only part of the picture. Hospitals also write off charity care (treating patients who can’t afford to pay anything) and bad debt (when patients could pay but don’t). They write off small balances that aren’t worth the administrative cost of collection, and they write off amounts tied to billing errors, denied claims, and coverage disputes. Healthcare providers typically adjust about 10 to 12 percent of their gross revenue through these various write-offs and claim adjustments.

Why Such Wild Variation?

Even with all these negotiated discounts built into the system, prices still vary enormously. A 2024 study from the Baker Institute found that for emergency department visits, hospitals in the top 10% charge three to seven times more than hospitals in the bottom 10% for the identical procedure. Research published in Health Affairs Scholar in early 2025 found that, even after adjusting for differences between insurers and procedures, the 75th-percentile price for inpatient services across all states is 48 percent higher than the 25th-percentile price.

Several factors drive this variation. Hospitals in areas with less competition can charge more because insurers have fewer alternatives for negotiation. Prestigious hospitals can demand higher rates because insurers want them in their networks to attract customers. Some insurance companies have more bargaining power than others based on their market share. There’s no central authority setting prices—it’s all private negotiations, hospital by hospital, insurer by insurer, procedure by procedure.

For patients, this creates a nightmare scenario. Even if you have insurance, you usually have no idea what you’ll pay until after you’ve received care. Your out-of-pocket costs depend on your deductible (the amount you pay before insurance kicks in), your copay or coinsurance (your share after insurance starts paying), and whether the negotiated rate between your specific insurance and that specific hospital is high or low. Two people with different insurance plans getting the same procedure at the same hospital on the same day can end up with drastically different bills.
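To make that arithmetic concrete, here is a toy sketch, with entirely made-up numbers and a deliberately oversimplified benefit design, of how the same negotiated rate can turn into very different bills depending on a plan’s remaining deductible and coinsurance:

```python
def out_of_pocket(negotiated_rate, deductible_remaining, coinsurance):
    """Toy model: the patient pays the negotiated rate up to their
    remaining deductible, then a coinsurance share of whatever is left."""
    toward_deductible = min(negotiated_rate, deductible_remaining)
    remainder = negotiated_rate - toward_deductible
    return toward_deductible + remainder * coinsurance

# The same $1,200 MRI under two hypothetical plans:
plan_a = out_of_pocket(1200, deductible_remaining=1000, coinsurance=0.20)
plan_b = out_of_pocket(1200, deductible_remaining=0, coinsurance=0.10)
print(plan_a)  # 1000 toward deductible + 20% of the remaining 200 = 1040.0
print(plan_b)  # deductible already met, so 10% of 1200 = 120.0
```

Real plans layer copays, out-of-pocket maximums, and network tiers on top of this, which only widens the spread—and that is before the negotiated rate itself varies by insurer and hospital.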

Research using new transparency data confirms this isn’t just anecdotal. A study from early 2025 found that for something as routine as a common office visit, mean prices ranged from $82 with Aetna to $115 with UnitedHealth. Within individual insurance companies, the price of the top 25% of office visits was 20 to 50 percent higher than the bottom 25%, meaning even within one insurer’s network, where you go or where you live makes a huge difference.

The Government Steps In

The federal government finally said “enough” and started requiring transparency. Since 2021, hospitals must post their prices online, including what they’ve negotiated with each insurance company. The Centers for Medicare and Medicaid Services (CMS) strengthened these requirements in 2024, mandating standardized formats and increasing enforcement. Health insurance plans face similar requirements to disclose their negotiated rates.

The theory was straightforward: if patients could see prices ahead of time, they could shop around, which would force prices down through competition. CMS estimated this could save as much as $80 billion by 2025. The idea seemed sound—transparency works in other markets, so why not healthcare?

In practice, it’s been messy. A Government Accountability Office (GAO) report from October 2024 found that while hospitals are posting data, stakeholders like health plans and employers have raised serious concerns about data quality. They’ve encountered inconsistent file formats, extremely complex pricing structures, and data that appears to be incomplete or possibly inaccurate. Even when hospitals post the required information, it’s often so convoluted that comparing prices across facilities becomes nearly impossible for average consumers.

An Office of Inspector General report from November 2024 found that not all selected hospitals were complying with the transparency requirements in the first place. And CMS still doesn’t have robust mechanisms to verify whether the data being posted are accurate and complete. The GAO recommended that CMS assess whether hospital pricing data are sufficiently complete and accurate to be usable, and whether additional enforcement is needed.

Imagine trying to comparison shop when one store lists prices in dollars, another in euros, and a third uses a proprietary currency they invented. That’s roughly where we are with healthcare price data—technically available, but practically unusable for most people trying to make informed decisions.

In 2025, President Trump signed a new executive order aimed at strengthening enforcement of the price transparency rules and directing agencies to standardize hospital and insurer pricing information and make it more accessible; this action built on, rather than rolled back, the earlier requirements. Hopefully this will improve patients’ ability to learn real costs, but it is my opinion that the industry will continue to resist full and open compliance.

The Limits of Shopping for Healthcare

There’s also a deeper philosophical problem: for healthcare to work like a normal market where price transparency drives competition, patients would need to be able to shop around based on price. That could work for scheduled procedures like knee replacements, colonoscopies, or elective surgeries. You have time to research, compare, and choose.

But it doesn’t work at all when you’re having a heart attack, or your child breaks their arm. You go to the nearest hospital, period. You’re not calling around asking about prices while someone’s having a medical emergency. Even for non-emergencies, choosing based on price assumes equal quality across providers, which isn’t always true and is even harder to assess than price itself.

A study on price transparency tools found mixed results on whether they truly reduce spending. Some research shows modest savings when people use price comparison tools for shoppable services like imaging and lab work. But utilization of these tools remains low, and for many healthcare encounters, price shopping simply isn’t practical or appropriate.

Who Really Knows?

So, who truly understands what things cost in this system? Hospital administrators know what different insurers pay them for specific procedures, but that knowledge is limited to their facility. They don’t necessarily know what other hospitals charge. Insurance company executives know what they’ve negotiated with various hospitals in their network, but they haven’t historically shared meaningful price information with their customers in advance. And they don’t know what their competitors have negotiated.

Patients, caught in the middle, often find out their costs only when they receive a bill weeks after treatment. By that point, the care has been delivered, and the financial damage is done. Recent surveys suggest that surprise medical bills remain a significant problem, with many patients receiving unexpected charges from out-of-network providers they didn’t choose or even know were involved in their care.

The people who are starting to get a comprehensive view are researchers and policymakers analyzing the newly available transparency data. Studies published in 2024 and 2025 using these data have given us unprecedented visibility into pricing patterns and variation. But this is aggregate, statistical knowledge—it helps us understand the system but doesn’t necessarily help individual patients figure out what they’ll pay for a specific procedure.

Where We Stand

The transparency regulations represent a genuine attempt to inject some market discipline into healthcare pricing. Making negotiated rates public breaks down the information asymmetry that has allowed prices to vary so wildly. In theory, if patients and employers can see that Hospital A charges twice what Hospital B does for the same procedure, competitive pressure should push prices toward the lower end.

There’s some early evidence this might be working. A study of children’s hospitals found that price variation for common imaging procedures decreased by about 19 percent between 2023 and 2024, though overall prices continued rising. Whether this trend will continue and spread to other types of facilities remains to be seen. I am concerned that, rather than lowering overall prices, transparency may prompt hospitals at the lower end to raise their prices closer to those at the higher end.

Significant obstacles remain. The data quality issues need resolution before the information becomes truly usable. Many patients lack either the time, expertise, or practical ability to shop based on price. And the fundamental structure of American healthcare—with its complex interplay of providers, insurers, pharmacy benefit managers, and government programs—means that even perfect price transparency won’t create a simple, straightforward market.

So, to return to the original question: does anyone truly know the cost of medical care in the United States? In an aggregate sense, researchers and policymakers are starting to understand the patterns thanks to transparency requirements. The data are revealing just how variable and opaque pricing has been. But as a practical matter for individual patients trying to figure out what they’ll pay for needed care, not really. The information is becoming available but remains largely inaccessible or incomprehensible for ordinary people trying to make informed healthcare decisions.

The $760 billion in annual write-offs tells you everything you need to know: the posted prices are largely fictional, the negotiated prices vary wildly, and the system has evolved to be so complex that even the people operating within it struggle to understand the full picture. We’re making progress toward transparency, but we’re a long way from a healthcare system where patients can confidently get the answer to the simple question: “How much will this cost?”

A closing thought: all of this could be solved by developing a single-payer healthcare system such as the one I proposed in my previous post America’s Healthcare Paradox: Why We Pay Double and Get Less.

Assessing the Trump-Orwell Comparisons: Warning, Not Prophecy

The comparison between the Trump administration and George Orwell’s dystopian works has recently become one of the most prevalent political metaphors, and one I’ve used myself. Following Trump’s second inauguration in January 2025, sales of 1984 surged once again on Amazon’s bestseller lists, just as they did during his first term.

These comparisons are rhetorically powerful, but their accuracy depends on how literally Orwell is read and how carefully we distinguish authoritarian warning signs from fully realized totalitarian systems. So how apt are the parallels? Let me walk you through the key ones, the evidence supporting them, and the critical questions we should be asking.

Understanding Orwell’s Core Themes

Before diving into the comparisons, it’s worth revisiting what Orwell was actually warning us about. In 1984, published in 1949, Orwell depicted a totalitarian state where the Party manipulates reality through “Newspeak” (language control), “doublethink” (holding contradictory beliefs), the “memory hole” (historical revision), and constant surveillance by Big Brother. The novel’s famous slogans—”War is Peace, Freedom is Slavery, Ignorance is Strength”—exemplify how the Party inverts the very meaning of words.

Animal Farm, written as an allegory of the Soviet Union under Stalin, traces how a revolutionary movement devolves into dictatorship. The pigs, led by Napoleon, gradually corrupt the founding principles of equality, with Squealer serving as the regime’s propaganda minister who constantly rewrites history and justifies Napoleon’s increasingly authoritarian actions.

The Major Parallels

The most famous early comparison emerged during Trump’s first term when adviser Kellyanne Conway defended false crowd size claims with the phrase “alternative facts.” This triggered the first major 1984 sales spike in 2017. According to multiple sources, critics immediately drew connections to Orwell’s concept of manipulating language to control thought.

In the current administration, commentators have identified several Orwellian language patterns. The administration has restricted use of certain words on government websites—including “female,” “Black,” “gender,” and “sexuality”—reminiscent of how Newspeak aimed to “narrow the range of thought” by eliminating words. An executive order on January 29, 2025, titled “Ending Radical Indoctrination in K-12 Schooling” has been criticized as doublespeak, using the language of educational freedom while actually restricting what can be taught. (The term “doublespeak” is not Orwell’s own; it evolved later as a blend of his “newspeak” and “doublethink.”)

Perhaps the most concrete parallel involves the systematic deletion of historical content from government websites. The Organization of American Historians condemned the administration’s efforts to “reflect a glorified narrative while suppressing the voices of historically excluded groups”. Specific documented deletions include information about Harriet Tubman, the Tuskegee Airmen (later restored after public outcry), the Enola Gay airplane (accidentally caught in a purge of anything containing “gay”), and nearly 400 books removed from the U.S. Naval Academy library relating to diversity topics. The Smithsonian’s National Museum of American History also removed references to Trump’s impeachments from its “Limits of Presidential Power” exhibit, which critics including Senator Adam Schiff called “Orwellian”.

Trump’s repeated characterization of political opponents as the “enemy from within” and the media as the “enemy of the people” parallels 1984’s Emmanuel Goldstein figure and the ritualized Two Minutes Hate sessions. One analysis suggests Trump leads Americans through “a succession of Two Minute Hates—of freeloading Europeans, prevaricating Panamanians, vile Venezuelans, Black South Africans, corrupt humanitarians, illegal immigrants, and lazy Federal workers”.

Multiple sources document that new White House staff must undergo “loyalty tests” and some face polygraph examinations. Trump’s statement “I need loyalty. I expect loyalty” echoes 1984’s declaration that “There will be no loyalty, except loyalty to the Party”. Within weeks of his second inauguration, Trump dismissed dozens of inspectors general—the internal government watchdogs. According to reports from Politico and Reuters, several have filed lawsuits claiming their removal violated federal law. An executive order titled “Ensuring Accountability for All Agencies” placed previously independent agencies like the SEC and FTC under direct White House supervision.

The Animal Farm Connections

While 1984 gets more attention, Stanford literature professor Alex Woloch argues that Animal Farm might be more relevant because “it traces that sense of a ‘slippery slope'” from democracy to totalitarianism, whereas in 1984 the totalitarian system is already fully established.

There are echoes of Animal Farm in the way populist rhetoric has framed liberals, progressive institutions, and the press as enemies of “the people,” while power was being consolidated within Trump’s narrow leadership circle. Orwell’s pigs do not abandon revolutionary language; they repurpose it. The “ordinary” supporters are exhorted to endure sacrifices and to direct anger at opposing groups, while political insiders consolidate authority and wealth—echoing the pigs’ gradual move into the farmhouse and adoption of human privileges. Critics argue that Trump’s sustained use of grievance-based populism, even while wielding executive power, fits this pattern symbolically if not structurally.

Other parallels drawn to Animal Farm include the resemblance between Napoleon’s propaganda minister Squealer and an administration communication strategy of inverting reality; the gradual corruption of founding principles while maintaining revolutionary rhetoric like “drain the swamp”; the scapegoating of political opponents and immigrants, much as Napoleon blamed Snowball for all problems; and the taking of credit for others’ achievements, just as Napoleon did with the other animals’ work. In the novel, Napoleon demands full investigations of Snowball even after discovering he had nothing to do with alleged misdeeds, much as Trump demanded investigations of Hillary Clinton, James Comey, Letitia James, and Jerome Powell while avoiding scrutiny of his own conduct.

As in Orwell’s farm, where the constant invoking of enemies keeps the animals fearful and loyal, the politics of permanent crisis and blame are being used to normalize increasingly aggressive behavior by those in power.

Critical Perspectives and Limitations

These comparisons raise several important concerns that deserve serious consideration. Orwell was writing about actual totalitarian regimes—Stalinist Russia and Nazi Germany—where millions died in purges, gulags, and genocides. The United States in 2026, despite concerning trends, still maintains functioning courts, elections, a free press, and a civil society. Some observers are warning against trivializing real authoritarian regimes by making overstated comparisons.

The Trump administration’s frequent attacks on the press, civil servants, and election administrators do resemble early warning signs Orwell would have recognized—not as proof of totalitarianism, but as a stress test on democratic norms.

Conservative commentators argue that these comparisons are exaggerated partisan attacks that misrepresent Trump’s actions. They point out that some court challenges to administration actions have succeeded, media criticism continues unabated, and political opposition remains robust—none of which would be possible in Orwell’s Oceania. The question becomes whether we’re witnessing isolated, though concerning, actions or a systematic pattern—what Professor Woloch calls the “slippery slope” question.

One opinion piece suggested Trump’s actions resemble the chaotic, rule-breaking fraternity culture of “Animal House” more than the calculated totalitarianism of Orwell’s works—emphasizing bombast and spectacle over systematic control. This view argues that the MAGA movement is more “Blutonian than Orwellian,” driven by emotional appeals and personality rather than systematic thought control.

Where the Comparisons Are Strongest and Weakest

Based on my analysis, the comparisons appear most accurate in several specific areas. The pattern of language manipulation and redefinition—calling restrictions “freedom” and censorship “transparency”—closely mirrors doublespeak. The documented systematic removal of historical content from government sources directly parallels the memory hole concept. The dismissal of senior officials, such as the head of the Bureau of Labor Statistics after an unfavorable jobs report, the wholesale firing of agency inspectors general, and the signaling that neutral experts should conform to political expectations all mirror the Orwellian demand for loyalty. The assumption of control over previously independent agencies, and the pressure on courts to allow the administration’s consolidation of power, have parallels in the Party’s total control. Unleashing ICE agents on the general public and excusing the murder of protesters are chillingly similar to the Thought Police and the “vaporizing” of citizens in Oceania. Perhaps most strikingly, Trump’s 2018 statement “What you’re seeing and what you’re reading is not what’s happening” nearly quotes Orwell’s line: “The party told you to reject the evidence of your eyes and ears”.

The comparisons are most strained when they overstate the current reality by suggesting America has already become Oceania. Democratic institutions that were entirely absent in Oceania still function in America. Unlike 1984’s Winston, Americans retain significant ability to resist and organize. There is no single state monopoly over information. State and local governments and civil society remain vigorous and are often hostile to Trump. Additionally, some comparisons conflate authoritarian-sounding rhetoric with actual totalitarian control, which aren’t equivalent.

Speculation: The Trajectory Question

The pattern of actions I’ve documented—systematic information control, loyalty purges, attacks on institutional independence, and explicit statements about seeking a third term—suggests a consistent direction rather than random actions. If these trends continue unchecked, particularly combined with further erosion of electoral integrity, increased prosecution of political opponents through mechanisms like the “Weaponization Working Group,” greater control over media and information, and weakening of judicial independence, then the slide toward authoritarianism could accelerate. As I am writing this article, Trump continues to promote what he calls the “Board of Peace,” a proposed international organization that is an attempt to create a U.S.-led alternative to the United Nations. The scholar Alfred McCoy notes that Trump appears to be pursuing what Orwell described: a world divided into three regional blocs under strongman leaders, with weakened international institutions.

However, several factors may counter this trajectory. A strong civil society and activist movements continue to organize opposition. Independent state governments push back against federal overreach, and robust legal challenges have blocked numerous executive actions. The free press continues investigative reporting despite attacks. Congressional resistance still exists—even Senator Booker’s 25-hour speech on constitutional abuse entered the Congressional Record as a permanent historical marker.

My speculation is that the most likely outcome is neither complete Orwellian dystopia nor a comfortable return to democratic norms, but rather what political scientists call “competitive authoritarianism” or “illiberal democracy”—where democratic forms persist but are increasingly hollowed out, opposition exists but faces systematic disadvantages, and truth becomes increasingly contested. The key question isn’t whether we’ll replicate 1984 exactly, but whether enough democratic safeguards will hold to prevent sliding further into authoritarianism. One observer standing before a giant banner of Trump’s face in Washington noted that “Orwell’s world isn’t just fiction. It’s a mirror—reflecting what happens when power faces no resistance, when truth bends to loyalty, and when silence becomes the safest response”.

The Bottom Line

The Orwell comparisons aren’t perfect historical analogies, but they’re not baseless partisan rhetoric either. They identify genuine patterns of authoritarian behavior that merit serious attention—the manipulation of language to distort reality, the systematic rewriting of historical narratives, the demand for personal loyalty over institutional integrity, and the rejection of shared factual reality. I am concerned about the increasing use of Nazi-inspired phrases and themes by members of the Trump administration, most recently Kristi Noem’s use of the phrase “one of us-all of you”. While not a formal written Nazi policy, it reflects Nazi practice in dealing with partisan attacks in occupied countries and can only be viewed as a threat of violence against American citizens.

Whether these patterns represent isolated troubling actions or the beginnings of systematic democratic erosion remains the crucial—and still open—question. As Orwell himself noted, he didn’t write to predict the future but to prevent it. The value of these comparisons may ultimately lie not in their precision as historical parallels, but in their power to alert citizens to concerning trends before they become irreversible.

Key Sources

  • Organization of American Historians statements on historical revisionism
  • Politico and Reuters reporting on inspector general firings
  • The Washington Post and Axios on executive order impacts
  • Stanford Professor Alex Woloch’s analysis in The World (https://theworld.org/stories/2017/01/25/people-are-saying-trumps-government-orwellian-what-does-actually-mean)
  • World Press Institute analysis (https://worldpressinstitute.org/the-orwell-effect-how-2025-america-felt-like-198/)
  • Adam Gopnik, “Orwell’s ‘1984’ and Trump’s America,” The New Yorker, Jan. 26, 2017.
  • “Trump’s America: Rethinking 1984 and Brave New World,” Monthly Review, Sept. 7, 2025.
  • “False or misleading statements by Donald Trump,” Wikipedia (overview of documented falsehoods).
  • “Trump’s Efforts to Control Information Echo an Authoritarian Playbook,” The New York Times, Aug. 3, 2025.
  • “Trump’s 7 most authoritarian moves so far,” CNN Politics, Aug. 13, 2025.
  • “The Orwellian echoes in Trump’s push for ‘Americanism’ at the Smithsonian,” The Conversation, Aug. 20, 2025.
  • “Everything Is Content for the ‘Clicktatorship’,” WIRED, Jan. 13, 2026.
  • “’Animal Farm’ Perfectly Describes Life in the Era of Donald Trump,” Observer, May 8, 2017.
  • “Ditch the ‘Animal Farm’ Mentality in Resisting Trump Policies,” YES! Magazine, May 8, 2017.

Full disclosure: I recently bought a hat that says “Make Orwell Fiction Again”.

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.” To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the Conservative Argument in a Nutshell

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” where status and benefits come from claiming oppression rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term’s become a cultural flashpoint. Here’s the thing: real progress requires acknowledging both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.

Supply-Side Economics and Trickle-Down: What Actually Happened?

The Basic Question

You’ve probably heard politicians arguing about tax cuts—some promising they’ll supercharge the economy, others dismissing them as giveaways to the rich. These debates usually involve two terms that get thrown around like political footballs: “supply-side economics” and “trickle-down economics.” But what do these terms actually mean, and more importantly, do they work? After four decades of real-world experiments, we finally have enough data to answer that question.

Understanding Supply-Side Economics

Supply-side economics is a legitimate economic theory that emerged in the 1970s when the U.S. economy was struggling with both high inflation and high unemployment—a combination that traditional economic theories said shouldn’t happen. The core idea is straightforward: economic growth comes from producing more goods and services (the “supply” side), not just from boosting consumer demand.

The theory rests on three main pillars. First, lower taxes—the thinking is that if people and businesses keep more of their money, they’ll work harder, invest more, and create jobs. According to economist Arthur Laffer’s famous curve, there’s supposedly a sweet spot where lower tax rates can actually generate more government revenue because the economy grows so much. Second, less regulation removes government restrictions so businesses can innovate and operate more efficiently. Third, smart monetary policy keeps inflation in check while maintaining enough money in the economy to fuel growth.
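The logic of the Laffer curve can be sketched with a toy model. This is purely illustrative—the assumption that economic activity shrinks linearly as rates rise is hypothetical, not an empirical claim, and real tax bases behave far less neatly:

```python
# Toy Laffer curve: assume the tax base shrinks linearly as the rate rises.
# Illustration of the concept only, not an empirical model.
def revenue(rate):
    base = 1.0 - rate     # hypothetical: taxable activity falls as the rate rises
    return rate * base    # revenue = rate times remaining base

# Search rates from 0% to 100% for the revenue-maximizing point.
rates = [i / 100 for i in range(101)]
peak = max(rates, key=revenue)
```

In this toy version, revenue is zero at both a 0% and a 100% rate and peaks at 50%—which captures the shape of Laffer’s argument, though where the real-world peak sits (and whether actual rates are above or below it) is exactly what the debate is about.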

All of this sounds reasonable in theory. After all, who wouldn’t work harder if they kept more of their paycheck?

The Political Rebranding: Enter “Trickle-Down”

Here’s where economic theory meets political messaging. “Trickle-down economics” isn’t an academic term—it’s essentially a catchphrase, and not a complimentary one. Critics use it to describe supply-side policies when those policies mainly benefit wealthy people and corporations. The idea behind the name: give tax breaks to rich people and big companies, and the benefits will eventually “trickle down” to everyone else through job creation, higher wages, and economic growth.

Here’s the interesting part: no economist actually calls their theory “trickle-down economics.” Even David Stockman, President Reagan’s own budget director, later admitted that “supply-side” was basically a rebranding of “trickle-down” to make tax cuts for the wealthy easier to sell politically. So while they’re not identical concepts, they’re two sides of the same coin.

The Reagan Revolution: Testing the Theory

Ronald Reagan became president in 1981 and implemented the biggest supply-side experiment in U.S. history. He slashed the top tax rate from 70% down to 50%, and eventually to just 28%, arguing this would unleash economic growth that would lift all boats.

The results were genuinely mixed. On one hand, the economy created about 20 million jobs during Reagan’s presidency, unemployment fell from 7.6% to 5.5%, and the economy grew by 26% over eight years. Those aren’t small achievements.

But the picture gets more complicated when you look deeper. The tax cuts didn’t pay for themselves as promised—they reduced government revenue by about 9% initially. Reagan had to backtrack and raise taxes multiple times in 1982, 1983, 1984, and 1987 to address the mounting deficit problem. Income inequality increased significantly during this period, and surprisingly, the poverty rate at the end of Reagan’s term was essentially the same as when he started. Perhaps most telling, government debt more than doubled as a percentage of the economy.

There’s another wrinkle worth mentioning: much of the economic recovery happened because Federal Reserve Chairman Paul Volcker had already broken the back of inflation through tight monetary policy before Reagan’s tax cuts took effect. Disentangling how much credit Reagan’s policies deserve versus Volcker’s groundwork is genuinely difficult.

The Pattern Repeats

The story didn’t end with Reagan. George W. Bush enacted major tax cuts in 2001 and 2003, especially benefiting wealthy Americans. The result? Economic growth remained sluggish, deficits ballooned, and income inequality continued its upward march.

Then there’s Bill Clinton—the plot twist in this story. In 1993, Clinton actually raised taxes on the wealthy, pushing the top rate from 31% back up to 39.6%. Conservative economists predicted economic disaster. Instead, the economy boomed with what was then the longest sustained growth period in U.S. history, creating 22.7 million jobs. Even more remarkably, the government ran a budget surplus for the first time in decades.

Donald Trump’s 2017 tax cuts, focused heavily on corporations, showed minimal wage growth for workers while generating significant stock buybacks that primarily benefited shareholders—and yes, larger deficits. Trump’s subsequent economic policies in his second term have been characterized by such volatility that reasonable long-term assessments remain difficult.

The Kansas Experiment: A Modern Test Case

At the state level, Kansas Governor Sam Brownback implemented one of the boldest modern experiments in supply-side policy between 2012 and 2017, dramatically slashing income taxes especially for businesses. Proponents called it a “real live experiment” that would demonstrate supply-side principles in action.

Instead of unleashing growth, Kansas faced severe budget shortfalls that forced cuts to education and infrastructure. Economic growth actually lagged behind neighboring states that didn’t implement such aggressive cuts, and the state legislature eventually reversed many of the tax reductions. This case has become a frequently cited cautionary tale for critics of supply-side policies.

What Does Half a Century of Data Show?

After 50 years of real-world experiments, researchers finally have enough data to move beyond political rhetoric. A comprehensive study analyzed tax policy changes across 18 developed countries over five decades, looking at what actually happened after major tax cuts for the wealthy.

The findings are remarkably consistent. Tax cuts for the rich reliably increase income inequality—no surprise there. But they show no significant effect on overall economic growth rates and no significant effect on unemployment. Perhaps most damaging to the theory, they don’t “pay for themselves” through increased growth. At best, about one-third of lost revenue gets recovered through expanded economic activity.

In simpler terms: when you cut taxes for wealthy people, wealthy people get wealthier. The promised broader benefits largely fail to materialize. The 2022 World Inequality Report reinforced these conclusions, finding that the world’s richest 10% continue capturing the vast majority of all economic gains, while the bottom half of the population holds just 2% of all wealth.

Why the Theory Doesn’t Match Reality

When you think about it logically, the disconnect makes sense. If you give a tax cut to someone who’s already wealthy, they’ll probably save or invest most of it—they were already buying what they wanted and needed. Their daily spending habits don’t change much. But if you give money to someone who’s struggling to pay bills or afford necessities, they’ll spend it immediately, directly stimulating economic activity.

Economists call this concept “marginal propensity to consume,” and it explains why giving tax breaks to working and middle-class people actually does more to boost the economy than supply-side cuts focused on the wealthy. A dollar in the hands of someone who needs to spend it has more immediate economic impact than a dollar added to an already-substantial investment portfolio.
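The arithmetic behind this can be sketched with the textbook Keynesian spending multiplier. The MPC values below are hypothetical round numbers for illustration, not figures from the studies cited above:

```python
# Simple Keynesian spending multiplier: each dollar received is partly
# re-spent, and that spending becomes someone else's income, and so on.
# Total activity per dollar = 1 / (1 - MPC), where MPC is the
# marginal propensity to consume. Numbers below are hypothetical.
def multiplier(mpc):
    return 1 / (1 - mpc)

struggling_household_mpc = 0.9   # spends 90 cents of each extra dollar
wealthy_household_mpc = 0.3      # spends 30 cents, saves or invests the rest

# The struggling household's dollar circulates roughly 10x (1/0.1),
# versus about 1.4x (1/0.7) for the wealthy household's.
```

Under these toy assumptions, a dollar directed at someone who will spend it generates several times the immediate economic activity of the same dollar directed at someone who will mostly save it—which is the core of the argument against expecting broad stimulus from tax cuts at the top.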

The Bottom Line

After 40-plus years of repeated experiments, the pattern is clear. Supply-side policies and trickle-down approaches consistently increase deficits, widen inequality, and fail to significantly boost overall economic growth or create more jobs than alternative policies. Meanwhile, periods with higher taxes on the wealthy, like the Clinton years, saw strong growth, robust job creation, and balanced budgets.

The Nuance Worth Keeping

None of this means all tax cuts are bad or that high taxes are always good—economics is rarely that simple. The critical questions are: who receives the tax cuts, and what outcomes do you realistically expect? Targeted tax cuts for working families, small businesses, or specific industries facing genuine challenges can serve as effective policy tools. Child tax credits, research and development incentives, or relief for struggling sectors might accomplish specific goals.

But the evidence accumulated over four decades is clear: broad tax cuts focused primarily on the wealthy and large corporations don’t deliver the promised economic benefits for everyone else. The benefits don’t trickle down in any meaningful way.

You’ll keep hearing these arguments for years to come. Politicians will continue promising that tax cuts for businesses and the wealthy will boost the entire economy. Now you know what the actual evidence shows, and you can judge those promises accordingly.



The Sugar Act of 1764: The Tax Cut That Sparked a Revolution

Imagine a time when people rose up in protest because a tax was lowered. Welcome to the world of the Sugar Act.

The Sugar Act of 1764 stands as one of the most ironic moments in the history of taxation. Here Britain was actually lowering a tax, and yet colonists reacted with a fury that would help spark a revolution. To understand this paradox, we must recognize that the new act represented something far more threatening than any previous attempt by Britain to regulate its American colonies.

The Old System: Benign Neglect

For decades before 1764, Britain had maintained what historians call “salutary neglect” toward its American colonies. The Molasses Act of 1733 had imposed a steep duty of six pence per gallon on foreign molasses imported into the colonies. On paper, this seemed like a significant burden for the rum-distilling industry, which depended heavily on cheap molasses from French and Spanish Caribbean islands. In practice, though, the tax was rarely collected. Colonial merchants either bribed customs officials or simply smuggled the molasses past them. The British government essentially looked the other way, and everyone profited.

This informal arrangement worked because Britain’s primary interest in the colonies was commercial, not fiscal. The Navigation Acts required colonists to ship certain goods only to Britain and to buy manufactured goods from British merchants, which enriched British traders and manufacturers without requiring aggressive tax collection in America. As long as this system funneled wealth toward London, Parliament didn’t care much about collecting relatively small customs duties across the Atlantic.

Everything Changed in 1763

The Seven Years’ War (which Americans call the French and Indian War) changed this comfortable arrangement entirely. Britain won decisively, driving France out of North America and gaining vast new territories. But victory came with a staggering price tag. Britain’s national debt had nearly doubled to £130 million, and annual interest payments alone consumed half the government’s budget. Meanwhile, Britain now needed to maintain 10,000 troops in North America to defend its expanded empire and manage relations with Native American tribes.

Prime Minister George Grenville faced a political problem. British taxpayers, already heavily burdened, were in no mood for additional taxes. The logic seemed obvious: since the colonies had benefited from the war’s outcome and still required military protection, they should help pay for their own defense. Americans paid far lower taxes than their counterparts in Britain—by some estimates, British residents paid 26 times more per capita in taxes than colonists did.

What the Act Actually Did

The Sugar Act (officially the American Revenue Act of 1764) approached colonial taxation differently than anything before it. First, it cut the duty on foreign molasses from six pence to three pence per gallon—a 50% reduction. Grenville calculated, reasonably, that merchants might actually pay a three-pence duty rather than risk getting caught smuggling, whereas the six-pence duty had been so high it encouraged universal evasion.

But the Act did far more than adjust molasses duties. It added or increased duties on foreign textiles, coffee, indigo, and wine imported into the colonies. It also tightened regulations around the colonial lumber trade and banned the import of foreign rum entirely. Most significantly, the Act included elaborate provisions designed to strictly enforce these duties for the first time.

The enforcement mechanisms represented the real revolution in British policy. Ship captains now had to post bonds before loading cargo and had to maintain detailed written cargo lists. Naval patrols increased dramatically. Smugglers faced having their ships and cargo seized.

Significantly, the burden of proof was shifted to the accused, who were required to prove their innocence—a reversal of traditional British justice. Most controversially, accused smugglers would be tried in vice-admiralty courts, which had no juries and whose judges received a cut of any fines levied.

The Paradox of the Lower Tax

So why did colonists react so angrily to a tax cut? The answer reveals the fundamental shift in the British-American relationship that the Sugar Act represented.

First, the issue wasn’t the tax rate. It was the certainty of collection. A six-pence tax that no one paid was infinitely preferable to a three-pence tax rigorously enforced. New England’s rum distilling industry, which employed thousands of distillery workers and sailors, depended on cheap molasses from the French West Indies. Even at three pence per gallon, the tax significantly increased operating costs. Many merchants calculated they couldn’t remain profitable if they had to pay it.

Second, and more importantly, colonists recognized that the Act’s purpose had changed relationships. Previous trade regulations, even if they involved taxes, were ostensibly about regulating commerce within the empire. The Sugar Act openly stated its purpose was raising revenue—the preamble declared it was “just and necessary that a revenue be raised” in America. This might seem like a technical distinction, but to colonists it mattered enormously. British constitutional theory held that subjects could only be taxed by their own elected representatives. Colonists elected representatives to their own assemblies but sent no representatives to Parliament. Trade regulations fell under Parliament’s legitimate authority to govern imperial commerce, but taxation for revenue was something else entirely.

Third, the enforcement mechanisms offended colonial sensibilities about justice and traditional British rights. The vice-admiralty courts denied jury trials, which colonists viewed as a fundamental right of British subjects. Having to prove your innocence rather than being presumed innocent violated another core principle. Customs officials and judges profiting from convictions created obvious incentives for abuse.

Implementation and Colonial Response

The Act took effect in September 1764, and Grenville paired it with an aggressive enforcement campaign. The Royal Navy assigned 27 ships to patrol American waters. Britain appointed new customs officials and instructed them to enforce the law strictly rather than accept bribes. The admiralty court in Halifax, Nova Scotia became particularly notorious: colonists had to travel hundreds of miles to defend themselves in a court with no jury, before a judge whose income came from convictions.

Colonists responded immediately. Boston merchants drafted a protest arguing that the act would devastate their trade. They explained that New England’s economy depended on a complex triangular trade: they sold lumber and food to the Caribbean in exchange for molasses, which they distilled into rum, which they sold to Africa for slaves, who were sold to Caribbean plantations for molasses, and the cycle repeated. Taxing molasses would break this chain and impoverish the region.

But the economic arguments quickly evolved into constitutional ones. Lawyer James Otis argued that “taxation without representation is tyranny”—a phrase that would echo through the coming decade. Colonial assemblies began passing resolutions asserting their exclusive right to tax their own constituents. They didn’t deny Parliament’s authority to regulate trade, but they drew a clear line: revenue taxation required representation.

The protests went beyond rhetoric. Colonial merchants organized boycotts of British manufactured goods. Women’s groups pledged to wear homespun cloth rather than buy British textiles. These boycotts caused enough economic pain in Britain that London merchants began lobbying Parliament for relief.

The Road to Revolution

The Sugar Act’s significance extends far beyond its immediate economic impact. It established precedents and patterns that would define the next decade of imperial crisis.

Most fundamentally, it shattered the comfortable arrangement of salutary neglect. Once Britain demonstrated it intended to actively govern and tax the colonies, the relationship could never return to its previous informality. The colonists’ constitutional objections—no taxation without representation, right to jury trials, presumption of innocence—would be repeated with increasing urgency as Parliament passed the Stamp Act (1765), Townshend Acts (1767), and Tea Act (1773).

The Sugar Act also revealed the practical difficulties of governing an empire across 3,000 miles of ocean. The vice-admiralty courts became symbols of distant, unaccountable power. When colonists couldn’t get satisfaction through established legal channels, they increasingly turned to extralegal methods, including committees of correspondence, non-importation agreements, and eventually armed resistance.

Perhaps most importantly, the Sugar Act forced colonists to articulate a political theory that ultimately proved incompatible with continued membership in the British Empire. Once they agreed to the principle that they could only be taxed by their own elected representatives, and that Parliament’s authority over them was limited to trade regulation, the logic led inexorably toward independence. Britain couldn’t accept colonial assemblies as co-equal governing bodies since Parliament claimed supreme authority over all British subjects. The colonists couldn’t accept taxation without representation since they claimed the rights of freeborn Englishmen. These positions couldn’t be reconciled.

The Sugar Act of 1764 represents the point where the British Empire’s century-long success in North America began to unravel. By trying to make the colonies pay a modest share of imperial costs through what seemed like reasonable means, Britain inadvertently set in motion forces that would break the empire apart just twelve years later.

Sources

Mount Vernon Digital Encyclopedia – Sugar Act https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/sugar-act/ Provides overview of the Act’s provisions, economic context, and relationship to British debt from the Seven Years’ War. Includes information on tax burden comparisons between Britain and the colonies.

Britannica – Sugar Act https://www.britannica.com/event/Sugar-Act Covers the specific provisions of the Act, enforcement mechanisms, vice-admiralty courts, and the shift from the Molasses Act of 1733. Useful for technical details of the legislation.

History.com – Sugar Act https://www.history.com/topics/american-revolution/sugar-act Discusses colonial constitutional objections, the “taxation without representation” argument, and the enforcement provisions including burden of proof reversal and jury trial denial.

American Battlefield Trust – Sugar Act https://www.battlefields.org/learn/articles/sugar-act Details colonial response including boycotts, James Otis’s arguments, and the triangular trade system that the Act disrupted.

Additional Recommended Sources

Library of Congress – The Sugar Act https://www.loc.gov/collections/continental-congress-and-constitutional-convention-broadsides/articles-and-essays/continental-congress-broadsides/broadsides-related-to-the-sugar-act/ Primary source collection including contemporary colonial broadsides and protests against the Act.

National Archives – The Sugar Act (Primary Source Text) https://founders.archives.gov/about/Sugar-Act The actual text of the American Revenue Act of 1764, useful for verifying specific provisions and language.

Yale Law School – Avalon Project: Resolutions of the Continental Congress (October 19, 1765) https://avalon.law.yale.edu/18th_century/resolu65.asp Colonial responses to the Sugar and Stamp Acts, showing how the arguments evolved.

Massachusetts Historical Society – James Otis’s Rights of the British Colonies Asserted and Proved (1764) https://www.masshist.org/digitalhistory/revolution/taxation-without-representation Primary source for the “taxation without representation” argument that emerged from Sugar Act opposition.

Colonial Williamsburg Foundation – Sugar Act of 1764 https://www.colonialwilliamsburg.org/learn/deep-dives/sugar-act-1764/ Discusses economic impact on colonial merchants and the rum distilling industry.
