The Grumpy Doc

Grumpy opinions about everything.

The Fatal Meeting: When Hamilton and Burr Settled Fifteen Years of Rivalry with Pistols

The story of the Hamilton-Burr duel has all the elements of a Greek tragedy: brilliant men, political ambition, an unforgiving honor culture, and an ending that destroyed victor and vanquished alike. When Aaron Burr shot Alexander Hamilton on the morning of July 11, 1804, he didn’t just kill one of America’s founding architects—he also ended his own political career and helped doom the entire Federalist Party to irrelevance. Let’s rewind the clock more than a decade to try to understand how these two gifted lawyers and Revolutionary War veterans ended up facing each other with loaded pistols.

The Long Road to Weehawken

Hamilton and Burr moved in the same elite New York political circles from the 1790s onward, but they had remarkably different temperaments and political beliefs. Hamilton was ideological, prolific, and combative—often too much so for his own good. Burr was pragmatic, opaque, self-serving, and famously hard to pin down on principle. They distrusted each other deeply.

Their rivalry stretched back to 1791, when Burr defeated Philip Schuyler for a U.S. Senate seat representing New York. This wasn’t just any political defeat for Hamilton—Schuyler was his father-in-law and a crucial Federalist ally on whom Hamilton had counted to support his ambitious financial programs. Hamilton, who was serving in George Washington’s cabinet as Treasury Secretary, never forgave Burr for this loss. In correspondence from June 1804, Hamilton himself referenced “a course of fifteen years competition” between the two men.  

Their philosophical differences ran deep. Hamilton was an ideological Federalist who dreamed of transforming the United States into a modern economic power rivaling European empires through strong central government, industrial development, and military strength. Burr, by contrast, approached politics more pragmatically—he saw it as a vehicle for advancing his own interests and those of his allies rather than as a way to implement sweeping political visions. As Burr himself allegedly said, politics were nothing more than “fun and honor and profit.” Hamilton viewed Burr as fundamentally dangerous precisely because of this lack of fixed ideological principles, writing in 1792 that he considered it his “religious duty to keep this man from office.”

The election of 1800 brought their animosity to a boiling point. Due to a quirk in the original Constitution’s electoral system, Thomas Jefferson and his running mate Aaron Burr tied in the Electoral College with 73 votes each, allowing the Federalists to briefly consider elevating Burr to the presidency.  The decision went to the House of Representatives, and Hamilton—despite despising Jefferson’s Democratic-Republican politics—campaigned hard to ensure Jefferson won the presidency rather than Burr. Hamilton argued that Jefferson, however wrong in policy, had convictions, whereas Burr had none.  In the end, Jefferson gained the presidency and Burr became Vice President, but their relationship was never collegial and Burr was excluded from any meaningful participation in Jefferson’s administration.

By 1804, it was clear Jefferson would not consider Burr for a second term as Vice President. Desperate to salvage his political career, Burr made a surprising move: he sought the Federalist nomination for governor of New York, switching from his Democratic-Republican affiliation. It was a strange gambit—essentially betting that his political enemies might support him if it served their interests. Hamilton, predictably, worked vigorously to block Burr’s ambitions yet again. Although Hamilton’s opposition wasn’t the only factor, Burr lost badly to Morgan Lewis, the Democratic-Republican candidate, in April 1804.

The Cooper Letter and the Challenge

The immediate trigger for the duel came from a relatively minor slight in the context of their long feud. In February 1804, Dr. Charles Cooper attended a dinner party where Hamilton spoke forcefully against Burr’s candidacy. Cooper later wrote to Philip Schuyler describing Hamilton’s comments, noting that Hamilton had called Burr “a dangerous man” and referenced an even “more despicable opinion” of him. This letter was published in the Albany Register in April, after Burr’s electoral defeat.

When the newspaper reached Burr, he was already politically ruined—still Vice President of the United States, but with no prospects for future office. He demanded that Hamilton acknowledge or deny the statements attributed to him. What followed was a formal exchange of letters between the two men and their representatives that lasted through June. Hamilton refused to give Burr the straightforward denial he sought, explaining that he couldn’t reasonably be expected to account for everything he might have said about a political opponent during fifteen years of competition. Burr, seeing his honor impugned and his options exhausted, invoked the code of honor and issued a formal challenge to duel.

Hamilton found himself in an impossible position. If he disavowed remarks he believed to be substantially true, he would sacrifice his integrity; if he refused to duel, the result would be much the same—his political career would effectively end. Hamilton had personal and moral objections to dueling: his eldest son Philip had died in a duel just three years earlier, at the same Weehawken location where Hamilton and Burr would meet. Yet Hamilton calculated that maintaining his political influence required him to conform to the codes of honor that governed gentlemen’s behavior in early America.

Dawn at Weehawken

At 5:00 on the morning of July 11, 1804, the men departed Manhattan from separate docks. They were each rowed across the Hudson River to the Heights of Weehawken, New Jersey—a popular dueling ground where at least 18 known duels took place between 1700 and 1845. They chose New Jersey because, although dueling had been outlawed in both states, New Jersey’s penalties were less severe.

Burr arrived first around 6:30 AM, with Hamilton landing about thirty minutes later. Each man was accompanied by his “second”—an assistant responsible for ensuring the duel followed proper protocols. Hamilton brought Nathaniel Pendleton, a Revolutionary War veteran and Georgia district court judge, while Burr’s second was William Van Ness, a New York federal judge. Hamilton also brought Dr. David Hosack, a Columbia College professor of medicine and botany, in case medical attention proved necessary.

Shortly after 7 a.m., the seconds measured out ten paces, loaded the .56‑caliber pistols, and explained the firing rules before Hamilton and Burr took their positions. What exactly happened next remains one of history’s enduring mysteries. The seconds gave conflicting accounts, and historians still debate the sequence and meaning of events.

In a written statement before the duel, Hamilton expressed religious and moral objections to dueling, worry for his family and creditors, and professed no personal hatred of Burr, yet concluded that honor and future public usefulness compelled him to accept. By some accounts, Hamilton had also written to confidants indicating his intention to “throw away my shot”—essentially to deliberately miss Burr, satisfying the requirements of honor without attempting to kill his opponent. Burr, by contrast, appears to have aimed directly at Hamilton.

Some accounts suggest Hamilton fired first, with his shot hitting a tree branch above and behind Burr’s head. Other versions claim Burr shot first. There’s even a theory that Hamilton’s pistol had a hair trigger that caused an accidental discharge after Burr wounded him.

What’s undisputed is the outcome: Burr’s shot struck Hamilton in the lower abdomen, with the bullet lodging near his spine. Hamilton fell, and Burr reportedly started toward his fallen opponent before Van Ness held him back, worried about the legal consequences of lingering at the scene. The two parties crossed back to Manhattan in their respective boats, with Hamilton taken to the home of William Bayard Jr. in what is now Greenwich Village.

Hamilton survived long enough to say goodbye to his wife Eliza and their children. He died at 2 PM on July 12, 1804, approximately 31 hours after being shot.

Political Aftershocks

The nation was outraged. While duels were relatively common in early America, they rarely resulted in death, and the killing of someone as prominent as Alexander Hamilton sparked widespread condemnation. The political consequences proved catastrophic for everyone involved—and reshaped American politics for the next two decades.

Hamilton’s death turned him into a Federalist martyr. Even many who had disliked his arrogance now praised his intellect, service, and sacrifice. His economic vision, already embedded in American institutions, gained a kind of posthumous authority.

For Aaron Burr, the duel destroyed him politically and socially. Murder charges were filed against him in both New York and New Jersey, though neither reached trial—a grand jury in Bergen County, New Jersey indicted him for murder in November 1804, but the New Jersey Supreme Court quashed the indictment. Nevertheless, Burr fled to St. Simons Island, Georgia, staying at the plantation of Pierce Butler, before returning to Washington to complete his term as Vice President.

Rather than restoring his reputation as he’d hoped, the duel made Burr a pariah. He would never hold elected office again. His subsequent attempt to regain power through what historians call the “Burr Conspiracy”—an alleged plan to create an independent nation along the Mississippi River by separating territories from the United States and Spain—led to a treason trial in 1807. Chief Justice John Marshall presided and Burr was ultimately acquitted, but the trial further cemented Burr’s reputation as a dangerous schemer. He spent his later years quietly practicing law in New York.

For the Federalist Party, Hamilton’s death proved even more devastating than Burr’s personal ruin. Hamilton had been the party’s intellectual architect and most effective leader. At the time of his death, the Federalists were attempting a comeback after their national defeat in the 1800 election. Without Hamilton’s energy, strategic thinking, and ability to articulate a compelling vision for the country, the Federalists lost direction. As one historian put it, “The Federalists would be unable to find another leader as forceful and energetic as Hamilton had been, and their movement would slowly suffocate before finally petering out in the early 1820s.” The party’s decline ended what historians consider the first round of partisan struggles in American history.

An interesting footnote: while many Federalists wanted to portray Hamilton as a political martyr, Federalist clergy broke with the party line to condemn dueling itself as a violation of the sixth commandment. These ministers used Hamilton’s death as an opportunity to wage a moral crusade against the practice of dueling, helping to accelerate its decline in American culture—particularly in the northern states where it was already losing favor.

The duel produced a triple tragedy: Hamilton dead at age 47 (or 49—his birth year remains disputed), Burr politically destroyed despite being acquitted of murder charges, and the Federalist Party fatally weakened at a critical moment in American political development.

The Hamilton–Burr duel sits at the intersection of politics, personality, and culture. It reminds us that the early republic was not a calm, rational experiment run by marble statues but a volatile environment shaped by ego, fear, and ambition. Institutions were young, norms were fragile, and reputations were all-important. What began as fifteen years of professional rivalry and personal enmity ended with two brilliant men eliminating each other from the political stage, neither achieving what he had hoped for through that fatal meeting on the heights of Weehawken.

Sources

Encyclopedia Britannica “Burr-Hamilton duel | Summary, Background, & Facts” https://www.britannica.com/event/Burr-Hamilton-duel

History.com “Aaron Burr slays Alexander Hamilton in duel” https://www.history.com/this-day-in-history/july-11/burr-slays-hamilton-in-duel

Library of Congress “Today in History – July 11” https://www.loc.gov/item/today-in-history/july-11

National Constitution Center “The Burr vs. Hamilton duel happened on this day” https://constitutioncenter.org/blog/burr-vs-hamilton-behind-the-ultimate-political-feud

National Park Service “Hamilton-Burr Duel” https://www.nps.gov/articles/000/hamilton-burr-duel.htm

PBS American Experience “Alexander Hamilton and Aaron Burr’s Duel” https://www.pbs.org/wgbh/americanexperience/features/duel-alexander-hamilton-and-aaron-burrs-duel/

The Gospel Coalition “American Prophets: Federalist Clergy’s Response to the Hamilton–Burr Duel of 1804” https://www.thegospelcoalition.org/themelios/article/american-prophets-federalist-clergys-response-to-the-hamilton-burr-duel-of-1804/

Wikipedia “Burr–Hamilton duel” https://en.wikipedia.org/wiki/Burr–Hamilton_duel

World History Encyclopedia “Hamilton-Burr Duel” https://www.worldhistory.org/article/2548/hamilton-burr-duel/

For more information about the history of dueling in early America see my earlier post: Pistols at Dawn, The Rise and Fall of the Code Duello.

Images generated by author using ChatGPT.

VO₂ Max Explained: The Fitness Metric That Predicts Health and Longevity

If you’ve ever wondered what separates elite endurance athletes from weekend warriors—or why your friend can cruise up hills while you’re gasping for air—the answer often comes down to a vital sign you’ve probably never heard of: VO2 max. Think of it as your cardiovascular system’s horsepower rating, a number that tells you how efficiently your body can use oxygen during intense exercise.

What VO2 Max Actually Means

VO2 max stands for maximal oxygen consumption; it measures the maximum amount of oxygen your body can take in, transport, and use during exercise. Scientists express it in milliliters of oxygen per kilogram of body weight per minute (ml/kg/min). When you’re working out at your absolute limit—say, sprinting up a hill until you simply can’t go any faster—your muscles are burning through oxygen to produce energy. VO2 max represents the ceiling of that process, the point where your body has maxed out its oxygen delivery system and can’t use any more oxygen even if you try to push harder.

An average sedentary man might have a VO2 max around 30-40 ml/kg/min, while an average woman might measure 25-30 ml/kg/min. Elite endurance athletes, however, occupy an entirely different universe. Cross-country skiers and distance runners can reach values of 70-85 ml/kg/min or even higher. The Norwegian cyclist Oskar Svendsen reportedly recorded a VO2 max of 97.5 ml/kg/min, which probably marks the upper reaches of human cardiovascular capacity.

The rest of us are also affected by VO2 max. In later life, it is closely tied to our everyday activities: there’s a minimum aerobic capacity required for independent living—walking briskly, climbing stairs, carrying groceries. As VO2 max declines toward that functional threshold, small losses can translate into disproportionate declines in independence. Conversely, modest improvements can produce meaningful gains in stamina, balance, and confidence.

The Gold Standard of Measurement

The most accurate way to measure VO2 max involves what’s called a graded exercise test, typically performed in a lab or clinical setting. You’ll hop on a treadmill or stationary bike while wearing a mask connected to a metabolic cart—essentially a sophisticated machine that analyzes every breath you take. The test starts easy but gets progressively harder every few minutes. The technician increases either the speed, incline, or resistance while the equipment measures exactly how much oxygen you’re consuming and how much carbon dioxide you’re producing.

You keep going until you reach exhaustion—the point where you literally cannot continue despite maximum effort. The highest oxygen consumption rate recorded during this test is your VO2 max. It’s not a particularly pleasant experience, but it’s incredibly accurate. The test also provides valuable data about your anaerobic threshold, the point where your body starts relying more heavily on systems that don’t require oxygen and where lactic acid begins accumulating in your muscles.

For those of us without access to exercise labs, there are several field tests that estimate VO2 max reasonably well. The Cooper test, developed by Dr. Kenneth Cooper in the 1960s, involves running as far as you can in 12 minutes on a track (that wouldn’t be too far for me). The distance you cover correlates with VO2 max through an established formula: VO2 max (ml/kg/min) = (distance covered in meters − 504.9) / 44.73. Age- and gender-normed values can be found on a number of fitness websites. Many fitness watches and apps now offer VO2 max estimates based on heart rate data during runs, though these are less precise than laboratory testing.
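If you’d rather not do the arithmetic by hand, here’s a minimal Python sketch of the Cooper formula (the function name and the 2,400-meter example are mine):

```python
def cooper_vo2max(distance_m: float) -> float:
    """Estimate VO2 max (ml/kg/min) from the distance covered
    in a 12-minute all-out run, using the Cooper test formula."""
    return (distance_m - 504.9) / 44.73

# Example: covering 2,400 meters in 12 minutes
print(f"Estimated VO2 max: {cooper_vo2max(2400):.1f} ml/kg/min")  # ~42.4
```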

Why This Number Matters

VO2 max serves as one of our strongest predictors of cardiovascular health and longevity. Research published in major medical journals has consistently shown that higher VO2 max values correlate with lower risks of heart disease, diabetes, and all-cause mortality. A 2018 study in JAMA Network Open that followed over 122,000 patients found that cardiorespiratory fitness (measured by VO2 max) was a better predictor of mortality than traditional risk factors like hypertension, diabetes, or even smoking.

The relationship is striking: for every 3.5 ml/kg/min increase in VO2 max (one metabolic equivalent, or MET), mortality risk drops by about 13 percent. People in the lowest fitness category (those with the poorest VO2 max scores) have death rates two to three times higher than those in the highest fitness category, even when controlling for other health factors.
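To make that arithmetic concrete, here’s a back-of-the-envelope Python sketch. Compounding the 13 percent reduction multiplicatively across METs is my simplifying assumption—the research reports a population-level association, not a personal prediction:

```python
def mortality_risk_multiplier(vo2_gain: float, drop_per_met: float = 0.13) -> float:
    """Rough illustration: treat each 3.5 ml/kg/min (1 MET) gain in VO2 max
    as a ~13% relative reduction in mortality risk, compounded.
    A back-of-the-envelope model, not a clinical prediction."""
    mets_gained = vo2_gain / 3.5
    return (1.0 - drop_per_met) ** mets_gained

# Example: a 7 ml/kg/min improvement (about 2 METs)
print(f"Relative mortality risk: {mortality_risk_multiplier(7.0):.2f}")  # ~0.76
```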

Beyond mortality statistics, VO2 max influences your daily quality of life. A higher VO2 max means your heart doesn’t have to work as hard during routine activities. Climbing stairs, carrying groceries, playing with kids or grandkids—all these activities demand less relative effort when your cardiovascular system operates efficiently. Your body becomes better at delivering oxygen-rich blood to working muscles and clearing away metabolic waste products, which means you fatigue less easily and recover more quickly.

The Path to Improvement

The encouraging news is that VO2 max responds remarkably well to training, especially if you’re starting from a sedentary baseline. You can’t completely escape genetics—some people are simply born with larger hearts, more efficient lungs, or a higher percentage of slow-twitch muscle fibers—but training can typically improve VO2 max by 15-30 percent in previously untrained people.

The most effective approach combines several training methods. High-intensity interval training (HIIT) has emerged as a particularly powerful tool for boosting VO2 max. These workouts involve short bursts of near-maximal effort followed by recovery periods. A classic protocol might involve running hard for four minutes at about 90-95 percent of your maximum heart rate, then recovering with light jogging for three minutes, repeated four or five times. Studies show that just two or three HIIT sessions per week can produce significant improvements in VO2 max within eight to twelve weeks.
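Here’s a small Python sketch of what that “4x4” session looks like on paper. The 220-minus-age estimate of maximum heart rate is a common rule of thumb I’m assuming here, not part of the protocol itself:

```python
def four_by_four_session(age: int, rounds: int = 4) -> None:
    """Print a classic 4x4 HIIT session: 4 minutes hard at 90-95% of
    estimated max heart rate, 3 minutes easy recovery, repeated."""
    max_hr = 220 - age  # rough rule-of-thumb estimate of max heart rate
    low, high = round(0.90 * max_hr), round(0.95 * max_hr)
    for i in range(1, rounds + 1):
        print(f"Round {i}: 4 min hard ({low}-{high} bpm), 3 min easy jog")

four_by_four_session(age=50)  # prints a target zone of 153-162 bpm
```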

Longer, steady-state aerobic exercise also plays a crucial role. These sessions—think longer runs at a conversational pace—improve your cardiovascular system’s efficiency and build the capillary networks that deliver oxygen to muscles. The optimal training program typically includes both high-intensity intervals and longer moderate-intensity sessions, along with adequate recovery time.

Interestingly, resistance training can indirectly support VO2 max improvements as well. While lifting weights won’t directly boost your oxygen consumption capacity the way running does, it helps maintain lean muscle mass, improves movement efficiency, and can enhance your ability to perform high-intensity cardiovascular work.

This high intensity training is all well and good for young, relatively healthy people. But what about older folks, particularly those with underlying medical problems?

The encouraging news: VO2 max responds to training well into our 70s, 80s, and beyond. The key approaches involve the same elements, tailored to age and medical history.

Moderate-intensity aerobic exercise (brisk walking, cycling, swimming) performed most days of the week is the primary element. Individually adjusted interval training, including carefully supervised higher-intensity intervals, has shown impressive VO2 max gains even in older populations. Strength training is beneficial for older folks as well, and as an added bonus it helps maintain and even improve bone density. A personal trainer can help design a fitness program that maximizes improvement while minimizing the likelihood of injury.

Stop any exercise immediately if you experience chest pain, dizziness, or extreme shortness of breath. Remember that consistency matters more than intensity alone—and, most importantly, never start an exercise program without checking with your doctor first.

The Inevitable Decline

Here’s the less cheerful part: VO2 max naturally declines with age, typically dropping about 10 percent per decade after age 30 in sedentary people. This decline accelerates after age 70. However—and this is crucial—regular exercise dramatically slows this process. Senior athletes who maintain consistent training can preserve VO2 max values that rival or exceed those of sedentary people decades younger. A fit 60-year-old can easily have a higher VO2 max than an inactive 40-year-old.
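To see what the 10-percent-per-decade rule implies, here’s a quick Python projection. Compounding by decade is my simplification of the rule of thumb, and real-world decline varies widely with training:

```python
def projected_vo2max(current: float, age_now: int, age_then: int,
                     decline_per_decade: float = 0.10) -> float:
    """Project VO2 max forward using the ~10%-per-decade sedentary
    decline described above. Regular training slows this considerably."""
    decades = (age_then - age_now) / 10
    return current * (1.0 - decline_per_decade) ** decades

# A sedentary 40 ml/kg/min at age 30, projected to age 60
print(f"{projected_vo2max(40.0, 30, 60):.1f} ml/kg/min")  # ~29.2
```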

The decline happens for several reasons: maximum heart rate decreases, cardiac output drops, muscle mass decreases, and the body becomes less efficient at extracting oxygen from blood. But none of these changes are inevitable consequences of aging alone—they’re heavily influenced by activity levels.

Putting It in Perspective

While VO2 max provides valuable information about cardiovascular fitness, it’s worth remembering that it’s just one metric among many. You don’t need the VO2 max of an Olympic athlete to be healthy and enjoy an active life (thankfully). A moderate VO2 max maintained consistently into your later years will serve you far better than a high value in your twenties followed by decades of inactivity.

The real value of understanding VO2 max lies in what it represents: your body’s fundamental capacity to generate energy and support movement. When you work to improve this capacity through regular cardiovascular exercise, you’re investing in both your current quality of life and your long-term health prospects.  Every little bit helps—so put down the remote, get up off the couch and start walking.  You’ll be glad you did.


Sources:

  • American College of Sports Medicine on VO2 max testing: https://www.acsm.org/
  • Mayo Clinic on cardiorespiratory fitness: https://www.mayoclinic.org/
  • National Institutes of Health research on fitness and mortality: https://www.nih.gov/
  • JAMA Network 2018 study on cardiorespiratory fitness and mortality: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2707428

Image generated by author using ChatGPT.

When They Knew: How the Fossil Fuel Industry Buried Its Own Climate Science

The story begins not with climate deniers casting doubt on new science, but with something far more troubling: companies conducting rigorous research, understanding exactly what their products would do to the planet, and then spending decades lying to the public. They treated science as an internal planning tool and then deployed public relations, front groups, and “manufactured doubt” to delay regulation and protect profits.

The Oil Industry’s Own Scientists Saw It Coming

In 1977, a scientist named James Black stood before Exxon’s management committee with an uncomfortable message. According to internal documents later uncovered by investigative journalists, Black told executives that burning fossil fuels was increasing atmospheric carbon dioxide, and that continually rising CO2 levels would increase global temperatures by two to three degrees Celsius—a projection still consistent with today’s scientific consensus. He warned that we had a window of just five to ten years before “hard decisions regarding changes in energy strategies might become critical.”

What happened next is remarkable for its precision. Throughout the late 1970s and 1980s, Exxon assembled what one scientist called “a credible scientific team” to investigate the climate question. They launched ambitious projects, including outfitting a supertanker with custom instruments to measure how oceans absorbed CO2—one of the most pressing scientific questions of the era. A 2023 Harvard study analyzing Exxon’s internal climate projections from 1977 to 2003 found they predicted global warming with what researchers called “shocking skill and accuracy.” Specifically, the company projected 0.20 degrees Celsius of warming per decade, with a margin of error of just 0.04 degrees—a forecast that has proven largely correct.

Exxon wasn’t alone. Shell produced a confidential 1988 report titled “The Greenhouse Effect” that warned of climate changes “larger than any that have occurred over the last 12,000 years,” including destructive floods and mass migrations. The report revealed Shell had been running an internal climate science program since 1981. In one striking document from 1986, Shell predicted that fossil fuel emissions would cause changes “the greatest in recorded history.”

Even industry groups understood what was coming. In 1980, the American Petroleum Institute (API) invited Stanford scientist John Laurmann to brief oil company representatives at its secret “CO2 and Climate Task Force.” His presentation, now public, warned that the warming from continued fossil fuel use would be “barely noticeable” by 2005 but would bring “globally catastrophic effects” by the 2060s. That same year, the API called on governments to triple coal production worldwide, publicly insisting there would be no negative consequences.

The Coal Industry Knew Even Earlier

If anything, the coal industry understood the problem first. A 1966 article in the trade publication Mining Congress Journal by James Garvey, president of Bituminous Coal Research Inc., explicitly discussed how continued coal consumption would increase atmospheric temperatures and cause “vast changes in the climates of the earth.” A combustion engineer from Peabody Coal—today Peabody Energy, the largest private-sector coal company in the world—acknowledged in the same publication that the industry was “buying time” before air pollution regulations would force action.

This 1966 evidence is particularly damning because it predates widespread public awareness by decades. The coal industry didn’t stumble into climate denial—they entered it with full knowledge of what they were obscuring.

Major coal interests also had early awareness that carbon emissions posed regulatory and market risks, particularly for coal‑fired electricity, and they participated in joint industry research and strategy discussions about climate change in the 1980s and 1990s. At the same time, coal associations helped create public campaigns such as the Information Council for the Environment (ICE—even then a disturbing acronym), whose internal planning documents explicitly set an objective to “reposition global warming as theory (not fact)” and to target specific demographic groups with tailored doubt‑based messages.

According to a report from the Union of Concerned Scientists, these efforts often relied on “grassroots” fronts, advertising, and even forged constituent letters to legislators to undermine support for climate policy and to counter the conclusions of mainstream climate science, which even the companies’ own experts did not refute.

What They Said Publicly

The contrast between private knowledge and public statements is stark. While Exxon scientists were building sophisticated climate models internally, the company’s public messaging emphasized uncertainty. In a 1997 speech at the World Petroleum Conference, Exxon CEO Lee Raymond told his audience: “Let’s agree there’s a lot we really don’t know about how climate will change in the 21st century and beyond.” The company spread messaging that emphasized uncertainty, framed global warming as just a “theory,” and highlighted supposed flaws in climate models, even as its own scientists were using those models to make precise projections. Exxon and allied trade associations also supported think tanks and advocacy groups, such as Citizens For Sound Science, that questioned whether human activity was responsible for warming and opposed binding limits on emissions, deepening the gap between internal scientific knowledge and external communication.

In 1989, Exxon helped create the Global Climate Coalition—despite its environmental-sounding name, the organization worked to cast doubt on climate science and block clean energy legislation throughout the 1990s. Electric utilities and coal-linked organizations joined this coalition to systematically attack climate scientists and lobby to weaken or stall international agreements like the Kyoto Protocol, despite internal recognition that greenhouse gases were driving warming.

Internal API documents from a 1998 meeting reveal an explicit strategy to “ensure that a majority of the American public… recognizes that significant uncertainties exist in climate science.”

In 1991, Shell produced a film, “Climate of Concern,” which stated that human-driven—as opposed to natural—climate change was happening “at a rate faster than at any time since the end of the ice age” and warned of extreme weather, flooding, famine, and climate refugees. They understood the science but later tried to shift the blame.

According to a 2013 Drexel University study, between 2003 and 2010 alone, approximately $558 million was distributed to about 100 climate change denial organizations. Greenpeace reports that Exxon alone spent more than $30 million on think tanks promoting climate denial.

The Tobacco Playbook

The parallels to Big Tobacco’s strategy are not coincidental—they’re intentional. Research by the Center for International Environmental Law uncovered more than 100 documents from the Tobacco Industry Archives showing that oil and tobacco companies not only used the same PR firms and research institutes, but often the same individual researchers. The connection goes back to at least the 1950s.  A report published in Scientific American suggests the oil and tobacco industries both hired the PR firm Hill & Knowlton Inc. as early as 1956.

A 1969 internal memo from R.J. Reynolds Tobacco Company stated plainly: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public.” This became the template. Create uncertainty. Emphasize what isn’t known rather than what is. Fund research that casts doubt. Attack the credibility of independent scientists. Form organizations with scientific-sounding names that exist primarily to muddy the waters.

In one particularly brazen example, a 2015 presentation by Cloud Peak Energy executive Richard Reavey, titled “Survival Is Victory: Lessons From the Tobacco Wars,” explicitly coached coal executives on how to apply tobacco industry tactics.

What makes the fossil fuel case particularly egregious is the temporal dimension. These weren’t companies caught off-guard by emerging science. They funded the research. They understood the findings. Their own scientists urged action. A 1978 Exxon memo noted this could be “the kind of opportunity we are looking for to have Exxon technology, management and leadership resources put into the context of a project aimed at benefitting mankind.”

Instead, when oil prices collapsed in the mid-1980s, Exxon pivoted from conducting climate research to funding climate denial. By the late 1980s, according to reporting by InsideClimate News, Exxon “curtailed its carbon dioxide research” and “worked instead at the forefront of climate denial.”

Where We Stand Now

Across the oil, gas, and coal industries, there is not a genuine scientific dispute inside companies but a divergence between what in‑house experts knew and what corporate leaders chose to communicate to the public and policymakers. This divergence mirrors the tobacco industry’s long‑running use of organized doubt. In both arenas, industry actors treated early recognition of harm as a legal and political threat and responded by investing in campaigns to confuse, delay, and reframe the science rather than addressing the risks their own research had identified.

The evidence trail has led to legal action. More than 20 cities, counties, and states have filed lawsuits against fossil fuel companies for damages caused by climate change, arguing the industry knowingly deceived the public. The European Parliament held hearings in 2019 on climate denial by ExxonMobil and other actors. The hashtags #ExxonKnew, #ShellKnew, and #TotalKnew have become rallying cries for accountability.

Senator Sheldon Whitehouse has explicitly compared the fossil fuel industry’s actions to the tobacco racketeering case that ultimately held cigarette makers accountable. As he noted in a Senate speech, the elements of a civil racketeering case are straightforward: defendants conducted an enterprise with a pattern of racketeering activity.

The difference between the tobacco and fossil fuel cases may be one of scale. As researchers Naomi Oreskes and Erik Conway documented in their book Merchants of Doubt, both industries worked to obscure truth for profit. But while tobacco kills individuals, climate change threatens entire ecosystems and future generations.  The time to act is now.

Sources:

Scientific American – “Exxon Knew about Climate Change Almost 40 Years Ago”
https://www.scientificamerican.com/article/exxon-knew-about-climate-change-almost-40-years-ago/
 
Harvard Gazette – Harvard-led analysis finds ExxonMobil internal research accurately predicted climate change
https://news.harvard.edu/gazette/story/2023/01/harvard-led-analysis-finds-exxonmobil-internal-research-accurately-predicted-climate-change/
 
InsideClimate News – Exxon’s Own Research Confirmed Fossil Fuels’ Role in Global Warming Decades Ago
https://insideclimatenews.org/news/02052024/from-the-archive-exxon-research-global-warming/
 
PBS Frontline – Investigation Finds Exxon Ignored Its Own Early Climate Change Warnings
https://www.pbs.org/wgbh/frontline/article/investigation-finds-exxon-ignored-its-own-early-climate-change-warnings/
 
NPR – Exxon climate predictions were accurate decades ago. Still it sowed doubt
https://www.npr.org/2023/01/12/1148376084/exxon-climate-predictions-were-accurate-decades-ago-still-it-sowed-doubt
 
Science (journal) – Assessing ExxonMobil’s global warming projections
https://www.science.org/doi/10.1126/science.abk0063
 
Climate Investigations Center – Shell Climate Documents
https://climateinvestigations.org/shell-oil-climate-documents/
 
The Conversation – What Big Oil knew about climate change, in its own words
https://theconversation.com/what-big-oil-knew-about-climate-change-in-its-own-words-170642
 
ScienceAlert – The Coal Industry Was Well Aware of Climate Change Predictions Over 50 Years Ago
https://www.sciencealert.com/coal-industry-knew-about-climate-change-in-the-60s-damning-revelations-show
 
The Intercept – A Major Coal Company Went Bust. Its Bankruptcy Filing Shows That It Was Funding Climate Change Denialism
https://theintercept.com/2019/05/16/coal-industry-climate-change-denial-cloud-peak-energy/
 
Center for International Environmental Law – Big Oil Denial Playbook Revealed by New Documents
https://www.ciel.org/news/oil-tobacco-denial-playbook/
 
Wikipedia – Tobacco industry playbook
https://en.wikipedia.org/wiki/Tobacco_industry_playbook
 
Scientific American – Tobacco and Oil Industries Used Same Researchers to Sway Public
https://www.scientificamerican.com/article/tobacco-and-oil-industries-used-same-researchers-to-sway-public1/
 
Environmental Health (journal) – The science of spin: targeted strategies to manufacture doubt with detrimental effects on environmental and public health
https://link.springer.com/article/10.1186/s12940-021-00723-0
 
Senator Sheldon Whitehouse – Time to Wake Up: Climate Denial Recalls Tobacco Racketeering
https://www.whitehouse.senate.gov/news/speeches/time-to-wake-up-climate-denial-recalls-tobacco-racketeering/
 
VICE News – Meet the ‘Merchants of Doubt’ Who Sow Confusion about Tobacco Smoke and Climate Change
https://www.vice.com/en/article/meet-the-merchants-of-doubt-who-sow-confusion-about-tobacco-smoke-and-climate-change/
 
Union of Concerned Scientists – The Climate Deception Dossiers
https://www.ucs.org/sites/default/files/attach/2015/07/The-Climate-Deception-Dossiers.pdf
 
 
Illustration generated by author using ChatGPT.

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request for a leave of absence from the presidency of Congress in 1777, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship deteriorated to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. Washington’s command was anything but universally admired, and as the war dragged on with mixed results, many critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and if his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm. Rather than lobbying Congress or forming a counter-faction, he leaned heavily on reputation and restraint, continuing to communicate respectfully with Congress and emphasizing the army’s needs rather than defending his own position. He handled the situation largely behind the scenes: when he learned that Conway had written a letter praising Gates and disparaging his leadership, Washington calmly informed Conway that he was aware of the letter—quoting it verbatim.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court—a method that infuriated Adams, who equated such nuance with evasiveness and preferred direct argument, formal memorandums, and hard-edged ultimatums. Sharing lodgings outside Paris only intensified Adams’s resentment as he watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business. Adams complained that nothing would ever get done unless he did it himself, while Franklin privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.”

Their French ally, Foreign Minister Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin and effectively sidelining Adams in formal diplomacy, deepening Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf. Yet despite their mutual loss of respect, the two ultimately cooperated—often uneasily—in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris, a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom, calling Burr dangerous and untrustworthy at a dinner party. When Burr read accounts of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to withhold his fire; whether he deliberately shot into the air or simply missed remains debated. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South. Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations with France while the undeclared “Quasi-War” raged, Hamilton wanted war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary, since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

SOURCES

1. National Archives – Founders Online

https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters

https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams

https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson

https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal

https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel

https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson

https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia

https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers

https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – “Founding Brothers: The Revolutionary Generation”

https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

Fecal Microbiota Transplantation: When Waste Becomes Therapy

Today I’m going to talk about something that may sound unbelievable and maybe even a little gross—fecal transplant. Yes, it’s exactly what it sounds like: a transplant of someone else’s poop.

The human gut contains trillions of microorganisms—bacteria, viruses, fungi—living in a complex ecosystem that influences everything from digestion to immune function. This is called the microbiome.  When this ecosystem gets disrupted, the consequences can range from uncomfortable to life-threatening. Enter one of medicine’s most counterintuitive treatments: fecal microbiota transplantation, or FMT, where stool from a healthy donor is transferred to a patient to restore a healthy community of gut microbes.

What Is FMT?

The basic idea is simple: if someone’s microbiome has been badly disrupted (most commonly by repeated antibiotic exposure), replacing it with a balanced microbial ecosystem can help the gut recover.  At its core, FMT is taking fecal matter from a healthy donor and introducing it into a patient’s gastrointestinal tract. But it’s not the solid waste itself that matters; it’s the billions of beneficial bacteria and other microorganisms living in that material. Think of it as a probiotic treatment on steroids, delivering an entire functioning ecosystem rather than just a few select bacterial strains.

The gut microbiome plays crucial roles in digestion, vitamin production, immune system regulation, and even protection against harmful pathogens. When antibiotics, illness, or other factors devastate this ecosystem, dangerous bacteria like Clostridioides difficile (C. diff) can take over, causing severe diarrhea, inflammation, and potentially fatal infections.

The Clinical Track Record

While it may sound like “weird science,” FMT has been around for centuries. It was used in ancient Chinese medicine, in a formulation called “yellow soup,” to treat food poisoning and intractable diarrhea, and it appeared in Europe as early as the 16th century as a treatment for sick farm animals, particularly sheep and cattle.

FMT’s most dramatic success story involves C. diff infections, particularly the recurrent cases that don’t respond to antibiotics. Multiple randomized controlled trials have shown FMT to be remarkably effective—with cure rates often exceeding 80-90% for recurrent C. diff infections, compared to roughly 25-30% for continued antibiotic therapy. A landmark 2013 study reported in the New England Journal of Medicine was stopped early because FMT was so dramatically superior to standard treatment that continuing to withhold it from the control group seemed unethical.

Beyond C. diff, researchers are investigating FMT for inflammatory bowel diseases like ulcerative colitis and Crohn’s disease, with mixed but occasionally promising results. Some studies have shown potential for ulcerative colitis, with remission rates around 24-27%. The research into Crohn’s disease, irritable bowel syndrome, metabolic disorders, and even neurological conditions is ongoing but less conclusive. The FDA currently considers FMT an investigational treatment for most conditions except recurrent C. diff, where it’s become a recognized therapeutic option.

How It Works

The actual process of FMT can use several routes. The most common approaches involve colonoscopy, where the donated material is delivered directly to the colon, or through nasogastric or nasoduodenal tubes that thread through the nose down to the small intestine. More recently, oral capsules containing frozen, encapsulated donor stool have become available, offering a less invasive alternative that patients often prefer.

Before the transplant, the donated stool is carefully processed. It’s typically mixed with a saline solution and filtered to remove large particles while preserving the microbial communities. The resulting liquid suspension is what gets delivered to the patient. For frozen preparations, this material is mixed with a cryoprotectant, frozen at extremely cold temperatures, and can be stored for months before use.

The preparation isn’t just about the donor material—patients often undergo their own preparation. Many protocols include antibiotics to reduce the overgrowth of harmful bacteria before the transplant, followed by bowel cleansing similar to what you’d do before a colonoscopy. The idea is to create a relatively clean slate where the new microbial ecosystem can establish itself.

Sources of Donor Material

This brings us to one of the most critical aspects: donor selection and screening. Not just anyone can donate stool for medical use. The screening process is extensive and rigorous, rivaling or exceeding the scrutiny applied to blood donation.

Donors undergo detailed health questionnaires covering everything from recent travel and antibiotic use to gastrointestinal symptoms and risk factors for infectious diseases. They provide blood and stool samples that are tested for a long list of potential pathogens: C. diff, Helicobacter pylori, parasites, hepatitis A, B, and C, HIV, syphilis, and various other bacteria and viruses. The FDA issued guidance requiring additional testing for multi-drug resistant organisms after several patients contracted serious infections from FMT.

Donors generally fall into two categories: directed donors and universal donors. Directed donors are typically family members or friends who undergo screening and provide stool specifically for one patient. Universal donors go through the same rigorous screening but provide samples that can be used for multiple patients. These universal donors often work with stool banks—specialized facilities that collect, process, screen, and distribute donor material to healthcare providers.

The largest stool bank in the United States, OpenBiome, was founded in 2012 and has processed material from thousands of donors for tens of thousands of treatments. They report that only about 2-3% of volunteer donors successfully make it through the screening process, highlighting just how selective the criteria are. These banks have made FMT more widely available, eliminating the need for individual healthcare facilities to find and screen their own donors.

The Balance of Promise and Caution

While FMT represents a genuine breakthrough for recurrent C. diff infections, the medical community remains appropriately cautious about expanding its use. The FDA regulates FMT and has expressed concerns about potential risks, particularly after cases where patients developed serious infections from inadequately screened donors. There are questions about the long-term effects of introducing another person’s microbiome, and there are theoretical concerns about transmitting conditions or predispositions we don’t fully understand.

The research into FMT for conditions beyond C. diff continues, but many trials have shown modest or inconsistent results. The microbiome’s role in health and disease is incredibly complex, and what works dramatically for one condition may not translate to others. Still, the fundamental insight—that our gut microbiome profoundly influences our health and that we can therapeutically manipulate it—has opened potential new avenues in medicine.

Sources

1. van Nood, E., et al. (2013). “Duodenal Infusion of Donor Feces for Recurrent Clostridium difficile.” New England Journal of Medicine, 368(5), 407-415. https://www.nejm.org/doi/full/10.1056/NEJMoa1205037

2. U.S. Food and Drug Administration. “Fecal Microbiota for Transplantation: Safety Information.” https://www.fda.gov/vaccines-blood-biologics/safety-availability-biologics/fecal-microbiota-transplantation-safety-information

3. Cammarota, G., et al. (2017). “European consensus conference on faecal microbiota transplantation in clinical practice.” Gut, 66(4), 569-580. https://gut.bmj.com/content/66/4/569

4. Moayyedi, P., et al. (2015). “Fecal Microbiota Transplantation Induces Remission in Patients With Active Ulcerative Colitis in a Randomized Controlled Trial.” Gastroenterology, 149(1), 102-109. https://www.gastrojournal.org/article/S0016-5085(15)00381-5/fulltext

5. Kelly, C.R., et al. (2016). “Update on Fecal Microbiota Transplantation 2015: Indications, Methodologies, Mechanisms, and Outlook.” Gastroenterology, 150(1), 276-290. https://www.gastrojournal.org/article/S0016-5085(15)01626-7/fulltext

6. OpenBiome. “Our Process: Screening.” https://www.openbiome.org/safety

7. Quraishi, M.N., et al. (2017). “Systematic review with meta-analysis: the efficacy of faecal microbiota transplantation for the treatment of recurrent and refractory Clostridium difficile infection.” Alimentary Pharmacology & Therapeutics, 46(5), 479-493. https://onlinelibrary.wiley.com/doi/full/10.1111/apt.14201

Illustration generated by author using Midjourney

Truth at a Crossroads: How Trust, Identity, and Information Shape What We Believe

When Oxford Dictionaries declared “post-truth” its word of the year in 2016, it crystallized something many people had been feeling: that we’d entered a strange new era where objective facts seemed less influential in shaping public opinion than appeals to emotion and personal belief. The term exploded in usage that year, becoming shorthand for a troubling shift in how we process information. But have we really entered uncharted territory, or is this just the latest chapter in a very old story?

The short answer is: it’s complicated. The phenomenon itself isn’t new, but the scale and speed at which misinformation spreads certainly is. We are in a new world where the boundary between truth and untruth is blurred, institutions that once arbitrated facts are losing authority, and politics are running on “truthiness” and spectacle more than evidence.

The Psychology of Believing What We Want to Believe

To understand why people increasingly seem to choose sources over facts, we need to dive into how our minds actually work. People now seem to routinely sort themselves into information camps, each with its own “truth,” trusted voices, and shared worldview. But why is this happening, and why does it seem to be getting worse?

Psychologists have spent decades studying something called confirmation bias—essentially, the tendency to seek out information that supports our existing beliefs while avoiding or dismissing information that contradicts them. This isn’t just about being stubborn. Research shows we actively sample more information from sources that align with what we already believe, and the higher our confidence in our initial beliefs, the more biased our information gathering becomes.

But there’s something even more powerful at play called motivated reasoning. While confirmation bias is about seeking information that confirms our beliefs, motivated reasoning is about protecting ideological beliefs by selectively crediting or discrediting facts to fit our identity-defining group’s position. In other words, we don’t just want to be right—we want to belong.

This matters because humans are fundamentally tribal creatures. When we form attachments to groups like political parties or ideological movements, we develop strong motivations to advance the group’s relative status and experience emotions like pride, shame, and anger on behalf of the group. Information processing becomes less about truth-seeking and more about identity protection.

Why Source Trumps Fact

So why do people trust a source they identify with over objective facts that contradict their worldview? Research points to several interconnected reasons.

First, there’s the practical matter of cognitive shortcuts. We’re bombarded with information daily, and people judge the reliability of evidence by using mental shortcuts called heuristics, such as how readily a particular idea comes to mind. If someone we trust says something, that’s an easier mental pathway than laboriously fact-checking every claim. This reliance becomes problematic when “trusted” means ideologically comfortable rather than factually reliable.

Analysts of the post-truth phenomenon also highlight declining trust in traditional “truth tellers” such as mainstream media, scientific institutions, and government agencies. As these institutions lose authority, counter-elites or influencers can present alternative narratives that followers treat as at least as plausible as established facts.

Second, and more importantly, is the issue of identity. When individuals engage in identity-protective thinking, their processing of information more likely guides them to positions that are congruent with their membership in ideologically or culturally defined groups than to ones that reflect the best available scientific evidence. Being wrong about a fact might sting for a moment, but being cast out of your social group could have real consequences for your emotional support, social standing, and sense of self.

Third, there’s a feedback loop at work. In social media, confirmation bias is amplified by filter bubbles and algorithmic editing, which display to individuals only information they’re likely to agree with while excluding opposing views. The more we’re exposed only to sources that confirm our beliefs, the more alien and untrustworthy contradictory information appears.

Interestingly, being smarter doesn’t necessarily protect you from these biases. Some research suggests that people who are adept at using effortful, analytical modes of information processing may actually be even better at fitting their beliefs to their group identities, using their intelligence to construct more sophisticated justifications for what they already want to believe.

The Historical Echo Chamber

Despite the way it feels, this isn’t the first time truth has had competition. History is full of eras when myth, rumor, propaganda, and identity overshadowed facts.

During the Reformation of the 1500s, misinformation spread on both sides of the Catholic-Protestant divide. Pamphlets—many of them highly distorted or outright fabricated—spread rapidly thanks to the printing press. Propaganda became a political weapon. Ordinary people suddenly had access to arguments they weren’t equipped to verify. People were ostracized and some even executed based on little more than rumors or lies. We might have hoped for better from religious leaders.

The French Revolution (1780s–1790s) was awash in claims and counterclaims, many of them—if not most—with little basis in fact. Competing newspapers told wildly different stories about the same events. Rumors fueled paranoia, purges, and violence. Truth became secondary to whichever faction controlled the narrative.

Following the Civil War and Reconstruction, the “Lost Cause” narrative became a powerful example of source-driven myth-making. Despite the historical record, generations accepted a version of events shaped by postwar Southern elites, not by facts. Echoes of it still reverberate today, having fueled much of the opposition to the civil rights movement.

Fast forward to the 1890s, and we see something remarkably familiar. Yellow journalism, characterized by sensationalism and manipulated facts, emerged from the circulation war between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal. These papers used exaggerated and misleading headlines, unverified claims, faked interviews, and pseudoscience to boost sales.

As early as 1898, a publication for the newspaper industry wrote that “the public is becoming heartily sick of fake news and fake extras”—sound familiar?

The 20th-century propaganda states, typified by both fascist and communist regimes, perfected source-based truth. The leader or the party defined reality, and disagreement was literally dangerous. In these systems, truth wasn’t debated—it was assigned.

What Makes Now Different?

While the psychological mechanisms and even the tactics aren’t new, several factors make our current moment distinct. The speed and scale of information spread is unprecedented. A false claim can circle the globe in hours. Studies show that people are bombarded with fake information online, blurring the distinction between fact and fiction as blogs, social media, and citizen journalism are granted credibility similar to or greater than that of traditional information sources.

We’re also experiencing a fragmentation of trusted authorities. Where a handful of major newspapers and broadcast networks once served as gatekeepers, that centralized role has now splintered, fundamentally altering information seeking, shared authorities, and trust in institutions.

So Are We in a Post-Truth Era?

Yes and no. The term “post-truth” captures something real about our current moment—the scale, speed, and sophistication of misinformation is unprecedented. But calling it “post-truth” suggests we’ve crossed some bright line into entirely new territory. I’d argue we’re not quite there—but we are navigating a world where truth is sometimes lost in the collision of ancient human tendencies and modern technology.

The data clearly show that confirmation bias, motivated reasoning, and identity-protective cognition are real and powerful forces. Historical evidence demonstrates that propaganda, misinformation, and the choice of tribal loyalty over objective fact have been with us for millennia. What’s changed is our information ecosystem: technology now allows false information to spread faster than ever, and the fragmentation of shared sources of authority has eroded the common ground they once helped create.

Perhaps a better framing would be that we’re in an era of “turbo-charged tribal epistemology”—where our very human tendency to trust our tribe’s narrative over contradicting evidence has been supercharged by algorithms that feed us what we want to hear and isolate us from alternative perspectives.  (I wish I could take credit for the term turbo-charged tribal epistemology. I really like it, but I read it somewhere, I just can’t remember where.) 

The question isn’t really whether we’re in a post-truth society. The question is whether we can develop the individual and collective skills to navigate an information environment that exploits every cognitive bias we have. The environment has changed, but the task remains the same: finding ways to establish shared facts despite our deep-seated tendency to believe what we want to believe.


13 Stars, Betsy Ross and the Story of the American Flag

On a steamy June day in 1777, the Continental Congress took a brief break from the monumental task of running a revolution to deal with something that seems surprisingly simple in retrospect: what should the American flag look like? The resolution they passed on June 14th was refreshingly concise, stating that “the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.”

That poetic phrase about a “new constellation” turned out to be both inspiring and maddeningly vague. Congress didn’t specify how the stars should be arranged, how many points they should have, or even whether the flag should start with a red or white stripe at the top. This ambiguity led to one of the interesting aspects of early American flag history—for decades, no two flags looked exactly alike.

The 1777 resolution came out of Congress’s Marine Committee business, and at least some historians caution that it may have been understood initially as a naval ensign, not a fully standardized “national flag for all uses.”

A Constellation of Designs

The lack of official guidance meant that flag makers exercised considerable artistic freedom. Smithsonian researcher Grace Rogers Cooper found at least 17 different examples of 13-star flags dating from 1779 to around 1796, and flag expert Jeff Bridgman has documented 32 different star arrangements from the era. Some makers arranged the stars in neat rows, others formed them into a single large star, and still others created elaborate patterns that spelled out “U.S.” or formed other symbolic shapes. An official star pattern would not be specified until 1912, and versions of the 13-star flag remained in ceremonial use until the mid-1800s.

The most famous arrangement, of course, is the Betsy Ross design with its circle of 13 stars. What many people don’t realize is that experts date the earliest known example of this circular pattern to 1792—in a painting by John Trumbull, not on an actual flag from 1776.

Did the Continental Army Actually Use This Flag?

Here’s where things get interesting and a bit murky. The short answer is: not much, and not right away. The Continental Army had been fighting for over two years before Congress even adopted the Stars and Stripes, and by that point, individual regiments had already developed their own distinctive colors and banners. These regimental flags served practical military purposes—they helped units identify each other in the chaos of battle and gave soldiers something to rally around.  Additionally, the Continental Army frequently used the Grand Union Flag (13 stripes with a British Union in the canton), which predates the 13-star design.

What’s more revealing is a series of letters from 1779—two full years after the Flag Resolution—between George Washington and Richard Peters, Secretary of the Board of War. In these letters, Peters is essentially asking Washington what flag he wants the army to use. This correspondence raises an obvious question: if Congress had settled the flag issue in 1777, why was Washington still trying to figure it out in 1779? The evidence suggests that variations of the 13-star flag were primarily used by the Navy in those early years, while the Army continued to use various regimental standards.

Navy Captain John Manley expressed this confusion perfectly when he wrote in 1779 that the United States “had no national colors” and that each ship simply flew whatever flag the captain preferred. Even as late as 1779, the War Board hadn’t settled on a standard design for the Army. When they finally wrote to Washington for his input, they proposed a flag that included a serpent and numbers representing different states—a design that never caught on.

National “stars and stripes” banners did exist during the late war years and appear in some period art and descriptions, but clear, securely dated 13-star Army battle flags are rare and often disputed. Thirteen-star flags are better documented in early federal service, such as maritime and lighthouse use in the 1790s, than in Continental Army field service before 1783.

The Betsy Ross Question

Now we come to one of America’s most enduring flag legends. The story is familiar to most Americans: in 1776, George Washington, Robert Morris, and George Ross visited Philadelphia upholsterer Betsy Ross and asked her to sew the first American flag. She suggested changing the six-pointed stars to five-pointed ones, demonstrated her one-snip technique for making a perfect five-pointed star, and then produced the first Stars and Stripes.

It’s a great story. There’s just one problem: historians have found virtually no documentary evidence to support it. The tale didn’t surface publicly until 1870—nearly a century after the supposed event—when Betsy Ross’s grandson, William Canby, presented it in a speech to the Historical Society of Pennsylvania. Canby relied entirely on family oral history, including affidavits from Ross’s daughter, granddaughter, and other relatives who claimed they had heard Betsy tell the story herself. But Canby himself admitted that his search through official records revealed nothing to corroborate the account.

Historians don’t dispute that Betsy Ross was a real person who did real work. Documentary evidence shows that on May 29, 1777, the Pennsylvania State Navy Board paid her a substantial sum for “making ships colours.” She ran a successful upholstery business and continued making flags for the government for more than 50 years. But as historian Marla Miller puts it, “The flag, like the Revolution it represents, was the work of many hands.” Modern scholars generally view the question not as whether Ross designed the flag—she almost certainly didn’t—but whether she may have been among the many people who produced early flags.

Who Really Designed It?

If not Betsy Ross, then who? The strongest candidate is Francis Hopkinson, the New Jersey delegate to the Continental Congress who also helped design the Great Seal of the United States and early American currency. In 1780, Hopkinson sent Congress a bill requesting payment for his design work, specifically mentioning “the flag of the United States of America.”    He likely designed a flag with the stars arranged in rows rather than circles, and his bills for payment submitted to Congress mentioned six-pointed stars rather than the five-pointed ones that became standard.

Unfortunately for Hopkinson, Congress refused to pay him, arguing that he wasn’t the only person on the Navy Committee and therefore shouldn’t receive singular credit or compensation.

The irony is rich: Hopkinson was asking for a quarter cask of wine or £2,700 for designing what would become one of the world’s most recognizable symbols.  Congress essentially told him, “Thanks, but we’re not paying.” There’s a lesson about government contracts in there somewhere.

What Survived

Of the hundreds of flags made and carried during the Revolutionary War, only about 30 are known to survive today. These rare artifacts offer fascinating glimpses into how Americans visualized their new nation. The Museum of the American Revolution brought together 17 of these original flags in a 2025 exhibition—the largest gathering of such flags since 1783.

The most significant surviving 13-star flag is probably Washington’s Headquarters Standard, a small blue silk flag measuring about two feet by three feet. It features 13 white, six-pointed stars on a blue field and descended through George Washington’s family with the tradition that it marked the General’s presence on the battlefield throughout the war. Experts consider it the earliest surviving 13-star American flag. Due to light damage, it can only be displayed on special occasions.

Other surviving flags tell different stories. The Brandywine Flag, used at the September 1777 battle of the same name, is one of the earliest stars and stripes: a red field with a red-and-white American flag image in the canton.

The Dansey Flag, captured from the Delaware militia by a British soldier, was taken to England as a war trophy and remained in his family until 1927. The flag features a green field with 13 alternating red and white stripes in the upper left corner, signifying the 13 colonies.

These and other flags weren’t just military equipment—they were powerful symbols that people fought under and, sometimes, died defending.

The Bigger Picture

What makes the story of the 13-star flag so compelling isn’t really about who sewed it or exactly when it first flew. It’s about what the flag represented in an era when the very concept of the United States was still being invented. The June 1777 resolution called for stars forming “a new constellation”—a beautiful metaphor for a new nation finding its place among the powers of the world.

The fact that no two early flags looked exactly alike might seem like a problem from our standardized modern perspective, but it tells us something important about the Revolution itself. Just as the colonies were learning to act as united states while maintaining their individual identities, flag makers across the new nation were interpreting a simple congressional resolution in their own ways, creating variations on a shared theme.

As historian Laurel Thatcher Ulrich points out, there was no “first flag” worth arguing over. The American flag evolved organically, shaped by the practical needs of the Navy, the Army, militias, and civilian flag makers who each contributed to its development. Whether Betsy Ross made one of those early flags or not, her story endures because it captures something Americans want to believe about our origins: that ordinary citizens, working in small shops and homes, helped create the symbols of the new republic.

Sources:

History.com: https://www.history.com/this-day-in-history/june-14/congress-adopts-the-stars-and-stripes

Flags of the World: https://www.crwflags.com/fotw/flags/us-1777.html

Wikipedia Flag of the United States: https://en.wikipedia.org/wiki/Flag_of_the_United_States

Museum of the American Revolution: https://www.amrevmuseum.org/

American Battlefield Trust: https://www.battlefields.org/learn/articles/short-history-united-states-flag

US History (Betsy Ross): https://www.ushistory.org/betsy/

Library of Congress “Today in History”: https://www.loc.gov/item/today-in-history/june-14/

Flag images from Wikimedia Commons

The Price Tag Mystery: Why Nobody Really Knows What Healthcare Costs in America

Imagine walking into a store where nothing has a price tag. When you get to the register, the cashier scans your items and tells you the total—but that total is different for every customer. Your neighbor might pay $50 for the same items that cost you $200. The store won’t tell you why, and you won’t find out until after you’ve already “bought” everything.

Welcome to American healthcare, where the simple question “how much does this cost?” has no simple answer.

You might think I’m exaggerating, but the evidence suggests otherwise. Research published in late 2023 by PatientRightsAdvocate.org found that prices for the same medical procedure can vary by more than 10 times within a single hospital depending on which insurance plan you have, and by as much as 33 times across different hospitals. A knee replacement that costs around $23,170 in Baltimore might run $58,193 in New York. An emergency department visit that one facility charges $486 for might cost $3,549 at another hospital for the identical service.

The fundamental problem is that hospitals and doctors don’t have one price for their services. They have dozens, sometimes hundreds, of different prices for the exact same procedure depending on who’s paying. This bizarre system evolved because most healthcare in America isn’t a simple transaction between patient and provider—there’s a third party in the middle called an insurance company, and that changes everything.

The Fiction of Chargemaster Prices

A hospital chargemaster is essentially the hospital’s internal price list—a massive catalog that assigns a dollar amount to every service, supply, test, medication, and procedure the hospital can bill for, from an aspirin to a complex surgery. These listed prices are usually very high and are not what most patients actually pay; instead, the chargemaster functions as a starting point for negotiations with insurers and government programs like Medicare and Medicaid, which typically pay much lower, pre-set rates. What an individual patient ultimately pays depends on several factors layered on top of the chargemaster price. Think of chargemaster prices like the manufacturer’s suggested retail price on a car: technically real, but not what anyone actually pays.

A hospital might list an MRI at $3,000 or a blood test at $500. But then insurance companies come in. They represent thousands or millions of potential patients, which gives them serious bargaining power. They negotiate with hospitals along these lines: “We’ll send you lots of patients, but only if you give us a discount.” So, the hospital agrees to accept much less—maybe they’ll take $1,200 for that $3,000 MRI or $150 for the blood test. This discounted amount is called the “negotiated rate,” and it’s what the insurance company will really pay.

Here’s where it gets messy: every insurance company negotiates its own rates with every hospital. Blue Cross might negotiate one price, Aetna a different price, UnitedHealthcare yet another. The same exact MRI at the same hospital might be $1,200 for one insurer’s customers and $1,800 for another’s. And these negotiated rates have traditionally been kept secret—treated like confidential business information that gives each party a competitive advantage.

The Write-Off Game

What happens to that difference between the chargemaster price and the negotiated rate? The hospital “writes it off.” That’s accounting language for “we accept that we’re not getting paid this money, and we’re taking it off the books.” If the hospital charged $3,000 but agreed to accept $1,200, they write off $1,800. This isn’t lost money in the normal sense—they never expected to collect it in the first place. The chargemaster prices are inflated specifically because everyone knows discounts are coming. Some hospitals now post “discounted cash prices” that are often far below chargemaster and sometimes even below some negotiated rates. These are sometimes, though not always, offered to uninsured patients, generally referred to as self-pay. There can be a catch—some hospitals require lump-sum payment of the total bill to qualify for the lower price.
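If it helps to see the arithmetic laid out, here is a minimal sketch in Python using the hypothetical MRI numbers from above; every figure is illustrative, not an actual hospital’s rate.

```python
# Illustrative numbers only, echoing the hypothetical MRI example above.
chargemaster_price = 3000.00  # the hospital's list price
negotiated_rate = 1200.00     # what one insurer has agreed to actually pay
cash_price = 950.00           # a hypothetical discounted self-pay price

# The contractual write-off is the gap the hospital never expected to collect.
contractual_write_off = chargemaster_price - negotiated_rate

print(f"Billed (chargemaster): ${chargemaster_price:,.2f}")
print(f"Insurer actually pays: ${negotiated_rate:,.2f}")
print(f"Written off:           ${contractual_write_off:,.2f}")  # $1,800.00
print(f"Discounted cash price: ${cash_price:,.2f}")
```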

According to the American Hospital Association, U.S. hospitals collectively expect to write off approximately $760 billion in billed charges in 2025 across all categories of write-offs. That’s not a typo—$760 billion. These write-offs happen in several different situations. The most common are contractual write-offs, where the provider has agreed to accept less than their list price from insurance companies.

Hospitals have far more write-offs than just contractual ones. They also write off money for charity care (treating patients who can’t afford to pay anything) and for bad debt (when patients could pay but don’t). They write off small balances that aren’t worth the administrative cost of collection, and they write off amounts related to various billing errors, denied claims, and coverage disputes. Healthcare providers typically adjust about 10 to 12 percent of their gross revenue due to these various write-offs and claim adjustments.

Why Such Wild Variation?

Even with all these negotiated discounts built into the system, the prices still vary enormously. A 2024 study from the Baker Institute found that for emergency department visits, the price charged by hospitals in the top 10% can be three to seven times higher than that charged by hospitals in the bottom 10% for the identical procedure. Research published in Health Affairs Scholar in early 2025 found that even after adjusting for differences between insurers and procedures, the top 25% of prices across all states are 48 percent higher than the bottom 25% of prices for inpatient services.

Several factors drive this variation. Hospitals in areas with less competition can charge more because insurers have fewer alternatives for negotiation. Prestigious hospitals can demand higher rates because insurers want them in their networks to attract customers. Some insurance companies have more bargaining power than others based on their market share. There’s no central authority setting prices—it’s all private negotiations, hospital by hospital, insurer by insurer, procedure by procedure.

For patients, this creates a nightmare scenario. Even if you have insurance, you usually have no idea what you’ll pay until after you’ve received care. Your out-of-pocket costs depend on your deductible (the amount you pay before insurance kicks in), your copay or coinsurance (your share after insurance starts paying), and whether the negotiated rate between your specific insurance and that specific hospital is high or low. Two people with different insurance plans getting the same procedure at the same hospital on the same day can end up with drastically different bills.
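Here is a minimal sketch of that dynamic; the plan terms are invented for illustration, and real plans layer copays, out-of-pocket maximums, and network rules on top of this.

```python
def patient_share(negotiated_rate, deductible_remaining, coinsurance_rate):
    """Toy model of a patient's out-of-pocket cost: deductible, then coinsurance."""
    # The patient pays the full negotiated rate until the deductible is met.
    deductible_part = min(negotiated_rate, deductible_remaining)
    # After that, the patient pays a percentage (coinsurance) of the remainder.
    coinsurance_part = (negotiated_rate - deductible_part) * coinsurance_rate
    return deductible_part + coinsurance_part

# Same procedure, same hospital, same day -- two hypothetical insurance plans:
plan_a = patient_share(negotiated_rate=1200, deductible_remaining=1000, coinsurance_rate=0.20)
plan_b = patient_share(negotiated_rate=1800, deductible_remaining=0, coinsurance_rate=0.10)
print(f"Patient A owes ${plan_a:,.2f}")  # $1,040.00
print(f"Patient B owes ${plan_b:,.2f}")  # $180.00
```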

Research using new transparency data confirms this isn’t just anecdotal. A study from early 2025 found that for something as routine as a common office visit, mean prices ranged from $82 with Aetna to $115 with UnitedHealth. Within individual insurance companies, prices for the top 25% of office visits were 20 to 50 percent higher than for the bottom 25%, meaning even within one insurer’s network, where you go or where you live makes a huge difference.

The Government Steps In

The federal government finally said “enough” and started requiring transparency. Since 2021, hospitals must post their prices online, including what they’ve negotiated with each insurance company. The Centers for Medicare and Medicaid Services (CMS) strengthened these requirements in 2024, mandating standardized formats and increasing enforcement. Health insurance plans face similar requirements to disclose their negotiated rates.

The theory was straightforward: if patients could see prices ahead of time, they could shop around, which would force prices down through competition. CMS estimated this could save as much as $80 billion by 2025. The idea seemed sound—transparency works in other markets, so why not healthcare?

In practice, it’s been messy. A Government Accountability Office (GAO) report from October 2024 found that while hospitals are posting data, stakeholders like health plans and employers have raised serious concerns about data quality. They’ve encountered inconsistent file formats, extremely complex pricing structures, and data that appears to be incomplete or possibly inaccurate. Even when hospitals post the required information, it’s often so convoluted that comparing prices across facilities becomes nearly impossible for average consumers.
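To give a flavor of the problem, here is a toy sketch that reads one hospital’s machine-readable price file. The JSON layout and field names here are invented for illustration; real files vary so widely in schema that no single parser handles them all, which is exactly the complaint.

```python
import json

# An invented, simplified schema. Real hospital machine-readable files
# differ wildly in structure, which is the data-quality problem at issue.
sample_file = """
{
  "hospital_name": "Example General",
  "items": [
    {"billing_code": "70553",
     "description": "MRI brain, with and without contrast",
     "gross_charge": 3000.00,
     "negotiated_rates": {"Insurer A": 1200.00, "Insurer B": 1800.00},
     "cash_price": 950.00}
  ]
}
"""

data = json.loads(sample_file)
for item in data["items"]:
    rates = list(item["negotiated_rates"].values())
    spread = max(rates) / min(rates)
    print(f"{item['description']}: list price ${item['gross_charge']:,.0f}, "
          f"negotiated rates vary {spread:.1f}x across insurers")
```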

An Office of Inspector General report from November 2024 found that not all selected hospitals were complying with the transparency requirements in the first place. And CMS still doesn’t have robust mechanisms to verify whether the data being posted is accurate and complete. The GAO recommended that CMS assess whether hospital pricing data are sufficiently complete and accurate to be usable, and whether additional enforcement is needed.

Imagine trying to comparison shop when one store lists prices in dollars, another in euros, and a third uses a proprietary currency they invented. That’s roughly where we are with healthcare price data—technically available, but practically unusable for most people trying to make informed decisions.

In 2025, the Trump administration issued a new executive order aimed at strengthening enforcement of price transparency rules and directing agencies to standardize hospital and insurer pricing information and make it more accessible; this action built on rather than reduced the earlier requirements. Hopefully this will improve patients’ ability to access real costs, but it is my opinion that the industry will continue to resist full and open compliance.

The Limits of Shopping for Healthcare

There’s also a deeper philosophical problem: for healthcare to work like a normal market where price transparency drives competition, patients would need to be able to shop around based on price. That could work for scheduled procedures like knee replacements, colonoscopies, or elective surgeries. You have time to research, compare, and choose.

But it doesn’t work at all when you’re having a heart attack, or your child breaks their arm. You go to the nearest hospital, period. You’re not calling around asking about prices while someone’s having a medical emergency. Even for non-emergencies, choosing based on price assumes equal quality across providers, which isn’t always true and is even harder to assess than price itself.

A study on price transparency tools found mixed results on whether they truly reduce spending. Some research shows modest savings when people use price comparison tools for shoppable services like imaging and lab work. But utilization of these tools remains low, and for many healthcare encounters, price shopping simply isn’t practical or appropriate.

Who Really Knows?

So, who truly understands what things cost in this system? Hospital administrators know what different insurers pay them for specific procedures, but that knowledge is limited to their facility. They don’t necessarily know what other hospitals charge. Insurance company executives know what they’ve negotiated with various hospitals in their network, but they haven’t historically shared meaningful price information with their customers in advance. And they don’t know what their competitors have negotiated.

Patients, caught in the middle, often find out their costs only when they receive a bill weeks after treatment. By that point, the care has been delivered, and the financial damage is done. Recent surveys suggest that surprise medical bills remain a significant problem, with many patients receiving unexpected charges from out-of-network providers they didn’t choose or even know were involved in their care.

The people who are starting to get a comprehensive view are researchers and policymakers analyzing the newly available transparency data. Studies published in 2024 and 2025 using these data have given us unprecedented visibility into pricing patterns and variation. But this is aggregate, statistical knowledge—it helps us understand the system but doesn’t necessarily help individual patients figure out what they’ll pay for a specific procedure.

Where We Stand

The transparency regulations represent a genuine attempt to inject some market discipline into healthcare pricing. Making negotiated rates public breaks down the information asymmetry that has allowed prices to vary so wildly. In theory, if patients and employers can see that Hospital A charges twice what Hospital B does for the same procedure, competitive pressure should push prices toward the lower end.

There’s some early evidence this might be working. A study of children’s hospitals found that price variation for common imaging procedures decreased by about 19 percent between 2023 and 2024, though overall prices continued rising. Whether this trend will continue and expand to other types of facilities remains to be seen. I am concerned that, rather than lowering overall prices, it may cause hospitals at the lower end to raise their prices closer to those at the higher end.

Significant obstacles remain. The data quality issues need resolution before the information becomes truly usable. Many patients lack either the time, expertise, or practical ability to shop based on price. And the fundamental structure of American healthcare—with its complex interplay of providers, insurers, pharmacy benefit managers, and government programs—means that even perfect price transparency won’t create a simple, straightforward market.

So, to return to the original question: does anyone truly know the cost of medical care in the United States? In an aggregate sense, researchers and policymakers are starting to understand the patterns thanks to transparency requirements. The data are revealing just how variable and opaque pricing has been. But as a practical matter for individual patients trying to figure out what they’ll pay for needed care, not really. The information is becoming available but remains largely inaccessible or incomprehensible for ordinary people trying to make informed healthcare decisions.

The $760 billion in annual write-offs tells you everything you need to know: the posted prices are largely fictional, the negotiated prices vary wildly, and the system has evolved to be so complex that even the people operating within it struggle to understand the full picture. We’re making progress toward transparency, but we’re a long way from a healthcare system where patients can confidently get the answer to the simple question: “How much will this cost?”

A closing thought: All of this could be solved by development of a single-payer healthcare system such as I proposed in my previous post America’s Healthcare Paradox: Why We Pay Double and Get Less.

Hepatitis B Vaccine: Three Shots and You’re Done for Life?

If you’re trying to figure out whether you need a hepatitis B vaccine or wondering if the one you got years ago is still protecting you, you’re not alone. The hepatitis B vaccine is one of those medical interventions that raises straightforward questions: How many shots do you need? And does it really last forever?  I thought I should follow up last week’s general discussion of hepatitis with some specifics on this vaccine.

The Shot Schedule

The traditional hepatitis B vaccine series requires three shots spaced over six months. You get the first dose, then return for a second shot one to two months later and finally complete the series with a third dose at the six-month mark.  There is also a combination hepatitis A and B vaccine that follows the same schedule. This schedule has been the standard for decades and works well for both children and adults.

But here’s something newer: In 2017, the FDA approved a two-dose hepatitis B vaccine called Heplisav-B for adults 18 and older. With this option, you only need two shots spaced one month apart. For parents of young children, there is Pediarix, a combination vaccine that bundles hepatitis B protection with vaccines for other diseases, streamlining the infant immunization schedule.

Does It Really Last a Lifetime?

This is where the science gets interesting. The short answer is yes, for most people the protection appears to be lifelong. But the mechanism behind this is more nuanced than you might expect.

After you complete the vaccine series, your body produces antibodies against hepatitis B. Over time—sometimes after just a few years—the level of these antibodies in your blood can decline to the point where they’re barely detectable or even undetectable. On the surface, that sounds concerning. But here’s the key: your immune system has memory.

Even when antibody levels drop, your body retains specialized immune cells that “remember” hepatitis B. If you encounter the virus years or decades later, these memory cells spring into action, rapidly producing new antibodies to fight off the infection before it can establish itself. Researchers have followed vaccinated individuals for more than 30 years and found that this immune memory remains protective even when blood tests show low antibody levels.

Who Might Need a Booster?

For most people with healthy immune systems, the CDC doesn’t recommend booster shots. Once you’ve completed the series and your body has responded appropriately, you’re considered protected. However, there are exceptions. People with compromised immune systems—such as those undergoing dialysis, living with HIV, or taking immunosuppressive medications—may need periodic booster doses. These individuals should work with their healthcare providers to monitor their antibody levels and determine if additional shots are necessary.

The Bottom Line

The hepatitis B vaccine is a three-shot series (or two shots with the newer formulation) that provides protection that researchers believe lasts a lifetime for most people. While your antibody levels might decline over the years, your immune system’s memory keeps you safe. It’s one of those rare cases where you can check something off your health to-do list and genuinely move on.


Critical Ignoring: The Skill You Didn’t Know You Needed

You’ve probably spent years learning how to pay attention—reading closely, analyzing deeply, and thinking critically. But here’s something nobody taught you in school: in today’s digital world, knowing what not to pay attention to might be just as important as knowing what deserves your focus.

That’s the essence of critical ignoring, a concept developed by researchers Anastasia Kozyreva, Sam Wineburg, Stephan Lewandowsky, and Ralph Hertwig. It’s basically the skill of deliberately and strategically choosing what information to ignore so you can invest your limited attention where it truly matters. I first became aware of this concept just a few weeks ago while reading an article by Christopher Mims in the Wall Street Journal.

Why This Matters Now

Think about your typical day online. You’re bombarded with news alerts, social media posts, clickbait headlines, and outrage-inducing content designed specifically to hijack your attention. Traditional advice tells you to carefully evaluate each source, read critically, and fact-check thoroughly. But here’s the problem: if you’re investing serious mental energy evaluating sources that should have been ignored in the first place, your attention has already been stolen.

The researchers make a crucial observation about how the digital world has changed the game. In the past, information was scarce and we had to seek it out. Now we’re drowning in it, and much of it is deliberately designed to be attention-grabbing through tactics like sparking curiosity, outrage, or anger. Our attention has become the scarce resource that advertisers and content providers are constantly trying to seize and exploit.

Critical ignoring is not sticking your head in the sand or refusing to hear anything that challenges you. Apathy is “I don’t care about any of this.”  Critical ignoring is “I care enough to be selective, so that I can focus on what truly matters.”  Denial is “I refuse to believe or even look at uncomfortable evidence.” Critical ignoring is “I’m not going to invest my time in sources that are clearly unreliable, or in discussions that are going nowhere, so I can better examine serious evidence elsewhere.”

The key distinction is that critical ignoring always serves better judgment, not comfort at any cost.

How To Actually Do It

The researchers outline three practical strategies you can use right away:

Self-Nudging: This is about redesigning your digital environment to remove temptations before they become problems. Think of it as changing your information ecosystem. Instead of relying on willpower alone, you might unsubscribe from inflammatory newsletters, turn off news notifications that stress you out, or use browser extensions to block certain websites during work hours. The idea is to design your environment so you can implement the resolutions you’ve made (for one literal-minded example, see the sketch after these three strategies).

Lateral Reading: This one’s particularly clever. Instead of reading a website from top to bottom the way most of us do, professional fact-checkers open another browser tab and quickly research who’s behind the source. That way, you spend sixty seconds searching for information about the source rather than twenty minutes carefully reading content from an outlet that turns out to be backed by a lobbying group or a known misinformation peddler. The researchers note this is often faster and more effective than trying to critically evaluate the content itself.

Don’t Feed the Trolls: This strategy advises you not to reward malicious actors with your attention.  When you encounter inflammatory comments, deliberately misleading posts, or content clearly designed to provoke anger, the best response is often no response at all. Engaging with trolls or bad-faith content just amplifies it and wastes your mental energy.
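Returning to self-nudging for a moment: for the technically inclined, the environmental redesign can be as literal as a script. Here is a hypothetical sketch that points chosen sites at localhost during work hours via the hosts file; the site names are placeholders, it needs administrator rights to run, and it is meant only to show how concrete a self-nudge can be.

```python
# Hypothetical self-nudge: during work hours, make distracting sites
# unreachable by pointing them at localhost in the hosts file.
# Placeholder site names; requires administrator rights. Illustrative only.
from datetime import datetime

HOSTS_PATH = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts
DISTRACTIONS = ["news.example.com", "outrage.example.com"]

def block_during_work_hours():
    if not (9 <= datetime.now().hour < 17):
        return  # outside work hours, leave the sites alone
    with open(HOSTS_PATH, "r+") as hosts:
        existing = hosts.read()
        for site in DISTRACTIONS:
            if site not in existing:  # avoid duplicate entries on repeat runs
                hosts.write(f"127.0.0.1 {site}\n")

if __name__ == "__main__":
    block_during_work_hours()
```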

I’ll Add Another

Ignore the Influencers: Refuse to click on miracle-cure headlines or anecdote-driven threads when you can go directly to professional medical sources, systematic reviews, or guidelines from trusted organizations. Ignore influencers’ health claims unless they clearly cite solid evidence and expertise.

The Bigger Picture

What makes critical ignoring different from just being selective is that it’s strategic and informed. To know what to ignore, you need to understand the landscape first. It’s not about burying your head in the sand—it’s about being intentional with your attention budget.

The traditional approach of “pay careful attention to everything” made sense in a world of vetted textbooks and curated libraries. But on the unvetted internet, that approach often ends up being a colossal waste of time and energy. The admonition to “pay careful attention” is exactly what attention thieves exploit.

Making It Work For You

Start by taking inventory of your information landscape—all the apps, websites, notifications, and sources competing for your attention. Which ones consistently deliver value? Which ones leave you feeling manipulated, angry, or stressed? Practice self-nudging by removing or limiting access to the latter category.

When you encounter a new source making bold claims, resist the urge to dive deep into their content immediately. Instead, spend a minute or two doing lateral reading. Search for “who runs [site name]” or “[organization name] funding.” You’ll be amazed how quickly you can identify whether something deserves your time.

And when you see obvious rage-bait or trolling, practice the “scroll on by” technique. Your attention is valuable—don’t give it away for free to people trying to manipulate you.

Critical ignoring isn’t about being less informed. It’s about being better informed by focusing your limited cognitive resources on reliable sources and meaningful content rather than letting the algorithm’s latest outrage-of-the-day consume your mental bandwidth.

Sources:         

Kozyreva, A., Wineburg, S., Lewandowsky, S., & Hertwig, R. (2023). Critical Ignoring as a Core Competence for Digital Citizens. Current Directions in Psychological Science, 32(1), 81-88. https://journals.sagepub.com/doi/10.1177/09637214221121570

Full text also available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7615324/

Interview with lead researcher: https://www.mpg.de/19554217/new-digital-competencies-critical-ignoring

Mims, Christopher. “Your Key Survival Skill for 2026: Critical Ignoring.” The Wall Street Journal, January 3, 2026.

American Psychological Association.  https://www.apa.org/news/podcasts/speaking-of-psychology/attention-spans

Lane, S. & Atchley, P. “Human Capacity in the Attention Economy”, American Psychological Association, 2020.
