Grumpy opinions about everything.


13 Stars, Betsy Ross and the Story of the American Flag

On a steamy June day in 1777, the Continental Congress took a brief break from the monumental task of running a revolution to deal with something that seems surprisingly simple in retrospect: what should the American flag look like? The resolution they passed on June 14th was refreshingly concise, stating that “the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.”

That poetic phrase about a “new constellation” turned out to be both inspiring and maddeningly vague. Congress didn’t specify how the stars should be arranged, how many points they should have, or even whether the flag should start with a red or white stripe at the top. This ambiguity led to one of the interesting aspects of early American flag history—for decades, no two flags looked exactly alike.

The 1777 resolution came out of Congress’s Marine Committee business, and at least some historians caution that it may have been understood initially as a naval ensign, not a fully standardized “national flag for all uses.”

A Constellation of Designs

The lack of official guidance meant that flag makers exercised considerable artistic freedom. Smithsonian researcher Grace Rogers Cooper found at least 17 different examples of 13-star flags dating from 1779 to around 1796, and flag expert Jeff Bridgman has documented 32 different star arrangements from the era. Some makers arranged the stars in neat rows, others formed them into a single large star, and still others created elaborate patterns that spelled out “U.S.” or formed other symbolic shapes.  An official star pattern would not be specified until 1912 and versions of the 13-star flag remained in ceremonial use until the mid-1800s.

The most famous arrangement, of course, is the Betsy Ross design with its circle of 13 stars. What many people don’t realize is that experts date the earliest known example of this circular pattern to 1792—in a painting by John Trumbull, not on an actual flag from 1776.

Did the Continental Army Actually Use This Flag?

Here’s where things get interesting and a bit murky. The short answer is: not much, and not right away. The Continental Army had been fighting for over two years before Congress even adopted the Stars and Stripes, and by that point, individual regiments had already developed their own distinctive colors and banners. These regimental flags served practical military purposes—they helped units identify each other in the chaos of battle and gave soldiers something to rally around.  Additionally, the Continental Army frequently used the Grand Union Flag (13 stripes with a British Union in the canton), which predates the 13-star design.

What’s more revealing is a series of letters from 1779—two full years after the Flag Resolution—between George Washington and Richard Peters, Secretary of the Board of War. In these letters, Peters is essentially asking Washington what flag he wants the army to use. This correspondence raises an obvious question: if Congress had settled the flag issue in 1777, why was Washington still trying to figure it out in 1779? The evidence suggests that variations of the 13-star flag were primarily used by the Navy in those early years, while the Army continued to use various regimental standards.

Navy Captain John Manley expressed this confusion perfectly when he wrote in 1779 that the United States “had no national colors” and that each ship simply flew whatever flag the captain preferred. Even as late as 1779, the War Board hadn’t settled on a standard design for the Army. When they finally wrote to Washington for his input, they proposed a flag that included a serpent and numbers representing different states—a design that never caught on.

National “stars and stripes” banners did exist during the late war years and appear in some period art and descriptions, but clear, securely dated 13‑star Army battle flags are rare and often disputed. Thirteen-star flags are better documented in early federal service, such as maritime and lighthouse use in the 1790s, than in Continental Army field service before 1783.

The Betsy Ross Question

Now we come to one of America’s most enduring flag legends. The story is familiar to most Americans: in 1776, George Washington, Robert Morris, and George Ross visited Philadelphia upholsterer Betsy Ross and asked her to sew the first American flag. She suggested changing the six-pointed stars to five-pointed ones, demonstrated her one-snip technique for making a perfect five-pointed star, and then produced the first Stars and Stripes.

It’s a great story. There’s just one problem: historians have found virtually no documentary evidence to support it. The tale didn’t surface publicly until 1870—nearly a century after the supposed event—when Betsy Ross’s grandson, William Canby, presented it in a speech to the Historical Society of Pennsylvania. Canby relied entirely on family oral history, including affidavits from Ross’s daughter, granddaughter, and other relatives who claimed they had heard Betsy tell the story herself. But Canby himself admitted that his search through official records revealed nothing to corroborate the account.

Historians don’t dispute that Betsy Ross was a real person who did real work. Documentary evidence shows that on May 29, 1777, the Pennsylvania State Navy Board paid her a substantial sum for “making ships colours.” She ran a successful upholstery business and continued making flags for the government for more than 50 years. But as historian Marla Miller puts it, “The flag, like the Revolution it represents, was the work of many hands.” Modern scholars generally view the question not as whether Ross designed the flag—she almost certainly didn’t—but whether she may have been among the many people who produced early flags.

Who Really Designed It?

If not Betsy Ross, then who? The strongest candidate is Francis Hopkinson, the New Jersey delegate to the Continental Congress who also helped design the Great Seal of the United States and early American currency. In 1780, Hopkinson sent Congress a bill requesting payment for his design work, specifically mentioning “the flag of the United States of America.” He likely designed a flag with the stars arranged in rows rather than circles, and his bills for payment submitted to Congress mentioned six-pointed stars rather than the five-pointed ones that became standard.

 Unfortunately for Hopkinson, Congress refused to pay him, arguing that he wasn’t the only person on the Navy Committee and therefore shouldn’t receive singular credit or compensation.

The irony is rich: Hopkinson was asking for a quarter cask of wine or £2,700 for designing what would become one of the world’s most recognizable symbols.  Congress essentially told him, “Thanks, but we’re not paying.” There’s a lesson about government contracts in there somewhere.

What Survived

Of the hundreds of flags made and carried during the Revolutionary War, only about 30 are known to survive today. These rare artifacts offer fascinating glimpses into how Americans visualized their new nation. The Museum of the American Revolution brought together 17 of these original flags in a 2025 exhibition—the largest gathering of such flags since 1783.

The most significant surviving 13-star flag is probably Washington’s Headquarters Standard, a small blue silk flag measuring about two feet by three feet. It features 13 white, six-pointed stars on a blue field and descended through George Washington’s family with the tradition that it marked the General’s presence on the battlefield throughout the war. Experts consider it the earliest surviving 13-star American flag. Due to light damage, it can only be displayed on special occasions.

Other surviving flags tell different stories. The Brandywine Flag, used at the September 1777 battle of the same name, is one of the earliest stars and stripes—the flag is red, with a red and white American flag image in the canton.

The Dansey Flag, captured from the Delaware militia by a British soldier, was taken to England as a war trophy and remained in his family until 1927. The flag features a green field with 13 alternating red and white stripes in the upper left corner, signifying the 13 colonies.

These and other flags weren’t just military equipment—they were powerful symbols that people fought under and, sometimes, died defending.

The Bigger Picture

What makes the story of the 13-star flag so compelling isn’t really about who sewed it or exactly when it first flew. It’s about what the flag represented in an era when the very concept of the United States was still being invented. The June 1777 resolution called for stars forming “a new constellation”—a beautiful metaphor for a new nation finding its place among the powers of the world.

The fact that no two early flags looked exactly alike might seem like a problem from our standardized modern perspective, but it tells us something important about the Revolution itself. Just as the colonies were learning to act as united states while maintaining their individual identities, flag makers across the new nation were interpreting a simple congressional resolution in their own ways, creating variations on a shared theme.

As historian Laurel Thatcher Ulrich points out, there was no “first flag” worth arguing over. The American flag evolved organically, shaped by the practical needs of the Navy, the Army, militias, and civilian flag makers who each contributed to its development. Whether Betsy Ross made one of those early flags or not, her story endures because it captures something Americans want to believe about our origins: that ordinary citizens, working in small shops and homes, helped create the symbols of the new republic.

Sources:

History.com: https://www.history.com/this-day-in-history/june-14/congress-adopts-the-stars-and-stripes

Flags of the World: https://www.crwflags.com/fotw/flags/us-1777.html

Wikipedia Flag of the United States: https://en.wikipedia.org/wiki/Flag_of_the_United_States

Museum of the American Revolution: https://www.amrevmuseum.org/

American Battlefield Trust: https://www.battlefields.org/learn/articles/short-history-united-states-flag

US History (Betsy Ross): https://www.ushistory.org/betsy/

Library of Congress “Today in History”: https://www.loc.gov/item/today-in-history/june-14/

Flag images from Wikimedia Commons

The Price Tag Mystery: Why Nobody Really Knows What Healthcare Costs in America

Imagine walking into a store where nothing has a price tag. When you get to the register, the cashier scans your items and tells you the total—but that total is different for every customer. Your neighbor might pay $50 for the same items that cost you $200. The store won’t tell you why, and you won’t find out until after you’ve already “bought” everything.

Welcome to American healthcare, where the simple question “how much does this cost?” has no simple answer.

You might think I’m exaggerating, but the evidence suggests otherwise. Research published in late 2023 by PatientRightsAdvocate.org found that prices for the same medical procedure can vary by more than 10 times within a single hospital depending on which insurance plan you have, and by as much as 33 times across different hospitals. A knee replacement that costs around $23,170 in Baltimore might run $58,193 in New York. An emergency department visit that one facility charges $486 for might cost $3,549 at another hospital for the identical service.

The fundamental problem is that hospitals and doctors don’t have one price for their services. They have dozens, sometimes hundreds, of different prices for the exact same procedure depending on who’s paying. This bizarre system evolved because most healthcare in America isn’t a simple transaction between patient and provider—there’s a third party in the middle called an insurance company, and that changes everything.

The Fiction of Chargemaster Prices

A hospital chargemaster is essentially the hospital’s internal price list—a massive catalog that assigns a dollar amount to every service, supply, test, medication, and procedure the hospital can bill for, from an aspirin to a complex surgery. These listed prices are usually very high and are not what most patients actually pay; instead, the chargemaster functions as a starting point for negotiations with insurers and government programs like Medicare and Medicaid, which typically pay much lower, pre-set rates. What an individual patient ultimately pays depends on several factors layered on top of the chargemaster price. Think of chargemaster prices like the manufacturer’s suggested retail price on a car: technically real, but almost nobody actually pays it.

A hospital might list an MRI at $3,000 or a blood test at $500. But then insurance companies come in. They represent thousands or millions of potential patients, which gives them serious bargaining power. They negotiate with hospitals along these lines: “We’ll send you lots of patients, but only if you give us a discount.” So, the hospital agrees to accept much less—maybe they’ll take $1,200 for that $3,000 MRI or $150 for the blood test. This discounted amount is called the “negotiated rate,” and it’s what the insurance company will really pay.

Here’s where it gets messy: every insurance company negotiates its own rates with every hospital. Blue Cross might negotiate one price, Aetna a different price, UnitedHealthcare yet another. The same exact MRI at the same hospital might be $1,200 for one insurer’s customers and $1,800 for another’s. And these negotiated rates have traditionally been kept secret—treated like confidential business information that gives each party a competitive advantage.

The Write-Off Game

What happens to that difference between the chargemaster price and the negotiated rate? The hospital “writes it off.” That’s accounting language for “we accept that we’re not getting paid this money, and we’re taking it off the books.” If the hospital charged $3,000 but agreed to accept $1,200, they write off $1,800. This isn’t lost money in the normal sense—they never expected to collect it in the first place. The chargemaster prices are inflated specifically because everyone knows discounts are coming. Some hospitals now post “discounted cash prices” that are often far below chargemaster and sometimes even below some negotiated rates. These are sometimes, though not always, offered to uninsured patients, generally referred to as self-pay. There can be a catch—some hospitals require lump-sum payment of the total bill to qualify for the lower price.
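To make the arithmetic concrete, here is a minimal sketch in Python using the invented MRI numbers from the example above. It simply computes the contractual write-off from a chargemaster price and a negotiated rate; it is illustrative only, not any hospital's real billing logic.

```python
# Minimal sketch: how a contractual write-off falls out of the chargemaster
# price and an insurer's negotiated rate. Numbers are invented for illustration.

def contractual_write_off(chargemaster_price: float, negotiated_rate: float) -> float:
    """Amount billed but never expected to be collected, removed from the books."""
    return max(chargemaster_price - negotiated_rate, 0.0)

mri_list_price = 3_000.00   # chargemaster ("list") price
mri_negotiated = 1_200.00   # rate one insurer negotiated for the same MRI

write_off = contractual_write_off(mri_list_price, mri_negotiated)
print(f"Billed: ${mri_list_price:,.2f}  Collected: ${mri_negotiated:,.2f}  Written off: ${write_off:,.2f}")
# Billed: $3,000.00  Collected: $1,200.00  Written off: $1,800.00
```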

According to the American Hospital Association, U.S. hospitals collectively plan to write off approximately $760 billion in billed charges in 2025 across all categories of write-offs. That’s not a typo—$760 billion. These write-offs happen in several different situations. The most common are contractual write-offs, where the provider has agreed to accept less than their list price from insurance companies.

Hospitals have far more write-offs than just contractual. They also write off money for charity care (treating patients who can’t afford to pay anything), and they write off bad debt when patients could pay but don’t. They write off small balances that aren’t worth the administrative cost of collection, and they write off amounts related to various billing errors, denied claims, and coverage disputes. Healthcare providers typically adjust about 10 to 12 percent of their gross revenue due to these various write-offs and claim adjustments.

Why Such Wild Variation?

Even with all these negotiated discounts built into the system, the prices still vary enormously. A 2024 study from the Baker Institute found that for emergency department visits, prices charged by hospitals in the top 10% can be three to seven times higher than those charged by hospitals in the bottom 10% for the identical procedure. Research published in Health Affairs Scholar in early 2025 found that even after adjusting for differences between insurers and procedures, the top 25% of prices across all states are 48 percent higher than the bottom 25% of prices for inpatient services.

Several factors drive this variation. Hospitals in areas with less competition can charge more because insurers have fewer alternatives for negotiation. Prestigious hospitals can demand higher rates because insurers want them in their networks to attract customers. Some insurance companies have more bargaining power than others based on their market share. There’s no central authority setting prices—it’s all private negotiations, hospital by hospital, insurer by insurer, procedure by procedure.

For patients, this creates a nightmare scenario. Even if you have insurance, you usually have no idea what you’ll pay until after you’ve received care. Your out-of-pocket costs depend on your deductible (the amount you pay before insurance kicks in), your copay or coinsurance (your share after insurance starts paying), and whether the negotiated rate between your specific insurance and that specific hospital is high or low. Two people with different insurance plans getting the same procedure at the same hospital on the same day can end up with drastically different bills.
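As a rough illustration of why that happens, here is a simplified sketch. Real plans add copays, out-of-pocket maximums, and network tiers, and the plan figures below are invented for illustration only.

```python
# Simplified sketch: two insured patients, same procedure, same hospital,
# very different bills. Real plans also have copays and out-of-pocket maximums.

def patient_share(negotiated_rate: float, deductible_remaining: float,
                  coinsurance: float) -> float:
    """Patient pays toward the deductible first, then a coinsurance share of the rest."""
    toward_deductible = min(negotiated_rate, deductible_remaining)
    coinsurance_share = (negotiated_rate - toward_deductible) * coinsurance
    return toward_deductible + coinsurance_share

# Patient A: insurer negotiated $1,200, deductible already met, 10% coinsurance.
print(patient_share(1_200, deductible_remaining=0, coinsurance=0.10))      # 120.0

# Patient B: insurer negotiated $1,800, $1,500 of deductible left, 20% coinsurance.
print(patient_share(1_800, deductible_remaining=1_500, coinsurance=0.20))  # 1560.0
```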

Research using new transparency data confirms this isn’t just anecdotal. A study from early 2025 found that for something as routine as a common office visit, mean prices ranged from $82 with Aetna to $115 with UnitedHealth. Within individual insurance companies, the price of the top 25% of office visits was 20 to 50 percent higher than the bottom 25%, meaning even within one insurer’s network, where you go or where you live makes a huge difference.

The Government Steps In

The federal government finally said “enough” and started requiring transparency. Since 2021, hospitals must post their prices online, including what they’ve negotiated with each insurance company. The Centers for Medicare and Medicaid Services (CMS) strengthened these requirements in 2024, mandating standardized formats and increasing enforcement. Health insurance plans face similar requirements to disclose their negotiated rates.

The theory was straightforward: if patients could see prices ahead of time, they could shop around, which would force prices down through competition. CMS estimated this could save as much as $80 billion by 2025. The idea seemed sound—transparency works in other markets, so why not healthcare?

In practice, it’s been messy. A Government Accountability Office (GAO) report from October 2024 found that while hospitals are posting data, stakeholders like health plans and employers have raised serious concerns about data quality. They’ve encountered inconsistent file formats, extremely complex pricing structures, and data that appears to be incomplete or possibly inaccurate. Even when hospitals post the required information, it’s often so convoluted that comparing prices across facilities becomes nearly impossible for average consumers.

An Office of Inspector General report from November 2024 found that not all selected hospitals were complying with the transparency requirements in the first place. And CMS still doesn’t have robust mechanisms to verify whether the data being posted is accurate and complete. The GAO recommended that CMS assess whether hospital pricing data are sufficiently complete and accurate to be usable, and determine whether additional enforcement is needed.

Imagine trying to comparison shop when one store lists prices in dollars, another in euros, and a third uses a proprietary currency they invented. That’s roughly where we are with healthcare price data—technically available, but practically unusable for most people trying to make informed decisions.

In 2025, President Trump signed a new executive order aimed at strengthening enforcement of price transparency rules and directing agencies to standardize and make hospital and insurer pricing information more accessible; this action built on rather than reduced the earlier requirements. Hopefully this will improve patients’ ability to learn real costs, but it is my opinion that the industry will continue to resist full and open compliance.

The Limits of Shopping for Healthcare

There’s also a deeper philosophical problem: for healthcare to work like a normal market where price transparency drives competition, patients would need to be able to shop around based on price. That could work for scheduled procedures like knee replacements, colonoscopies, or elective surgeries. You have time to research, compare, and choose.

But it doesn’t work at all when you’re having a heart attack, or your child breaks their arm. You go to the nearest hospital, period. You’re not calling around asking about prices while someone’s having a medical emergency. Even for non-emergencies, choosing based on price assumes equal quality across providers, which isn’t always true and is even harder to assess than price itself.

A study on price transparency tools found mixed results on whether they truly reduce spending. Some research shows modest savings when people use price comparison tools for shoppable services like imaging and lab work. But utilization of these tools remains low, and for many healthcare encounters, price shopping simply isn’t practical or appropriate.

Who Really Knows?

So, who truly understands what things cost in this system? Hospital administrators know what different insurers pay them for specific procedures, but that knowledge is limited to their facility. They don’t necessarily know what other hospitals charge. Insurance company executives know what they’ve negotiated with various hospitals in their network, but they haven’t historically shared meaningful price information with their customers in advance. And they don’t know what their competitors have negotiated.

Patients, caught in the middle, often find out their costs only when they receive a bill weeks after treatment. By that point, the care has been delivered, and the financial damage is done. Recent surveys suggest that surprise medical bills remain a significant problem, with many patients receiving unexpected charges from out-of-network providers they didn’t choose or even know were involved in their care.

The people who are starting to get a comprehensive view are researchers and policymakers analyzing the newly available transparency data. Studies published in 2024 and 2025 using these data have given us unprecedented visibility into pricing patterns and variation. But this is aggregate, statistical knowledge—it helps us understand the system but doesn’t necessarily help individual patients figure out what they’ll pay for a specific procedure.

Where We Stand

The transparency regulations represent a genuine attempt to inject some market discipline into healthcare pricing. Making negotiated rates public breaks down the information asymmetry that has allowed prices to vary so wildly. In theory, if patients and employers can see that Hospital A charges twice what Hospital B does for the same procedure, competitive pressure should push prices toward the lower end.

There’s some early evidence this might be working. A study of children’s hospitals found that price variation for common imaging procedures decreased by about 19 percent between 2023 and 2024, though overall prices continued rising. Whether this trend will continue and expand to other types of facilities remains to be seen.  I am concerned that rather than lowering overall prices it may cause hospitals at the lower end to raise their prices closer to those at the higher end.

Significant obstacles remain. The data quality issues need resolution before the information becomes truly usable. Many patients lack either the time, expertise, or practical ability to shop based on price. And the fundamental structure of American healthcare—with its complex interplay of providers, insurers, pharmacy benefit managers, and government programs—means that even perfect price transparency won’t create a simple, straightforward market.

So, to return to the original question: does anyone truly know the cost of medical care in the United States? In an aggregate sense, researchers and policymakers are starting to understand the patterns thanks to transparency requirements. The data are revealing just how variable and opaque pricing has been. But as a practical matter for individual patients trying to figure out what they’ll pay for needed care, not really. The information is becoming available but remains largely inaccessible or incomprehensible for ordinary people trying to make informed healthcare decisions.

The $760 billion in annual write-offs tells you everything you need to know: the posted prices are largely fictional, the negotiated prices vary wildly, and the system has evolved to be so complex that even the people operating within it struggle to understand the full picture. We’re making progress toward transparency, but we’re a long way from a healthcare system where patients can confidently get the answer to the simple question: “How much will this cost?”

A closing thought: All of this could be solved by development of a single-payer healthcare system such as I proposed in my previous post America’s Healthcare Paradox: Why We Pay Double and Get Less.

Hepatitis B Vaccine: Three Shots and You’re Done for Life?

If you’re trying to figure out whether you need a hepatitis B vaccine or wondering if the one you got years ago is still protecting you, you’re not alone. The hepatitis B vaccine is one of those medical interventions that raises straightforward questions: How many shots do you need? And does it really last forever?  I thought I should follow up last week’s general discussion of hepatitis with some specifics on this vaccine.

The Shot Schedule

The traditional hepatitis B vaccine series requires three shots spaced over six months. You get the first dose, then return for a second shot one to two months later and finally complete the series with a third dose at the six-month mark.  There is also a combination hepatitis A and B vaccine that follows the same schedule. This schedule has been the standard for decades and works well for both children and adults.

But here’s something newer: In 2017, the FDA approved a two-dose hepatitis B vaccine called Heplisav-B for adults 18 and older. With this option, you only need two shots spaced one month apart. For parents of young children, there is Pediarix, a combination vaccine that bundles hepatitis B protection with vaccines for other diseases, streamlining the infant immunization schedule.

Does It Really Last a Lifetime?

This is where the science gets interesting. The short answer is yes, for most people the protection appears to be lifelong. But the mechanism behind this is more nuanced than you might expect.

After you complete the vaccine series, your body produces antibodies against hepatitis B. Over time—sometimes after just a few years—the level of these antibodies in your blood can decline to the point where they’re barely detectable or even undetectable. On the surface, that sounds concerning. But here’s the key: your immune system has memory.

Even when antibody levels drop, your body retains specialized immune cells that “remember” hepatitis B. If you encounter the virus years or decades later, these memory cells spring into action, rapidly producing new antibodies to fight off the infection before it can establish itself. Researchers have followed vaccinated individuals for more than 30 years and found that this immune memory remains protective even when blood tests show low antibody levels.

Who Might Need a Booster?

For most people with healthy immune systems, the CDC doesn’t recommend booster shots. Once you’ve completed the series and your body has responded appropriately, you’re considered protected. However, there are exceptions. People with compromised immune systems—such as those undergoing dialysis, living with HIV, or taking immunosuppressive medications—may need periodic booster doses. These individuals should work with their healthcare providers to monitor their antibody levels and determine if additional shots are necessary.

The Bottom Line

The hepatitis B vaccine is a three-shot series (or two shots with the newer formulation) that provides protection that researchers believe lasts a lifetime for most people. While your antibody levels might decline over the years, your immune system’s memory keeps you safe. It’s one of those rare cases where you can check something off your health to-do list and genuinely move on.


Critical Ignoring: The Skill You Didn’t Know You Needed

You’ve probably spent years learning how to pay attention—reading closely, analyzing deeply, and thinking critically. But here’s something nobody taught you in school: in today’s digital world, knowing what not to pay attention to might be just as important as knowing what deserves your focus.

That’s the essence of critical ignoring, a concept developed by researchers Anastasia Kozyreva, Sam Wineburg, Stephan Lewandowsky, and Ralph Hertwig. It’s basically the skill of deliberately and strategically choosing what information to ignore so you can invest your limited attention where it truly matters. I first became aware of this concept just a few weeks ago while reading an article by Christopher Mims in the Wall Street Journal.

Why This Matters Now

Think about your typical day online. You’re bombarded with news alerts, social media posts, clickbait headlines, and outrage-inducing content designed specifically to hijack your attention. Traditional advice tells you to carefully evaluate each source, read critically, and fact-check thoroughly. But here’s the problem: if you’re investing serious mental energy evaluating sources that should have been ignored in the first place, your attention has already been stolen.

The researchers make a crucial observation about how the digital world has changed the game. In the past, information was scarce and we had to seek it out. Now we’re drowning in it, and much of it is deliberately designed to be attention-grabbing through tactics like sparking curiosity, outrage, or anger. Our attention has become the scarce resource that advertisers and content providers are constantly trying to seize and exploit.

Critical ignoring is not sticking your head in the sand or refusing to hear anything that challenges you. Apathy is “I don’t care about any of this.”  Critical ignoring is “I care enough to be selective, so that I can focus on what truly matters.”  Denial is “I refuse to believe or even look at uncomfortable evidence.” Critical ignoring is “I’m not going to invest my time in sources that are clearly unreliable, or in discussions that are going nowhere, so I can better examine serious evidence elsewhere.”

The key distinction is that critical ignoring always serves better judgment, not comfort at any cost.

How To Actually Do It

The researchers outline three practical strategies you can use right away:

Self-Nudging: This is about redesigning your digital environment to remove temptations before they become problems. Think of it as changing your information ecosystem. Instead of relying on willpower alone, you might unsubscribe from inflammatory newsletters, turn off news notifications that stress you out, or use browser extensions to block certain websites during work hours. The idea is to design your environment so you can implement the resolutions you’ve made.

Lateral Reading: This one’s particularly clever. Instead of reading a website from top to bottom the way most of us were taught, professional fact-checkers open another browser tab and quickly research who’s behind the source. That way, you spend sixty seconds searching for information about the source rather than spending twenty minutes carefully reading content from a source that turns out to be backed by a lobbying group or known misinformation peddler. The researchers note this is often faster and more effective than trying to critically evaluate the content itself.

Don’t Feed the Trolls: This strategy advises you not to reward malicious actors with your attention.  When you encounter inflammatory comments, deliberately misleading posts, or content clearly designed to provoke anger, the best response is often no response at all. Engaging with trolls or bad-faith content just amplifies it and wastes your mental energy.

I’ll Add Another

Ignore the Influencers: Refuse to click on miracle‑cure headlines or anecdote‑driven threads when you can go directly to professional medical sources, systematic reviews, or guidelines from reliable sources.  Ignore influencers’ health claims unless they clearly cite solid evidence and expertise.

The Bigger Picture

What makes critical ignoring different from just being selective is that it’s strategic and informed. To know what to ignore, you need to understand the landscape first. It’s not about burying your head in the sand—it’s about being intentional with your attention budget.

The traditional approach of “pay careful attention to everything” made sense in a world of vetted textbooks and curated libraries. But on the unvetted internet, that approach often ends up being a colossal waste of time and energy. The admonition to “pay careful attention” is exactly what attention thieves exploit.

Making It Work For You

Start by taking inventory of your information landscape—all the apps, websites, notifications, and sources competing for your attention. Which ones consistently deliver value? Which ones leave you feeling manipulated, angry, or stressed? Practice self-nudging by removing or limiting access to the latter category.

When you encounter a new source making bold claims, resist the urge to dive deep into their content immediately. Instead, spend a minute or two doing lateral reading. Search for “who runs [site name]” or “[organization name] funding.” You’ll be amazed how quickly you can identify whether something deserves your time.

And when you see obvious rage-bait or trolling, practice the “scroll on by” technique. Your attention is valuable—don’t give it away for free to people trying to manipulate you.

Critical ignoring isn’t about being less informed. It’s about being better informed by focusing your limited cognitive resources on reliable sources and meaningful content rather than letting the algorithm’s latest outrage-of-the-day consume your mental bandwidth.

Sources:         

Kozyreva, A., Wineburg, S., Lewandowsky, S., & Hertwig, R. (2023). Critical Ignoring as a Core Competence for Digital Citizens. Current Directions in Psychological Science, 32(1), 81-88. https://journals.sagepub.com/doi/10.1177/09637214221121570

∙ Full text also available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7615324/

∙ Interview with lead researcher: https://www.mpg.de/19554217/new-digital-competencies-critical-ignoring

Mims, Christopher. “Your Key Survival Skill for 2026: Critical Ignoring.” The Wall Street Journal, January 3, 2026.

American Psychological Association.  https://www.apa.org/news/podcasts/speaking-of-psychology/attention-spans

Lane, S. & Atchley, P. “Human Capacity in the Attention Economy”, American Psychological Association, 2020.

Assessing the Trump-Orwell Comparisons: Warning, Not Prophecy

The comparison between the Trump administration and George Orwell’s dystopian works has recently become one of the most prevalent political metaphors, one I’ve used myself. Following Trump’s second inauguration in January 2025, sales of 1984 surged once again on Amazon’s bestseller lists, just as they did during his first term.

These comparisons are rhetorically powerful, but their accuracy depends on how literally Orwell is read and how carefully distinctions are drawn between authoritarian warning signs and fully realized totalitarian systems. So how accurate are they? Let me walk you through the key parallels, the evidence supporting them, and the critical questions we should be asking.

Understanding Orwell’s Core Themes

Before diving into the comparisons, it’s worth revisiting what Orwell was actually warning us about. In 1984, published in 1949, Orwell depicted a totalitarian state where the Party manipulates reality through “Newspeak” (language control), “doublethink” (holding contradictory beliefs), the “memory hole” (historical revision), and constant surveillance by Big Brother. The novel’s famous slogans—”War is Peace, Freedom is Slavery, Ignorance is Strength”—exemplify how the Party inverts the very meaning of words.

Animal Farm, written as an allegory of the Soviet Union under Stalin, traces how a revolutionary movement devolves into dictatorship. The pigs, led by Napoleon, gradually corrupt the founding principles of equality, with Squealer serving as the regime’s propaganda minister who constantly rewrites history and justifies Napoleon’s increasingly authoritarian actions.

The Major Parallels

The most famous early comparison emerged during Trump’s first term when adviser Kellyanne Conway defended false crowd size claims with the phrase “alternative facts.” This triggered the first major 1984 sales spike in 2017. According to multiple sources, critics immediately drew connections to Orwell’s concept of manipulating language to control thought.

In the current administration, commentators have identified several Orwellian language patterns. The administration has restricted use of certain words on government websites—including “female,” “Black,” “gender,” and “sexuality”—reminiscent of how Newspeak aimed to “narrow the range of thought” by eliminating words. An executive order on January 29, 2025, titled “Ending Radical Indoctrination in K-12 Schooling” has been criticized as doublespeak, using the language of educational freedom while actually restricting what can be taught. The term “doublespeak,” though never used by Orwell himself, later evolved as a blend of his concepts of newspeak and doublethink.

Perhaps the most concrete parallel involves the systematic deletion of historical content from government websites. The Organization of American Historians condemned the administration’s efforts to “reflect a glorified narrative while suppressing the voices of historically excluded groups”. Specific documented deletions include information about Harriet Tubman, the Tuskegee Airmen (later restored after public outcry), the Enola Gay airplane (accidentally caught in a purge of anything containing “gay”), and nearly 400 books removed from the U.S. Naval Academy library relating to diversity topics. The Smithsonian’s National Museum of American History also removed references to Trump’s impeachments from its “Limits of Presidential Power” exhibit, which critics including Senator Adam Schiff called “Orwellian”.

Trump’s repeated characterization of political opponents as the “enemy from within” and the media as the “enemy of the people” parallels 1984’s Emmanuel Goldstein figure and the ritualized Two Minutes Hate sessions. One analysis suggests Trump leads Americans through “a succession of Two Minute Hates—of freeloading Europeans, prevaricating Panamanians, vile Venezuelans, Black South Africans, corrupt humanitarians, illegal immigrants, and lazy Federal workers”.

Multiple sources document that new White House staff must undergo “loyalty tests” and some face polygraph examinations. Trump’s statement “I need loyalty. I expect loyalty” echoes 1984’s declaration that “There will be no loyalty, except loyalty to the Party”. Within weeks of his second inauguration, Trump dismissed dozens of inspectors general—the internal government watchdogs. According to reports from Politico and Reuters, several have filed lawsuits claiming their removal violated federal law. An executive order titled “Ensuring Accountability for All Agencies” placed previously independent agencies like the SEC and FTC under direct White House supervision.

The Animal Farm Connections

While 1984 gets more attention, Stanford literature professor Alex Woloch argues that Animal Farm might be more relevant because “it traces that sense of a ‘slippery slope'” from democracy to totalitarianism, whereas in 1984 the totalitarian system is already fully established.

There are echoes of Animal Farm in the way populist rhetoric has framed liberals, progressive institutions, and the press as enemies of “the people,” even as power is consolidated within Trump’s narrow leadership circle. Orwell’s pigs do not abandon revolutionary language; they repurpose it. The “ordinary” supporters are exhorted to endure sacrifices and to direct anger at opposing groups, while political insiders consolidate authority and wealth—echoing the pigs’ gradual move into the farmhouse and adoption of human privileges. Critics argue that Trump’s sustained use of grievance-based populism, even while wielding executive power, fits this pattern symbolically if not structurally.

Other parallels drawn to Animal Farm include the administration’s communication strategy of inverting reality, much as Napoleon’s propaganda minister Squealer did, and the gradual corruption of founding principles while maintaining revolutionary rhetoric like “drain the swamp.” Critics also point to the scapegoating of political opponents and immigrants, much as Napoleon blamed Snowball for every problem, and to the taking of credit for others’ achievements, just as Napoleon did with the other animals’ work. In the novel, Napoleon demands full investigations of Snowball even after discovering he had nothing to do with alleged misdeeds, much as Trump demanded investigations of Hillary Clinton, James Comey, Letitia James, and Jerome Powell while avoiding scrutiny of his own conduct.

As in Orwell’s farm, where the constant invoking of enemies keeps the animals fearful and loyal, the politics of permanent crisis and blame are being used to normalize increasingly aggressive behavior by those in power.

Critical Perspectives and Limitations

These comparisons raise several important concerns that deserve serious consideration. Orwell was writing about actual totalitarian regimes—Stalinist Russia and Nazi Germany—where millions died in purges, gulags, and genocides. The United States in 2026, despite concerning trends, still maintains functioning courts, elections, a free press, and a civil society. Some observers are warning against trivializing real authoritarian regimes by making overstated comparisons.

The Trump administration’s frequent attacks on the press, civil servants, and election administrators do resemble early warning signs Orwell would have recognized—not as proof of totalitarianism, but as a stress test on democratic norms.

Conservative commentators argue that these comparisons are exaggerated partisan attacks that misrepresent Trump’s actions. They point out that some court challenges to administration actions have succeeded, media criticism continues unabated, and political opposition remains robust—none of which would be possible in Orwell’s Oceania. The question becomes whether we’re witnessing isolated, though concerning actions or rather a systematic pattern—what Professor Woloch calls the “slippery slope” question.

One opinion piece suggested Trump’s actions resemble the chaotic, rule-breaking fraternity culture of “Animal House” more than the calculated totalitarianism of Orwell’s works—emphasizing bombast and spectacle over systematic control. This view argues that the MAGA movement is more “Blutonian than Orwellian,” driven by emotional appeals and personality rather than systematic thought control.

Where the Comparisons Are Strongest and Weakest

Based on my analysis, the comparisons appear most accurate in several specific areas. The pattern of language manipulation and redefinition—calling restrictions “freedom” and censorship “transparency”—closely mirrors doublespeak. The documented systematic removal of historical content from government sources directly parallels the memory hole concept. The dismissal of senior officials such as the head of the Bureau of Labor Statistics after an unfavorable jobs report, the wholesale firing of agency inspectors general, and the signaling that neutral experts should conform to political expectations all mirror the Orwellian demand for loyalty. The assumption of control over previously independent agencies and the pressure on courts to allow the administration’s consolidation of power have parallels in the Party’s total control. Unleashing ICE agents on the general public and excusing the murder of protesters are chillingly similar to the Thought Police and the “vaporizing” of citizens in Oceania. Perhaps most strikingly, Trump’s 2018 statement “What you’re seeing and what you’re reading is not what’s happening” nearly quotes Orwell’s line: “The party told you to reject the evidence of your eyes and ears”.

The comparisons are most strained when they overstate the current reality by suggesting America has already become Oceania, while democratic institutions that were entirely absent in Oceania are still functioning in America. Unlike 1984’s Winston, Americans retain significant ability to resist and organize. There is no single state monopoly over information. State and local governments and civil society remain vigorous and are often hostile to Trump. Additionally, some comparisons conflate authoritarian-sounding rhetoric with actual totalitarian control, which aren’t equivalent.

Speculation: The Trajectory Question

The pattern of actions I’ve documented—systematic information control, loyalty purges, attacks on institutional independence, and explicit statements about seeking a third term—suggests a consistent direction rather than random actions. If these trends continue unchecked, particularly combined with further erosion of electoral integrity, increased prosecution of political opponents through mechanisms like the “Weaponization Working Group,” greater control over media and information, and weakening of judicial independence, then the slide toward authoritarianism could accelerate. As I am writing this article, Trump continues to promote what he calls the “Board of Peace,” a proposed international organization that is an attempt to create a U.S.-led alternative to the United Nations. The scholar Alfred McCoy notes that Trump appears to be pursuing what Orwell described: a world divided into three regional blocs under strongman leaders, with weakened international institutions.

However, several factors may counter this trajectory. Strong civil society and activist movements continue organizing opposition. Independent state governments push back against federal overreach, and robust legal challenges have blocked numerous executive actions. The free press continues investigative reporting despite attacks. Congressional resistance still exists—even Senator Booker’s 25-hour speech on constitutional abuse entered the Congressional Record as a permanent historical marker.

My speculation is that the most likely outcome is neither complete Orwellian dystopia nor a comfortable return to democratic norms, but rather what political scientists call “competitive authoritarianism” or “illiberal democracy”—where democratic forms persist but are increasingly hollowed out, opposition exists but faces systematic disadvantages, and truth becomes increasingly contested. The key question isn’t whether we’ll replicate 1984 exactly, but whether enough democratic safeguards will hold to prevent sliding further into authoritarianism. One observer standing before a giant banner of Trump’s face in Washington noted that “Orwell’s world isn’t just fiction. It’s a mirror—reflecting what happens when power faces no resistance, when truth bends to loyalty, and when silence becomes the safest response”.

The Bottom Line

The Orwell comparisons aren’t perfect historical analogies, but they’re not baseless partisan rhetoric either. They identify genuine patterns of authoritarian behavior that merit serious attention—the manipulation of language to distort reality, the systematic rewriting of historical narratives, the demand for personal loyalty over institutional integrity, and the rejection of shared factual reality. I am concerned about the increasing use of Nazi-inspired phrases and themes by members of the Trump administration, most recently Kristi Noem’s use of the phrase “one of us-all of you”. While not a formal written Nazi policy, that phrase reflects Nazi practice in dealing with partisan attacks in occupied countries and can only be viewed as a threat of violence against American citizens.

Whether these patterns represent isolated troubling actions or the beginnings of systematic democratic erosion remains the crucial—and still open—question. As Orwell himself noted, he didn’t write to predict the future but to prevent it. The value of these comparisons may ultimately lie not in their precision as historical parallels, but in their power to alert citizens to concerning trends before they become irreversible.

Key Sources

  • Organization of American Historians statements on historical revisionism
  • Politico and Reuters reporting on inspector general firings
  • The Washington Post and Axios on executive order impacts
  • Stanford Professor Alex Woloch’s analysis in The World (https://theworld.org/stories/2017/01/25/people-are-saying-trumps-government-orwellian-what-does-actually-mean)
  • World Press Institute analysis (https://worldpressinstitute.org/the-orwell-effect-how-2025-america-felt-like-198/)
  • Adam Gopnik, “Orwell’s ‘1984’ and Trump’s America,” The New Yorker, Jan. 26, 2017.
  • “Trump’s America: Rethinking 1984 and Brave New World,” Monthly Review, Sept. 7, 2025.
  • “False or misleading statements by Donald Trump,” Wikipedia (overview of documented falsehoods).
  • “Trump’s Efforts to Control Information Echo an Authoritarian Playbook,” The New York Times, Aug. 3, 2025.
  • “Trump’s 7 most authoritarian moves so far,” CNN Politics, Aug. 13, 2025.
  • “The Orwellian echoes in Trump’s push for ‘Americanism’ at the Smithsonian,” The Conversation, Aug. 20, 2025.
  • “Everything Is Content for the ‘Clicktatorship’,” WIRED, Jan. 13, 2026.
  • “’Animal Farm’ Perfectly Describes Life in the Era of Donald Trump,” Observer, May 8, 2017.
  • “Ditch the ‘Animal Farm’ Mentality in Resisting Trump Policies,” YES! Magazine, May 8, 2017.

Full disclosure: I recently bought a hat that says “Make Orwell Fiction Again”.

Understanding Hepatitis: A Guide to Types A, B, and C

If you’ve heard of hepatitis, you probably know it has something to do with the liver. But there’s a whole family of hepatitis viruses, each with its own personality when it comes to how it spreads, what it does to your body, and how we can prevent or treat it. Let’s walk through the three most common types—hepatitis A, B, and C—and then dive into a controversy that’s making headlines right now: the hepatitis B vaccine.

What Is Hepatitis, Anyway?

At its core, hepatitis just means inflammation of the liver. Your liver is a workhorse organ that filters toxins, produces essential proteins like albumin, processes amino acids, and stores energy. When a hepatitis virus attacks it, the inflammation can range from a minor inconvenience to a life-threatening condition. The three main culprits—hepatitis A, B, and C viruses—are completely different organisms that just happen to target the same organ.

Hepatitis A: The Food and Water Troublemaker

Hepatitis A is often called “traveler’s hepatitis” because it spreads through food and water that are contaminated with fecal matter. Think of it as the virus you might pick up from eating unwashed produce, drinking contaminated water, or consuming raw shellfish from polluted waters. Other risk factors include unprotected sex and IV drug use.  According to the CDC, there were an estimated 3,300 acute infections in 2023 in the United States.

The good news about hepatitis A is that it typically resolves on its own within 2 months. When symptoms appear—usually about 15 to 50 days after infection—they can include jaundice (that yellowing of the skin and eyes), fever, fatigue, nausea, and dark urine. Many young children don’t show any symptoms at all. The virus doesn’t become chronic, and once you’ve had it, your body produces antibodies that protect you for life.

Prevention is straightforward: there’s a safe and effective vaccine, and basic hygiene goes a long way. Wash your hands thoroughly, especially after using the bathroom and before preparing food. When traveling to areas with questionable water quality, stick to bottled or boiled water and avoid washing raw food in local water.

Treatment is mostly supportive—rest, fluids, and time. Your liver does the healing work itself.

Hepatitis B: The Blood and Body Fluid Virus

Hepatitis B is where things get more serious. This virus spreads through blood and other body fluids, which means it can be transmitted through sexual contact, sharing needles, or from mother to baby during childbirth. Healthcare workers are especially at risk from needle sticks and sharps injuries. It’s a highly infectious and tough virus that can live on surfaces for up to a week. Even tiny amounts of dried blood on seemingly innocent things like razors, nail clippers, or toothbrushes can potentially spread the infection.

According to the CDC, there were an estimated 14,400 acute infections in 2023, and approximately 640,000 adults were living with chronic hepatitis B during the 2017-2020 period. That’s what makes it particularly concerning: while the hepatitis B virus often causes only a short-term illness, it can become chronic.

The incubation period is long—typically 90 days with a range of 60 to 150 days. When symptoms do appear, they mirror hepatitis A: jaundice, fatigue, abdominal pain, nausea, and dark urine. But here’s the frightening part: most young children and many adults show no symptoms at all, meaning they can spread the virus without knowing they’re infected.

The chronic infection risk varies dramatically by age. If you’re infected as a newborn, you have a 90% chance of developing chronic hepatitis B. For adults, the risk drops to under 5%. Those with chronic infection face serious long-term consequences—15% to 25% of people with chronic infection develop serious liver disease, including cirrhosis, liver failure, or liver cancer.

Treatment for acute hepatitis B is supportive, but several antiviral medications are available for people with chronic infection. These don’t completely eradicate the virus, but they can suppress it, significantly slowing liver damage and reducing complications.

Prevention is critical. There’s a highly effective vaccine—we’ll talk more about the controversy surrounding it in a moment.  Avoiding exposure to infected blood and body fluids is essential. This means safe sex practices, never sharing needles or personal care items that might have blood on them, and ensuring proper sterilization of medical and tattooing equipment.

Hepatitis C: The Silent Epidemic

Hepatitis C is transmitted primarily through blood-to-blood contact. The most common route is sharing needles among people who inject drugs, though it can also spread through contaminated medical equipment, and rarely through sexual contact. Mother-to-child transmission during childbirth is possible but uncommon.  Screening of blood products has made transfusion related infections rare.  About 10% of cases have no identified source.

What makes hepatitis C insidious is its stealthy nature. Many people with hepatitis C don’t have symptoms, and acute hepatitis with jaundice is rare, occurring in only about 10% of infections. The symptoms that do appear—fatigue, mild flu-like feelings—are easily dismissed. Meanwhile, the majority of people (60-70%) develop chronic infection.  I recommend a screening blood test at least once for all adults over age 55, as they are the group most likely to have hepatitis C without an identifiable source.

The incubation period ranges widely, from 2 weeks to 6 months, typically 6 to 9 weeks. Without treatment, chronic hepatitis C can lead to cirrhosis and liver cancer over decades. Before modern treatments, it was a leading cause of liver transplants.

Treatment for hepatitis C has undergone a revolution. The old approach—interferon injections combined with ribavirin—had terrible side effects and worked in only about half of patients. Today, we have direct-acting antivirals (DAAs), which can cure more than 95% of cases with just 8-12 weeks of well-tolerated oral medication. These drugs target specific proteins the virus needs to replicate, essentially starving it out of existence. The treatment is so effective that hepatitis C is now considered a curable disease.

Prevention focuses on avoiding blood-to-blood contact. Never share needles, syringes, or any drug equipment. If you’re getting a tattoo or piercing, ensure the facility follows proper sterilization procedures. Healthcare workers should follow standard precautions with blood and body fluids. Unfortunately, there’s no vaccine for hepatitis C yet, though researchers continue working on one.

The Hepatitis B Vaccine Controversy: What’s Really Happening

Now let’s address the elephant in the room—the recent controversy over the hepatitis B vaccine for newborns. This topic exploded in the news in December 2025, and it’s worth understanding what’s currently going on versus what the science says.

The Recent Development

On December 5, 2025, the CDC’s Advisory Committee on Immunization Practices (ACIP) voted 8-3 to recommend hepatitis B vaccination at birth only for infants born to mothers who test positive for the virus or whose status is unknown. This reverses decades of policy that recommended universal hepatitis B vaccination for all newborns within 24 hours of birth.

The Arguments For Changing the Policy

Some ACIP members raised concerns about vaccine safety and parental hesitancy. Committee member Retsef Levi heralded the move as “a fundamental change in the approach to this vaccine,” which would encourage parents to “carefully think about whether they want to take the risk of giving another vaccine to their child.” The controversy includes historical concerns about possible links between the hepatitis B vaccine and conditions like multiple sclerosis, autism, and other autoimmune disorders.

What Science Actually Shows

The evidence on vaccine safety is quite robust.  Concerns about multiple sclerosis emerged in France in the 1990s. Since then, a large body of scientific evidence shows that hepatitis B vaccination does not cause or worsen MS. The World Health Organization’s Global Advisory Committee on Vaccine Safety has concluded there is no association between the hepatitis B vaccine and MS.  It is one of the safest vaccines studied.

As for other safety concerns, CDC reviewed VAERS reports from 2005-2015 and found no new or unexpected safety concerns. The most common side effects are minor: soreness at the injection site, headache, and fatigue lasting 1-2 days.

Why the Universal Birth Dose Matters

The scientific and medical communities have strongly opposed this policy change. The American Academy of Pediatrics states that from 2011-2019, rates of reported acute hepatitis B remained low among children and adolescents, likely explained in part by the implementation of childhood hepatitis B vaccine recommendations published in 1991.

Here’s why newborns are so vulnerable: infected infants have a 90% chance of developing chronic hepatitis B, and a quarter of those will die prematurely from liver disease when they become adults.

The “just target high-risk babies” approach has a major flaw: the CDC estimates about 640,000 adults have chronic hepatitis B, but about half don’t know they’re infected. Before universal vaccination, about half of infected children under 10 got it from their mothers—the rest contracted it through other exposures not identified by maternal screening.

The Global Context

Claims that the U.S. is an outlier don’t hold up. As of September 2025, 116 of 194 WHO member states recommend universal hepatitis B birth-dose vaccination. European countries that do not recommend a universal birth dose tend to have much lower hepatitis B incidence and more robust antenatal maternal screening, and most of them still begin routine hepatitis B vaccination at two to three months of age.

The Bottom Line

All three types of hepatitis pose serious health risks, but we have powerful tools to prevent and treat them. Hepatitis A and B have safe, effective vaccines that have dramatically reduced disease rates. Hepatitis C, while lacking a vaccine, is now curable with modern antiviral medications.

The hepatitis B vaccine controversy highlights a broader tension in public health: balancing individual autonomy with community protection. The scientific evidence strongly supports the vaccine’s safety and the effectiveness of universal newborn vaccination in preventing a disease that can be fatal. Multiple studies, decades of safety data, and recommendations from medical organizations worldwide back this up.

For parents making decisions about their newborns, the facts are these: hepatitis B is a serious disease with a high risk of becoming chronic in infants, the vaccine is highly effective at preventing infection, and extensive safety monitoring has found it to be safe with only minor, temporary side effects. As hepatitis research continues, we’re seeing remarkable progress—from the near-eradication of hepatitis A in vaccinated populations to the transformation of hepatitis C from a chronic, often fatal disease to a curable one. These advances remind us how far we’ve come in understanding and combating these liver viruses.

The Strange Tale of Spontaneous Human Combustion

Did you ever run into an idea so strange that you can’t quite understand how anyone ever took it seriously? Recently, while reading about historical curiosities in Pseudoscience by Kang and Pederson, I was reminded of one of the most enduring examples: spontaneous human combustion.

The classic image is always the same. Someone enters a room and finds a small pile of ashes where a person once sat. The body is nearly destroyed, yet the chair beneath it is barely scorched and the rest of the room looks strangely untouched. For centuries, this baffling scene was explained by a dramatic idea—that a person could suddenly burst into flames from the inside, without any external fire at all.

It sounds like something lifted straight from a gothic novel, but belief in spontaneous human combustion stretches back to at least the seventeenth century and reached its peak in the Victorian era. To understand why it gained such traction, it helps to look at the social attitudes of the time, the cases that convinced people it was real, and what modern forensic science eventually uncovered.

Much of the early belief rested on moral judgment rather than evidence. In the nineteenth century, spontaneous human combustion was widely accepted as a kind of divine punishment. Many of the alleged victims were described as heavy drinkers, often elderly, overweight, or socially isolated, and women were frequently overrepresented in the reports. To Victorian minds, this pattern felt meaningful. Alcohol was flammable, after all, and it seemed reasonable—at least then—to assume that a body saturated with spirits might somehow ignite. Sensational newspaper reporting amplified the mystery, presenting lurid details while glossing over inconvenient facts.

The idea gained intellectual credibility in 1746 when Paul Rolli, a Fellow of the Royal Society, formally used the term “spontaneous human combustion” while describing the death of Countess Cornelia Zangari Bandi. The involvement of a respected scientific figure gave the concept legitimacy that lingered for generations.

Several cases became canonical. Countess Bandi’s death in 1731 was described as leaving little more than ashes and partially intact legs, still clothed in stockings. In 1966, John Irving Bentley of Pennsylvania was found almost completely burned except for one leg, with his pipe discovered intact nearby. Mary Reeser, known as the “Cinder Woman,” died in Florida in 1951, leaving behind melted fat embedded in the rug near where she had been sitting. As recently as 2011, an Irish coroner ruled that spontaneous human combustion had caused the death of Michael Faherty, whose body was found in 2010, badly burned near a fireplace in a room that showed little other fire damage. Over roughly three centuries, about two hundred such cases have been cited worldwide.

Believers proposed explanations that ranged from the scientific-sounding to the overtly theological. Alcoholism was the most popular theory, with some physicians genuinely arguing that chronic drinking made the human body combustible. Earlier medical thinking leaned on imbalances of bodily humors, while later writers speculated about unknown chemical reactions producing internal heat. Religious interpretations framed these deaths as punishment for sin. Even in modern times, a few proponents have suggested that acetone buildup in people with alcoholism, diabetes, or extreme diets could somehow trigger combustion.

The idea was so culturally embedded that Charles Dickens famously killed off the alcoholic character Mr. Krook by spontaneous combustion in Bleak House. When critics objected, Dickens defended the plot choice by citing what he believed were credible historical and medical sources.

The illusion of the supernatural persisted because the circumstances were almost perfectly misleading. Victims were typically alone, elderly, or physically impaired, unable to respond quickly to a smoldering fire. The localized damage looked impossible to the untrained eye. Potential ignition sources were often destroyed in the fire itself. And dramatic storytelling filled in the gaps left by incomplete investigations.

What actually happens in these cases is far less mystical and far more unsettling. Modern forensic science points to an explanation known as the “wick effect.” In this scenario, there is always an external ignition source—often a cigarette, candle, lamp, or fireplace ember. Once clothing catches fire, heat melts the person’s body fat. That liquefied fat soaks into the clothing, which then behaves like a candle wick. The fire burns slowly and steadily, sometimes for hours, consuming much of the body while leaving nearby objects relatively unscathed.

This effect has been demonstrated experimentally. In the 1960s, researchers at Leeds University showed that cloth soaked in human fat could sustain a slow burn for extended periods once ignited. In 1998, forensic scientist John de Haan famously replicated the effect for the BBC by burning a pig carcass wrapped in a blanket. The result closely matched classic spontaneous combustion scenes: severe destruction of the body, with extremities left behind and limited damage to the surrounding room.

The reason these fires don’t usually engulf the entire space is simple physics. Flames rise more easily than they spread sideways, and the heat output of a wick-effect fire is relatively localized. It’s similar to standing near a campfire—you can be close without catching fire yourself.

Investigators Joe Nickell and John F. Fischer examined dozens of historical cases and found that every one involved a plausible ignition source, details that earlier accounts often ignored or downplayed. When these factors are restored to the narrative, the mystery largely disappears.

As science writer Benjamin Radford has pointed out, if spontaneous human combustion were truly spontaneous, we would expect it to occur randomly and frequently, in public places as well as private ones. Instead, it consistently appears in situations involving isolation and an external heat source.

The bottom line is straightforward. There is no credible scientific evidence that humans can burst into flames without an external ignition source. What has been labeled spontaneous human combustion is better understood as a tragic combination of accidental fire and the wick effect. The myth endured because it blended moral judgment, fear, and incomplete science into a compelling story. Today, forensic investigation has replaced superstition with explanation, even if the results remain unsettling.

Spontaneous human combustion survives as a reminder of how easily mystery fills the space where evidence is thin—and how patiently applied science eventually closes that gap.


Sources and Further Reading

Peer-reviewed forensic and medical analyses are available through the National Center for Biotechnology Information, including “So-called Spontaneous Human Combustion” in the Journal of Forensic Sciences (https://pubmed.ncbi.nlm.nih.gov/21392004/) and Koljonen and Kluger’s 2012 review, “Spontaneous human combustion in the light of the 21st century,” published in the Journal of Burn Care & Research (https://pubmed.ncbi.nlm.nih.gov/22269823/).

General scientific and historical overviews can be found in Encyclopædia Britannica’s article “Is Spontaneous Human Combustion Real?” (https://www.britannica.com/story/is-spontaneous-human-combustion-real), Scientific American’s discussion of the wick effect (https://www.scientificamerican.com/blog/cocktail-party-physics/burn-baby-burn-understanding-the-wick-effect/), and Live Science’s summary of facts and theories (https://www.livescience.com/42080-spontaneous-human-combustion.html).

Accessible explanatory pieces are also available from HowStuffWorks (https://science.howstuffworks.com/science-vs-myth/unexplained-phenomena/shc.htm), History.com (https://www.history.com/articles/is-spontaneous-human-combustion-real), Mental Floss (https://www.mentalfloss.com/article/22236/quick-7-seven-cases-spontaneous-human-combustion), and All That’s Interesting (https://allthatsinteresting.com/spontaneous-human-combustion). Wikipedia’s entries on spontaneous human combustion and the wick effect provide comprehensive background and references at https://en.wikipedia.org/wiki/Spontaneous_human_combustion and https://en.wikipedia.org/wiki/Wick_effect.

How Do We Know What We Know? An Introduction to Epistemology

In a world awash with conflicting information, how do we know what is true? How do we know what to believe? How can we even begin to assess the claims competing for our attention? My ongoing interest in critical thinking has led me to epistemology.

Epistemology is the branch of philosophy that asks one of the most fundamental questions humans can consider: How do we know what we know? It’s essentially the study of knowledge itself—what counts as knowledge, how we acquire it, and what makes our beliefs justified or true.

Think about it this way: You believe the Earth revolves around the Sun. But why do you believe that? Maybe you learned it in school, maybe you’ve seen evidence from astronomy, or maybe you just trust what scientists tell you. Epistemology digs into questions like these—examining the difference between simply believing something and actually knowing it.

The field explores several core questions. What’s the difference between knowledge and mere opinion? Can we ever be absolutely certain about anything, or is all knowledge provisional?  What are the sources of knowledge—experience, reason, intuition, testimony from others? Epistemology also wrestles with skepticism—the worry that our beliefs might be systematically wrong. How do we know the world isn’t an illusion? How do we justify trusting memory, perception, or testimony? When is it rational to believe something, and when should we remain skeptical?

Epistemologists have developed various theories over the centuries. Some argue that true knowledge comes primarily through sensory experience (empiricism), while others emphasize the role of reason and logic (rationalism). Still others focus on whether knowledge requires absolute certainty or just a high degree of justified confidence.

These might seem like abstract concerns, but epistemology has real-world implications. When you’re deciding whether to trust a news source, evaluating scientific claims, or determining what evidence you need before making an important decision, you’re engaging with epistemological questions.  Epistemology doesn’t tell you what to believe about climate change, vaccines, or economics—but it sharpens your sense of why some beliefs deserve more confidence than others. It encourages intellectual humility without sliding into cynicism.

Ultimately, epistemology concerns itself with the concepts of knowledge, belief, truth, and justification. Its primary focus is not only what is believed, but why those beliefs are considered warranted. Far from being an ivory-tower exercise, it fosters disciplined critical thinking—a vital skill in societies inundated with competing perspectives.

Source: For a comprehensive academic overview of epistemology and its central questions, see the Stanford Encyclopedia of Philosophy’s entry: https://plato.stanford.edu/entries/epistemology/

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term, taking a word from Black communities about awareness and justice and turning it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.”  To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” in which status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term’s become a cultural flashpoint.  Here’s the thing: real progress requires acknowledging both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

 If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.

Bull Markets, Bear Markets and the Story Behind Wall Street’s Most Famous Animals

If you’ve ever caught a business news segment, you’ve probably heard anchors throwing around terms like “bull market” and “bear market” as if everyone just naturally knows what they mean. But beyond the basic idea that one’s good and one’s bad, the real mechanics of these market conditions—and how they got their animal nicknames—are pretty interesting.

How the Stock Market Works (The Quick Version)

Before we dive into bulls and bears, let’s cover the basics. The stock market is essentially a place where people buy and sell ownership stakes in companies. When you buy a share of stock, you’re purchasing a tiny piece of that company. The price of that share goes up or down based on how many people want to buy it versus how many want to sell it—classic supply and demand.

Companies sell shares to raise money for growth, and investors buy them hoping the company will do well and the stock price will increase. The overall “market” is tracked through indexes like the S&P 500 or Dow Jones Industrial Average, which measure how a group of major companies are performing. When most stocks are rising, we say the market is up; when most are falling, the market is down.

What Bull and Bear Markets Actually Mean

A bull market refers to a period when stock prices are rising, typically defined as a 20% or more increase from recent lows. During bull markets, investors are optimistic, companies are generally doing well, and people are more willing to take risks with their money. Bull markets are usually fueled by a strong economy and low inflation. Think of the economic boom of the late 1990s or the recovery after the 2008 financial crisis—those were classic bull markets where prices kept climbing for years.

A bear market is essentially the opposite: a general decline in the stock market over time, usually defined as a 20% or more price decline sustained over at least a two-month period. During bear markets, investors get nervous, sell off their holdings, and pessimism spreads. A drop of 10% to 20% is classified as a correction; every bear market passes through correction territory on the way down, but not every correction deepens into a bear market. The Great Depression, the 2008 financial crisis, and the COVID-19 pandemic’s initial impact all triggered bear markets.
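To make those percentage thresholds concrete, here is a minimal sketch in Python that labels a market’s state by how far prices have fallen from their most recent peak. The index values and the simple three-way labels are purely illustrative; real definitions also consider how long a decline lasts and measure bull markets from recent lows, so treat this as a back-of-the-envelope illustration rather than a trading tool.

```python
# Minimal sketch: classify market state by the drawdown from the running peak.
# Prices and thresholds are illustrative only; real definitions also require
# the decline to be sustained (roughly two months for a bear market), and a
# bull market is properly measured as a 20%+ rise from recent lows.

def classify_market(prices):
    """Return 'bull', 'correction', or 'bear' based on the drop from the peak so far."""
    peak = prices[0]
    state = "bull"
    for price in prices:
        peak = max(peak, price)            # highest close seen so far
        drawdown = (peak - price) / peak   # fractional decline from that peak
        if drawdown >= 0.20:
            state = "bear"                 # 20% or more below the peak
        elif drawdown >= 0.10:
            state = "correction"           # 10-20% below the peak
        else:
            state = "bull"                 # within 10% of the peak
    return state

# Hypothetical index closes: a 25% slide from the 4,800 peak lands in bear territory.
print(classify_market([4500, 4800, 4400, 4100, 3600]))  # -> 'bear'
```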

The Colorful History Behind the Terms

Now here’s where things get interesting. These terms didn’t come from some modern marketing genius—they trace back to 18th century London, and the story involves everything from old proverbs to violent animal fights to one of history’s biggest financial scandals.

The “bear” term came first. Etymologists point to an old proverb warning that it is not wise “to sell the bear’s skin before one has caught the bear.” This saying was about the foolishness of counting on something before you actually have it. By the early 1700s, traders who engaged in short selling (betting that prices would fall) were called “bear-skin jobbers” because they sold a bear’s skin—the shares—before catching the bear. The term eventually got shortened to just “bears.”

The real watershed moment came with the South Sea Bubble of 1720. The South Sea Company was a British joint stock company founded by an act of Parliament in 1711, and in 1720 it assumed most of the British national debt and convinced investors to give up state annuities for company stock sold at a very high premium. When the bubble burst, share prices collapsed, ushering in an early “bear market,” and the episode inspired numerous literary works and went down in history as a byword for speculative ruin.

As for “bull,” the origins are a bit murkier. The first known instance of the market term “bull” popped up in 1714, shortly after the “bear” term emerged. Most historians think it arose as a natural counterpoint to “bear,” possibly influenced by bull-baiting and bear-baiting, two animal fighting sports popular at the time—though I should note that’s somewhat speculative.

There’s also a popular explanation about how the animals attack: bears swipe downward with their paws while bulls thrust upward with their horns, which nicely mirrors market movements. While that’s a helpful memory device, it’s probably more of a convenient coincidence than the actual origin. The term “bull” originally meant a speculative purchase in the expectation that stock prices would rise, and was later applied to the person making such purchases.

Why This Still Matters

These metaphors have stuck around for three centuries because they work. They’re visceral and easy to remember—you can picture a charging bull or a hibernating bear without needing an economics degree. They also capture something real about market psychology: the aggressive optimism of bulls pushing prices up versus the defensive pessimism of bears hunkering down.

Understanding these terms helps you follow financial news and, more importantly, recognize when markets are shifting. Knowing you’re in a bull market might make you less surprised by rising prices, while recognizing a bear market can help you avoid panic-selling when things look grim.

Bull and bear markets are among those terms I’ve heard for years without ever knowing their origin. This article is my attempt to explain them to myself.
