Grumpy opinions about everything.

Category: Commentary

This is the home of grumpy opinions.

Merry Christmas from The Grumpy Doc

The Multitasking Myth

What’s Really Happening in Your Brain

You’re probably multitasking right now. Maybe you’re reading this with a podcast playing in the background, or you’ve got three browser tabs open and you’re checking your phone every few minutes. We all do it. We even brag about it on resumes: “Excellent multitasker!” But here’s the uncomfortable truth that neuroscience has been trying to tell us for years—what we call multitasking is mostly an illusion.

What People Actually Mean

When most people say they’re multitasking, they’re describing one of two scenarios. The first is doing multiple automatic activities simultaneously—like walking while talking, or listening to music while folding laundry. The second, and the one that gets more interesting, is rapidly switching attention between different demanding tasks—like answering emails while on a conference call, or texting while watching TV.

The distinction matters because our brains handle these situations very differently. Activities that have become automatic through practice don’t require much conscious attention. You can absolutely walk and chew gum at the same time because neither activity demands your prefrontal cortex’s full attention. But when both tasks require active thinking and decision-making? That’s where things get complicated.

The Brain’s Bottleneck

Here’s what neuroscience tells us: true multitasking—simultaneously processing multiple streams of complex information—is essentially impossible for the human brain. What feels like multitasking is actually rapid task-switching, and your brain pays a price every time it makes that switch.

The limitation comes from something researchers call the “response selection bottleneck.” When you’re performing tasks that require conscious thought, they all funnel through the same neural pathways in your prefrontal cortex. This region can only process one demanding task at a time, so when you think you’re doing two things at once, you’re really just toggling between them very quickly.

Studies using functional MRI brain imaging have shown what happens during this switching process. When people attempt to multitask, researchers observe reduced activity in the regions responsible for each individual task compared to when those tasks are done separately. Your brain literally can’t devote full processing power to both activities simultaneously.

The Switching Cost

Every time you switch from one task to another, there’s a cognitive cost. Your brain needs to disengage from the first task, shift attention, and then reorient to the new task. This happens so quickly—sometimes in tenths of a second—that we don’t consciously notice it. But those fractions of a second add up.

Sorry, but get ready for some doctor talk.  When people switch tasks, imaging studies show increased activation in frontoparietal control and dorsal attention networks, especially in prefrontal regions (like the inferior frontal junction) and parietal cortex (such as intraparietal sulcus). This boosted activity reflects the brain dropping one task set, loading another into working memory, and re‑orienting attention—processes that consume time and neural resources.

Over time, practice can make specific tasks more automatic, reducing average activity in these control networks and allowing smoother coordination of tasks. However, even in trained multitaskers, studies still find evidence for serial queuing of operations in the multiple‑demand frontoparietal network, reinforcing the idea that consciously doing multiple demanding things “at once” is extremely limited.

Research from Stanford University found that people who regularly engage in heavy media multitasking actually perform worse at filtering out irrelevant information and switching between tasks than people who focus on one thing at a time. Essentially, chronic multitaskers become worse at the very thing they practice most.

Even when people train extensively, studies indicate they mainly become faster at switching and coordinating, not truly doing two demanding tasks at once. Experimental work using reaction‑time paradigms shows a reliable “switch cost”: when people change tasks, responses get slower and more error‑prone compared to staying with one task.  This cost is one of the strongest signs that most human “multitasking” is serial switching under time pressure rather than genuine simultaneous processing.
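The logic of these reaction-time paradigms can be sketched with a toy simulation. Every number below (baseline response time, switch cost, noise) is an illustrative assumption, not data from any study; the point is only to show how alternating between two tasks produces measurably slower average responses than staying on one task:

```python
# Toy model of a task-switching experiment (all numbers are
# illustrative assumptions): repeat trials have a baseline response
# time, switch trials pay an extra "switch cost" to reload the task set.
import random

random.seed(1)

BASELINE_MS = 600        # assumed mean RT on a repeat trial
SWITCH_COST_MS = 150     # assumed extra time to re-orient to the new task

def trial_rt(is_switch):
    rt = random.gauss(BASELINE_MS, 50)
    if is_switch:
        rt += random.gauss(SWITCH_COST_MS, 30)
    return rt

# Condition 1: alternate tasks on every other trial.
switch_rts = [trial_rt(i % 2 == 1) for i in range(200)]
# Condition 2: stay on one task for the whole block.
stay_rts = [trial_rt(False) for _ in range(200)]

mean = lambda xs: sum(xs) / len(xs)
print(f"stay block:   {mean(stay_rts):.0f} ms")
print(f"switch block: {mean(switch_rts):.0f} ms")  # reliably slower
```

Under these assumptions the switching block comes out slower on average, which is exactly the “switch cost” signature the real experiments measure.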

The American Psychological Association reports that these mental blocks created by switching between tasks can cost up to 40% of productive time. Think about that for a minute—two-fifths of your work time potentially lost to the mechanics of jumping between activities.

The Attention Residue Problem

There’s another wrinkle that makes multitasking even less efficient. When you switch away from a task before completing it, part of your attention remains stuck on the unfinished work. Researchers call this “attention residue,” and it reduces your cognitive performance on the next task.

Sophie Leroy, a business professor at the University of Washington, demonstrated this effect in a series of studies. People who switched tasks performed significantly worse on the second task than people who finished the first task before moving on. The unfinished task keeps running in your mental background, using up cognitive resources you need for the new activity.

When “Multitasking” Actually Works

There are legitimate exceptions to the no-multitasking rule, but they’re more limited than most people think. You can successfully combine activities when at least one of them is so well-practiced that it’s become automatic—essentially requiring no conscious thought. You can listen to an audiobook while jogging because your body handles the running on autopilot.

Some research also suggests that certain types of background music or ambient noise can enhance performance on creative tasks, though this seems to work best when the music is familiar and lacks lyrics that compete with language-processing tasks.

Why We Keep Trying

If multitasking is so inefficient, why do we persist? Part of the answer lies in how it feels. Task-switching triggers the release of dopamine, the brain’s reward chemical. Every time you check your phone or switch to a new browser tab, you get a little neurochemical hit. It feels productive, even when it isn’t.

There’s also a cultural element. We live in an attention economy where being constantly connected and responsive feels mandatory. Focusing on one thing can feel like you’re missing out or falling behind, even though the research consistently shows that single-tasking produces better results faster.

Research also consistently shows a gap between perception and performance: people who rate themselves as excellent multitaskers tend to be the worst at it.

The Bottom Line

The evidence is pretty clear: what we call multitasking is really task-switching, and it makes us slower and more error-prone at both activities. Your brain has a fundamental processing limitation that hasn’t changed despite our increasingly multi-screen world. The prefrontal cortex can only fully engage with one complex task at a time, and switching between tasks creates cognitive costs that add up to significant lost productivity and increased mistakes.

This doesn’t mean you should never listen to music while working or that walking while talking will melt your brain. But when you’re doing something that really matters—writing an important email, having a meaningful conversation, learning something new—giving it your full attention will always produce better results than splitting your focus.

The Freemasons and the Founding Fathers: Secret Society or Just a Really Good Book Club?

You’ve probably heard the whispers—the Freemasons secretly controlled the American Revolution, George Washington wore a special apron, and there’s a hidden pyramid on the dollar bill. It’s the kind of thing that sounds like it came straight from a Nicolas Cage movie. But like most historical legends, the real story is more interesting (and less conspiratorial) than the mythology.

So, what’s the actual deal with Freemasons and America’s founding? Let’s dig in.

What Even Is Freemasonry?

First things first: Freemasonry started out as actual stonemasons’ guilds back in medieval Europe—think guys who built cathedrals sharing trade secrets. But by the early 1700s, it had transformed into something completely different: a philosophical club where educated men gathered to discuss big ideas about morality, reason, and how to be better humans.

The secrecy? That was part of the appeal. Lodges had rituals and passwords, sure, but the core values weren’t exactly hidden. Freemasons were all about Enlightenment thinking—liberty, equality, the pursuit of knowledge. Basically, the kind of stuff that gets you excited if you’re the type who actually enjoys reading philosophy books.

In colonial America, joining a Masonic lodge was a bit like joining an elite networking group today, except instead of swapping business cards, you discussed natural rights and wore fancy aprons. Lawyers, merchants, printers—the educated professional class—flocked to lodges for both the intellectual stimulation and the social connections.

The Founding Fathers: Who Was Actually In?

Let’s separate fact from fiction when it comes to which founders were card-carrying Masons.

Definitely Masons:

George Washington became a Master Mason at 21 in 1753. He wasn’t the most active member—he didn’t attend meetings constantly—but he took it seriously enough to wear his Masonic apron when he laid the cornerstone of the U.S. Capitol in 1793. That’s a pretty public endorsement.

Benjamin Franklin was perhaps the most dedicated Mason among the founders. Initiated in 1731, he eventually became Grand Master of Pennsylvania’s Grand Lodge and helped establish lodges in France during his diplomatic stint. Franklin was basically the poster child for Enlightenment Masonry.

Paul Revere—yes, that Paul Revere—was Grand Master of Massachusetts. His midnight ride gets all the attention, but his Masonic connections were just as important to his Revolutionary activities.

John Hancock also served as Grand Master of Massachusetts. His oversized signature on the Declaration was matched by his outsized commitment to Masonic ideals.

John Marshall, the Chief Justice who shaped American constitutional law, was a dedicated Mason. So was James Monroe, the fifth president.

Here’s a fun stat: of the 56 signers of the Declaration of Independence, at least nine (about 16%) were Masons. Among the 39 who signed the Constitution, roughly thirteen (33%) belonged to the fraternity.

The Maybes:

Thomas Jefferson? Probably not a Mason, despite endless conspiracy theories. There’s no solid evidence of membership, though his Enlightenment philosophy certainly sounded Masonic. His buddy the Marquis de Lafayette was definitely in, which hasn’t helped dispel the rumors.

Alexander Hamilton? The evidence is murky. Some historians think his writings hint at Masonic sympathies, but there’s no membership record.

Definitely Not:

John Adams wasn’t a Mason and was actually skeptical of secret societies. He still believed in many of the same principles, though—virtue, republican government, that sort of thing.

Did the Masons Really Influence the Revolution?

Here’s where it gets interesting. No, the Freemasons didn’t sit around a lodge plotting revolution like some shadowy cabal. But did their ideas and networks matter? Absolutely.

Think about what Masonic lodges provided: a space where educated colonists could meet, discuss radical ideas about natural rights and self-governance, and build trust across colonial boundaries—all without British officials breathing down their necks. These lodges brought together men from different colonies, different religious backgrounds (Anglicans, Quakers, Deists), and different social classes.

The radical part? Inside a lodge, everyone met “on the level.” It didn’t matter if you were born rich or poor—merit and virtue determined your standing. That’s pretty revolutionary thinking in the 1700s when most of the world still believed some people were just born better than others. Sound familiar? “All men are created equal” has a similar ring to it.

Freemasonry also championed religious tolerance. You had to believe in some kind of Supreme Being, but that was it—no specific creed required. This ecumenical approach directly influenced the founders’ commitment to religious freedom and separation of church and state.

The Masonic motto about moving “from darkness to light” through knowledge wasn’t just ritualistic mumbo-jumbo. It reflected genuine Enlightenment belief in reason and progress—the same intellectual current that powered revolutionary thinking.

What About All That Symbolism?

Okay, let’s address the pyramid and the all-seeing eye on the dollar bill. Are they Masonic? Maybe, maybe not. The Great Seal of the United States definitely uses imagery that Masons also used—but so did lots of 18th-century groups drawing on Enlightenment and classical symbolism. The connection is debated among historians.

What’s undeniable is that Masonic culture emphasized architecture and building as metaphors for constructing a just society. When Washington laid that Capitol cornerstone in his Masonic apron, he was making a statement about building something enduring and meaningful.

The “Conspiracy” Question

Let’s be clear: there was no Masonic conspiracy to create America. The fraternity wasn’t even unified—lodges operated independently, and members included both patriots and loyalists. Officially, Masonic organizations tried to stay neutral during the Revolution, though obviously that didn’t work out perfectly when the war split families and communities.

What is true is that many of the Revolution’s most articulate, influential leaders happened to be Masons. And the fraternity’s values—liberty, equality, reason, fraternity—aligned perfectly with revolutionary ideology. Correlation, not conspiracy.

After the Revolution, Freemasonry exploded in popularity. It became associated with the Enlightenment values that had supposedly won the day. Future presidents including Andrew Jackson, James Polk, and Theodore Roosevelt were all Masons. At its 19th-century peak, an estimated one in five American men belonged to a lodge.

What’s the Bottom Line?

The Freemason influence on America’s founding is real, but it’s cultural rather than conspiratorial. The lodges provided a space where Enlightenment ideas could circulate, where colonial leaders could build networks of trust, and where egalitarian principles could be practiced in miniature.

Washington, Franklin, Hancock, and the others weren’t sitting in smoke-filled rooms with secret handshakes planning to overthrow the British crown. They were part of a broader philosophical movement that valued personal improvement, moral virtue, and human rights. The Masonic lodge was one venue—among many—where those ideas took root.

Freemasonry was one tributary feeding into the river of revolutionary thought, along with classical republicanism, British common law, various religious traditions, and plain old grievances about taxes and representation.

The real story is somehow simpler and more fascinating than the conspiracy theories: a bunch of educated colonists joined a fraternity that encouraged them to think big thoughts about human nature and just governance. Those thoughts, debated in lodges and taverns and town halls, eventually sparked a revolution.

Not because of secret symbols or mysterious rituals, but because ideas about liberty and equality—once you start taking them seriously—are genuinely revolutionary.

True confession—The Grumpy Doc is not now, nor has he ever been, a Mason.

The Enigma of Magical Thinking: From Everyday Enchantment to Political Discourse

Have you ever been talking to someone when you started to think, “How in the world can they believe that?” They may have been engaging in magical thinking. But don’t feel too superior because most likely you have been guilty of the same thing.

Magical thinking is one of those fascinating quirks of human psychology that shows up everywhere—from your friend who won’t talk about their job interview until it’s over, to major political movements that shape our world. At its core, it’s the belief that our thoughts, words, or actions can influence events in ways that completely ignore standard cause-and-effect logic.

What We’re Really Talking About

Magical thinking isn’t new. Our ancestors practiced animism, believing spirits lived in everything around them. They created rituals to appease these spirits or tap into their power. Fast forward to today, and despite all our scientific advances, these patterns of thinking haven’t gone anywhere—they’ve just evolved.

It’s essentially a cognitive bias where we connect events that aren’t truly linked. This typically happens when we’re facing uncertainty, stress, or situations where we feel powerless. The thinking pattern gives us a psychological safety net—a feeling that we’re in control.

How It Shows Up in Daily Life

Superstitions and Rituals

Knocking on wood to prevent bad luck is a universal example. There’s zero logical connection between rapping your knuckles on a wooden surface and your future, but people do it anyway because it feels like taking action against uncertainty.

Athletes are notorious for this. That “lucky” jersey, the pre-game meal eaten in exactly the same order, the specific warm-up routine—these rituals don’t really affect performance, but they can boost confidence and calm nerves, which indirectly helps.

Lucky Charms and Talismans

Rabbit’s feet, four-leaf clovers, special coins—lots of people carry objects they believe bring good fortune. These beliefs come from cultural traditions and personal experiences. While there’s no scientific backing for their power, the comfort they provide is genuinely real.

The Jinx Effect

Ever avoided talking about something good that might happen because you didn’t want to “jinx” it? I worked in emergency rooms for many years, and no one would ever use the word “quiet” for fear that it would cause a sudden rush of ambulances. That’s magical thinking connecting your words to external outcomes in a totally irrational way.

Health Decisions

This gets more serious when magical thinking influences medical choices. Some people strongly believe in homeopathic remedies or alternative therapies that lack scientific validation. Interestingly, the placebo effect demonstrates how powerful belief can be—people sometimes experience limited health improvements simply because they believe a treatment works, although these effects are most common in relief of mild to moderate pain.

Gambling Behaviors

Casinos thrive on magical thinking. Blowing on dice, wearing lucky clothes, or believing you’re “due” for a win after several losses—these are all examples of the illusion of control. Gamblers think they can influence random outcomes through specific actions, which can fuel persistent gambling even when they’re losing money.

When Magical Thinking Enters Politics

Here’s where things get more complex and consequential. Magical thinking doesn’t just affect personal decisions—it shapes entire political movements and policy debates.

Conspiracy Theories

QAnon represents one of the most striking modern examples. Followers believe a secret group of powerful figures runs a global operation, and that certain political leaders possess almost supernatural abilities to fight against it. Despite zero credible evidence, this belief system has attracted significant followings, demonstrating how magical thinking can create entire alternate realities in the political sphere.

Pandemic Responses

COVID-19 brought magical thinking into sharp relief. Some people blamed 5G technology for causing the virus, leading to actual attacks on cell towers in multiple countries. Others promoted unproven treatments as miracle cures, despite scientific evidence showing they didn’t work and in some cases were harmful. The desire for simple answers to a complicated crisis made people vulnerable to these beliefs.

Vaccine denial—another pandemic-related example of magical thinking—attributes harmful effects to vaccines despite extensive scientific evidence to the contrary, while simultaneously believing that alternative approaches (like “natural immunity” alone) possess special protective powers.

Economic Policies

“Trickle-down economics”—the idea that tax cuts for wealthy individuals automatically generate economic growth and increased government revenue—often functions as magical thinking in policy debates. This theory simplifies incredibly complex economic dynamics and, according to multiple economic analyses, lacks consistent empirical support. Critics point out it ignores the nuances of fiscal policy and income distribution.

Climate Change

Despite overwhelming scientific consensus, some political movements deny climate change reality. This sometimes involves believing that natural cycles alone explain everything, or that technological solutions will magically appear without significant policy intervention. This type of thinking often protects existing economic interests or ideological positions.

Why Our Brains Do This

Pattern Recognition

Humans are pattern-seeking machines. We’re wired to spot connections, even when they don’t exist—a tendency called apophenia. This helped our ancestors survive—better to assume that rustling bush is a tiger than to ignore it—but it also leads us to form superstitions and magical beliefs.

The Comfort Factor

When life feels uncertain or stressful, magical thinking offers psychological comfort. Rituals and lucky charms reduce anxiety and give us a sense of agency, even if that control is illusory.

Cultural Transmission

Many superstitions and magical beliefs get passed down through generations, becoming woven into cultural norms. When we see others engaging in these behaviors, it reinforces them. Social acceptance is powerful.

Political Advantages

In politics specifically, magical thinking persists because it:

  • Simplifies complex issues into digestible narratives
  • Strengthens group identity and belonging
  • Can be exploited by politicians and interest groups to mobilize support
  • Provides psychological certainty in a chaotic political landscape

Finding the Balance

Here’s the thing: magical thinking isn’t purely negative. It serves real psychological functions—helping us cope with uncertainty, reducing anxiety, and giving us a sense of control when the world feels chaotic.

The problem comes when we rely on it too heavily. Avoiding medical treatment for unproven remedies can have serious health consequences. Basing policy decisions on magical thinking rather than evidence can affect millions of people. The key is balance.

The Takeaway

Magical thinking connects us to our shared human history, from ancient animistic beliefs to modern political movements. It reveals how our minds constantly work to make sense of an unpredictable world. By understanding these cognitive patterns, we can appreciate their psychological benefits while staying alert to their limitations.

Whether we’re knocking on wood or evaluating political claims, recognizing magical thinking helps us become more critical consumers of information. We can honor the comfort these beliefs provide while still grounding important decisions in evidence and rational analysis.

The enchantment isn’t going anywhere—it’s part of being human. But awareness of it? That’s our best tool for navigating between the rational and the magical in both our personal lives and our shared political reality.

The Correlation Mirage: How Good Intentions Go Wrong in Health Debates

Understanding the Basics

Here’s the fundamental problem: just because two things happen together doesn’t mean one caused the other. When we say two variables are “correlated,” we’re simply observing that they move in tandem—when one goes up, the other tends to go up (or down). Causation, on the other hand, means that a change in one variable directly causes a change in the other. Think of correlation as a suspicious coincidence, while causation is a proven relationship with a clear mechanism.

The tricky part is that our brains are pattern-seeking machines. We evolved to spot connections quickly because that helped our ancestors survive. If you ate those red berries and got sick, better to assume the berries caused it rather than to wait around for a controlled study. But this mental shortcut can seriously mislead us in the modern world, especially when it comes to complex health issues.

Classic Examples That Illustrate the Problem

Let me give you some examples that show how ridiculous this confusion can get when we’re not careful. There’s a famous correlation between ice cream sales and drowning—both increase during summer months, but ice cream isn’t causing drowning. The real driver is warmer weather, which leads people to both buy more ice cream and to spend more time at beaches or swimming pools where drowning might happen. This is what researchers call a “confounding variable”—a third factor that influences both things you’re measuring.
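A quick sketch with synthetic data makes the confounding pattern concrete (the variable names and all numbers here are invented for illustration): a hidden “temperature” variable drives both series, so they come out strongly correlated, and the correlation collapses once temperature is controlled for:

```python
# Synthetic illustration of a confounder: temperature drives both
# ice cream sales and drownings, so the two correlate even though
# neither causes the other.
import random

random.seed(0)

n = 365
temps = [random.gauss(18, 8) for _ in range(n)]              # daily temperature
ice_cream = [2.0 * t + random.gauss(0, 3) for t in temps]    # sales rise with heat
drownings = [0.1 * t + random.gauss(0, 0.5) for t in temps]  # swimming rises with heat

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def residuals(y, x):
    """What is left of y after removing its linear dependence on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

print(pearson(ice_cream, drownings))   # strongly positive: suspicious coincidence
# Control for the confounder: correlate the residuals after removing
# the temperature effect from each series; the association collapses.
print(pearson(residuals(ice_cream, temps), residuals(drownings, temps)))  # near zero
```

This residual-based check is a simple form of “controlling for” a third variable, the same move the coffee-and-smoking studies below make with real data.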

Here’s another head-scratcher: there’s a correlation between the number of master’s degrees awarded and box office revenue. Does getting more education somehow boost movie sales? Of course not. This is what we call a spurious correlation—a completely coincidental relationship that exists in the data but has no meaningful connection in reality.

Here’s good news for us coffee drinkers.  For years, studies suggested a correlation between heavy coffee drinking and heart disease. Later research found the real issue: heavy coffee drinkers were also more likely to smoke. Once smoking was controlled for, coffee itself did not increase heart risk.

Perhaps the most amusing example is the correlation between stork populations and birth rates in Germany and Denmark spanning decades. As the stork population fluctuated, so did the number of newborns. Now, you could construct a “Theory of the Stork” claiming that storks deliver babies, but the real explanation probably involves other variables like weather patterns, urbanization, or environmental developments that affected both populations.

The medical field offers more serious examples. Suppose you observe a strong correlation between exercise and skin cancer cases—people who exercise more seem to get skin cancer at higher rates. Without digging deeper, you might panic and conclude that exercise somehow causes cancer. But the actual explanation is far more mundane: people who exercise more tend to spend more time outdoors in the sun, which increases their UV exposure. The confounding variable here is sun exposure, not the exercise itself.

The Vaccine-Autism Controversy: A Cautionary Tale

Now let’s talk about one of the most damaging correlation-causation confusions in recent medical history: the claim that vaccines cause autism. Many childhood vaccines are administered at the same ages at which numerous developmental conditions first become noticeable—including autism, seizure disorders, and certain metabolic or genetic issues. This is a textbook case of how mistaking correlation for causation can have real-world consequences.

The whole mess started in 1998 when Andrew Wakefield, a gastroenterologist at London’s Royal Free Hospital, published a paper in The Lancet describing 12 children, eight of whom were reported as having developed autism after receiving the MMR vaccine. Here’s the thing: this wasn’t even a proper study that could establish causation. It was described as a consecutive case series with no control group or control period—it was simply a description that couldn’t tell you whether one thing causes another.

But why did this idea catch fire so dramatically? The timing created a perfect storm for correlation-causation confusion. Autism becomes apparent early in childhood, around the same time children receive many vaccines, so a temporal relationship will appear by chance alone. Parents naturally searched for explanations, noticed the temporal proximity, and drew what seemed like an obvious conclusion.

The scientific community took these concerns seriously and conducted extensive research. Despite overwhelming data demonstrating that there is no link between vaccines and autism, many parents remain hesitant to immunize their children because of the alleged association. Study after study found no connection. A study of over 500,000 children in Denmark, published in The New England Journal of Medicine in 2002, found no relationship between autism and MMR, as did a subsequent Danish study published in 2019. In April 2015, JAMA published a large study analyzing health records of over 95,000 children, including about 2,000 who were at risk for autism because they had a sibling already diagnosed. It confirmed that the MMR vaccine did not increase the risk for autism spectrum disorder.

The original Wakefield study eventually collapsed under scrutiny. The Lancet retracted the article, and Wakefield was found guilty of deliberate fraud—he picked and chose data that suited his case and falsified facts. Wakefield lost his license to practice medicine after being sanctioned by scientific bodies. But by then, the damage was done.

Here’s the correlation-causation issue in stark terms: the prevalence of autism has increased over time, which researchers and healthcare professionals explain is likely due to multiple factors, including people becoming more aware of autism, improved screening, and updated and expanded diagnostic criteria to include other conditions on the autism spectrum. Meanwhile, immunizations have increased and have dramatically reduced the incidence of vaccine-preventable diseases. These two trends—increasing autism diagnoses and increasing vaccination rates—happened to occur during the same historical period, creating an illusory correlation.

The real causes of autism are complex. There is no single root cause; a combination of influences is likely involved, including certain genetic syndromes, genetic changes affecting cell function, and environmental influences such as premature birth, older parents, and illness during pregnancy. Vaccines simply aren’t part of that picture.

Other Health-Related Confusion

The vaccine-autism controversy isn’t the only place where correlation-causation confusion causes problems in health contexts. Let me give you a few more examples that show how pervasive this issue is and how difficult it can be to distinguish between correlation and causation. 

Consider the relationship between diet and health outcomes. The amount of sodium a person gets in their diet is closely correlated to the total calories they eat—in other words, the more a person eats, the more sodium they’re likely to take in, and eating a lot of calories often leads to obesity. Both obesity and high-sodium diets are believed to contribute to high blood pressure. So, what’s the primary driver? Is it sodium, excess calories, or obesity? These are exactly the kinds of questions researchers must carefully untangle.

Here’s another tricky one: research has shown a correlation between antibiotic use in children and increased risk of obesity, with greater antibiotic use associated with higher obesity risk, particularly for children with four or more exposures. But this correlation alone doesn’t tell us whether antibiotics cause obesity. There could be multiple explanations: perhaps children who need frequent antibiotics have other health issues that predispose them to weight gain, or perhaps the infections themselves (not the antibiotics) are the real issue, or maybe it’s actually a disruption of gut bacteria that matters. Without understanding the exact physiological mechanism, we can’t design effective interventions.

Similarly, increased BMI seems to be associated with an increased risk of several cancers in adults. But it would be erroneous to conclude that simply being overweight directly causes cancer. Socioeconomic factors, environmental toxins, access to healthcare, lifestyle differences, physical activity levels, and diet all intertwine in complex ways. Some people may face multiple risk factors simultaneously, making it difficult to isolate which factors are most significant.

When cell phones first became widely used, there was growing concern that radiation from the phones was causing brain cancer. Yet brain cancer rates have remained stable for decades despite exponential increases in cell-phone use—strong evidence against a causal relationship.

Beyond Statistics

The stakes here go way beyond academic accuracy. When people confuse correlation with causation in health contexts, they make decisions that can harm themselves and others. The 2017 measles epidemic in Minnesota’s Somali community was in no small measure fomented by Wakefield—he didn’t fade away quietly. He and other anti-vaxxers repeatedly proselytized to the community, leading to an approximately 45% drop in vaccination rates. Meanwhile, autism diagnoses continued to rise. Think about that: vaccination rates fell, yet autism diagnoses kept climbing—the exact opposite of what you’d expect if vaccines caused autism. A word of caution: this is an observation, not a carefully controlled study.

The problem extends to how we evaluate new treatments and risk factors. In clinical medicine, there are treatment protocols in use that are not supported by randomized controlled trials. There are risk factors that have been associated with various diseases where it’s difficult to know for certain if they are actually contributing causes. This uncertainty creates space for misunderstanding.

How Scientists Establish Causation

So, how do researchers move from observing a correlation to establishing causation? They look for several key elements: a strong association between variables (a stronger association is more suggestive of cause and effect than a weaker one), proper temporality (the alleged effect must follow the suspected cause), a dose-response relationship (where increasing exposure leads to proportionally greater effects), and a biologically plausible mechanism of action. These echo the criteria the epidemiologist Austin Bradford Hill formalized in 1965.

The gold standard is the randomized controlled trial, where researchers control for confounding variables by randomly assigning people to treatment and control groups. Ethics limits what can be randomized, though—you can’t deliberately assign one group of people to a suspected harmful exposure while sparing another. That’s why researchers often rely on observational studies combined with careful statistical methods to rule out alternative explanations.
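To see why randomization is the gold standard, it helps to watch a hidden confounder manufacture a correlation out of nothing—and watch random assignment dissolve it. Here’s a minimal Python simulation sketch; the scenario (some underlying “appetite” factor driving both an exposure and an outcome, loosely inspired by the sodium-calories example above) is purely illustrative and not drawn from any real study:

```python
import random
import statistics

random.seed(42)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 10_000

# Hidden confounder (e.g. overall appetite): it drives BOTH the
# exposure and the outcome, but we never measure it directly.
confounder = [random.gauss(0, 1) for _ in range(n)]

# Observational world: people choose their own exposure level,
# and that choice tracks the confounder.
exposure_obs = [c + random.gauss(0, 0.5) for c in confounder]

# The outcome is caused ONLY by the confounder, never by the exposure.
outcome = [c + random.gauss(0, 0.5) for c in confounder]

# Randomized world: exposure is assigned by chance, independent of
# the confounder -- this is exactly what an RCT buys you.
exposure_rct = [random.gauss(0, 1) for _ in range(n)]

print(corr(exposure_obs, outcome))   # strong spurious correlation (~0.8)
print(corr(exposure_rct, outcome))   # near zero: no causal effect
```

The observational data show a strong exposure-outcome correlation even though the exposure does nothing; randomizing the exposure severs its link to the confounder and the “effect” vanishes. That, in miniature, is why a correlation alone can’t carry a causal claim.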

The Bottom Line

Understanding the difference between correlation and causation isn’t just an academic exercise—it’s a critical thinking skill that helps you navigate health claims, news stories, and medical decisions. The vaccine-autism controversy shows how dangerous it can be when we mistake coincidental timing for causal relationships, especially when those misunderstandings spread through communities and lead to preventable disease outbreaks.

The key takeaway? When you see two things happening together, your brain will want to assume one caused the other. Resist that urge. Ask yourself: could there be a third factor driving both? Could the timing just be coincidental? Is there a clear, testable mechanism that would explain how one causes the other? These questions can help you separate meaningful connections from statistical coincidences—and potentially save you from making poor health decisions based on faulty reasoning.

Military Purges and Democratic Stability: Why History Still Matters

When political power is on the line, history shows that the military often becomes the make-or-break institution. Authoritarian leaders—from Hitler to Erdogan—have long understood that a professional military answers to the state, not to any one person. That independence can be inconvenient for leaders who want fewer limits on their power. So, the classic move is simple: replace seasoned, independent officers with people whose primary loyalty is personal rather than constitutional.

This isn’t speculation; it’s a familiar historical pattern.

How Authoritarians Reshape Militaries

Professional militaries promote based on experience, training, and merit. They’re built to resist illegal orders and to stay out of domestic politics. For an authoritarian-leaning leader, military professionalism is a potential obstacle. Purges serve a purpose: clear out officers who take institutional norms seriously, and elevate those who won’t push back.

Two cases illustrate how this works.

Hitler and the German Army

After consolidating political power, Hitler moved aggressively to dominate the military. In 1934, the army was pressured to swear a personal oath of loyalty to him—not to the state or constitution.

In 1938 he removed two top commanders, Werner von Blomberg and Werner von Fritsch, through trumped-up scandals after they questioned his rush toward war. Dozens of senior generals were pushed out soon after.

The goal was not efficiency—it was control.

Turkey After the 2016 Coup Attempt

Following the failed coup, President Erdogan launched the largest purge in modern Turkish history. Tens of thousands across the military, police, and judiciary were arrested or fired, including nearly half of Turkey’s generals.

Later reporting showed that many dismissed officers had no link to the coup at all; they were targeted for being politically unreliable or pro-Western.

These cases differ in scale and context, but the pattern is strikingly similar: the professional military is reshaped to serve the leader.

What Healthy Civil–Military Relations Look Like

In stable democracies, civilian leaders set policy, but the military retains professional autonomy. Officers swear loyalty to the constitution. Promotions are merit-based. And there’s a bright line between national service and political allegiance.

One important safeguard: every member of the U.S. military swears an oath to the Constitution—not to any individual—and is obligated to refuse unlawful orders. It’s not optional; it’s core to American military ethics.

Research consistently shows that professional, apolitical militaries strengthen democracies, while politically entangled militaries make coups and repression more likely.

The Current U.S. Debate

Since early 2025, Defense Secretary Pete Hegseth’s removal or sidelining of more than two dozen generals and admirals has raised alarms within the military and among lawmakers. It includes the unprecedented firing of a sitting Chairman of the Joint Chiefs of Staff and significant cuts to senior officer billets.

Hegseth has framed these moves as reforms—streamlining, eliminating “woke politicization,” and aligning leadership with the administration’s national-security priorities.

Many inside the services describe the environment as unpredictable and politically charged. Officers report confusion about why certain leaders are removed and others promoted, and some say the secretary’s rhetoric has alienated the very institution he’s trying to lead. Public reporting describes an “atmosphere of uncertainty and fear” inside the officer corps.

Similarities and Differences to Classic Purges

Where patterns overlap

  • Large-scale personnel changes in a short time
  • Emphasis on loyalty to a person rather than institutional norms
  • Limited transparency in the selection and removal process
  • Signals that dissent or disagreement are disqualifying

Where the U.S. still differs

  • Congress can investigate and slow actions
  • Courts remain independent (for now)
  • Officers swear loyalty to the Constitution, not the president
  • No arrests, detentions, or manufactured scandals
  • The press is free to report and criticize

Why This Matters

Institutional Readiness

Purges can weaken the military by removing seasoned leaders and creating gaps in institutional memory.

Professionalism

If officers think advancement depends on political alignment instead of performance, the talent pipeline changes. Some of the best people simply leave.

Civil–Military Trust

The relationship between elected leaders and the military rests on mutual respect. Reports of intimidation or political litmus tests damage that trust.

Democratic Stability

Democracies depend on militaries that stay out of politics. History shows that once political loyalty becomes the main metric for advancement, the slope toward politicization—and eventually erosion of democratic norms—gets much steeper.

The Real Question

It’s not whether current events equal Turkey in 2016 or Germany in 1938. They don’t.

The real question is much simpler:

Will we maintain a military that is professional, apolitical, and loyal to the Constitution—or move toward a military where career survival depends on political loyalty?

That direction matters far more than any single personnel decision.

Bottom Line

History shows that authoritarianism doesn’t arrive all at once; it arrives incrementally. One of the clearest patterns is reshaping the military to reward personal loyalty over constitutional loyalty.

The United States still has strong guardrails: congressional oversight, rule of law, open media, and a military culture steeped in constitutional commitment. But those guardrails only work if they’re maintained—by political leaders, by officers, and by citizens paying attention.  Many are concerned that the deployment of military forces in American cities and their use to destroy purported drug traffickers is a way to acclimate senior officers to following questionable orders.

Watching these trends isn’t alarmist. It’s simply responsible. It’s our duty as citizens.

How A Nobel Laureate Thinks We Can Save The American Economy…But It Won’t Be Easy

I just finished People, Power, and Profits by Joseph Stiglitz — the Nobel Prize-winning economist. He wrote this near the end of Trump’s first term, but honestly, the world he describes feels even more relevant now.

Stiglitz doesn’t sugarcoat it: capitalism, as we’re practicing it today, is broken. Monopolies dominate markets, inequality has gone wild, and trust in democracy is running on fumes. His proposed fix? Something he calls progressive capitalism — capitalism with guardrails, conscience, and a sense of fairness.

Stiglitz makes the case that our economic system is rigged — not by accident, but by design. Here are his most compelling arguments and what he thinks we should do about them.

1. Taxation and Rent-Seeking: The Rigged Game

Stiglitz draws a sharp distinction between making money through productive work and extracting it through what economists call “rent-seeking” – essentially, using power to skim wealth without creating value. Think of a pharmaceutical company that buys a drug patent and jacks up prices 5,000%, or telecom monopolies that divide up markets to avoid competing.

His argument is straightforward: our tax system rewards the wrong behavior. Capital gains are taxed at lower rates than wages, which means someone living off investments pays less than someone working a regular job. Meanwhile, the wealthy can afford armies of accountants to exploit loopholes that most people don’t even know exist.

What Stiglitz recommends: Tax wealth more aggressively, especially inherited wealth. Close the capital gains loophole. Tax rent-seeking activities heavily while reducing taxes on productive work and innovation. The goal isn’t just revenue – it’s changing incentives so that the path to riches runs through creating value, not extracting it.

2. Green Energy and the True Cost of Pollution

Here’s where Stiglitz gets into what economists call “externalities” – costs that businesses impose on society without paying for them. When a coal plant spews carbon into the atmosphere, we all pay through climate change and increased healthcare costs, but the plant’s balance sheet looks great.

Stiglitz argues this is fundamentally dishonest accounting. If we properly priced pollution and carbon emissions, green energy wouldn’t need subsidies to compete – fossil fuels would suddenly look much more expensive once you factor in their real costs to society.

His recommendation: Implement carbon pricing – either through a carbon tax or cap-and-trade system. Make polluters pay for the damage they cause. This isn’t about punishing business; it’s about honest accounting. Once prices reflect reality, the market will naturally shift toward cleaner energy because it’s actually cheaper when you account for all the costs.

3. Big Business and Big Banks: Concentration of Power

Stiglitz has been particularly vocal about how corporate consolidation hurts everyone except shareholders and executives.  His critique of “too big to fail” is sharp. He argues that concentrated economic power — in tech, finance, and even agriculture — undermines both democracy and efficiency. When a few firms dominate markets, they can suppress wages, block innovation, and bend regulations in their favor—they gain power over prices, wages, and even politics.

The banking sector especially concerns him. After the 2008 financial crisis, which was caused largely by reckless behavior from major banks, these same institutions emerged even larger through government-facilitated mergers. We allowed them to socialize their losses while letting them keep their gains as private profits.

His recommendations: Reinstate and strengthen regulations that were stripped away, including bringing back something like the Glass-Steagall Act that separated commercial and investment banking. Break up banks that are “too big to fail.” Strengthen antitrust enforcement across all industries. Use the government’s regulatory power to promote competition rather than letting industry consolidate.

4. Money in Politics: The Feedback Loop

This is where everything connects for Stiglitz. Concentrated economic power translates directly into political power. Wealthy interests fund campaigns, lobby relentlessly, and effectively write regulations for the agencies that are supposed to oversee them. This creates a vicious cycle: economic inequality begets political inequality, which creates policies that worsen economic inequality.

Stiglitz argues that the Supreme Court’s Citizens United decision, which allowed unlimited corporate spending in elections, turbocharged this problem by treating money as speech and corporations as people.

His recommendations: Limit campaign spending and institute public financing of campaigns to reduce candidates’ dependence on wealthy donors. Place strict limits on lobbying and implement a robust “revolving door” policy that prevents government officials from immediately cashing in with the industries they regulated. Mandate transparency requirements so voters know who’s funding what. Pass Constitutional amendments if necessary to overturn Citizens United.

The Big Picture

What makes Stiglitz’s argument powerful is how these pieces fit together. You can’t fix inequality just through taxation if big corporations control the political process. You can’t address climate change if fossil fuel companies can buy enough influence to block action. Everything is connected.

His recommendations aren’t radical in historical terms – they’re actually trying to restore a balance that existed during the post-war economic boom of the 1950s.  Stiglitz’s “progressive capitalism” isn’t socialism. It’s capitalism with a conscience — one that remembers who it’s supposed to serve.

Whether you see that as a rescue plan or a recipe for red tape depends entirely on where you put your faith: in public institutions or private markets. The question is whether we have the political will to implement his recommendations despite entrenched opposition from those benefiting from the current system.

Either way, this debate isn’t going away — it’s the one shaping the 21st-century economy.

No Kings!

When Evidence Isn’t Enough: The Crisis of Science in Public Life

While I would never call myself a scientist, as a physician my whole professional life is built on trust in science. I am distressed that so many people have chosen to abandon that trust in favor of misinformation.

Throughout history, scientific discovery has been humanity’s most reliable guide to progress. From the germ theory of disease to space exploration, science has reshaped how we live and what we believe possible. Yet in recent years, the very foundation of this methodical pursuit—evidence, observation, and experimentation—has come under sustained political, cultural, and economic attack. This struggle is often described as “the war on science,” a phrase that captures how debates once rooted in policy have shifted into battles over truth itself.

The numbers tell a stark story. The National Science Foundation has terminated roughly 1,040 grants that would have awarded $739 million to researchers, and it awarded only 52 undergraduate research grants in 2025, compared with about 200 annually since 2015. The proposed cuts are staggering: the Trump administration has requested a $4 billion budget for the NSF in fiscal year 2026, a 55% reduction from what Congress appropriated for 2025.

At the heart of the conflict lies mistrust. Science requires patience since answers evolve as new data emerge. But in a world driven by instant communication and ideological certainties, that evolving nature is often cast as contradiction or weakness. Critics dismiss changing conclusions not as hallmarks of rigorous inquiry, but as evidence of unreliability. The result is a dangerous fracture; science depends on trust in evidence, while many segments of society increasingly place trust in ideology or anecdote or even outright falsehoods.

Climate change is one of the most visible fronts in this battle. Virtually every major scientific body worldwide affirms that human activities are driving global warming. Yet climate scientists are routinely accused of bias or conspiracy, their data questioned, and their motives impugned. What is often overlooked in the controversy is not the complexity of climate systems—scientists have long acknowledged uncertainties—but the political and economic interests threatened by the solutions science prescribes.  When climate scientists publish evidence of global warming, their research doesn’t just describe weather patterns—it challenges powerful industries built on fossil fuels.

Public health provides another stark example. During the COVID-19 pandemic, scientific guidance became subject to fierce political polarization. Masking policies, vaccine safety, and even simple social distancing rules morphed into partisan symbols rather than matters of medical evidence. Scientists found themselves vilified, their professional debates distorted into talking points. The losers in this exchange were not the scientists themselves but the broader public, denied clear trust in institutions that are dedicated to safeguarding health.

Underlying these conflicts are powerful currents. Some industries resist regulation by casting doubt on findings that threaten profit. Certain political movements thrive on skepticism of expertise, channeling populist distrust of “elites” toward scientists. And in the swirl of social media, misinformation spreads more rapidly than peer-reviewed studies, eroding the influence of evidence before consensus can take hold.

What makes this particularly concerning is the timing. America’s main scientific and technological rivals are rising fast. In terms of federal Research and Development funding as a percentage of GDP, U.S. investment has dropped for decades, and the lead that the U.S. enjoyed over China’s R&D expenditure has largely been erased.

While the war on science is often treated as a distinctly modern dilemma, born of political polarization, mass media, and cultural distrust of expertise, its roots stretch back centuries. Galileo was silenced for challenging religious dogma. Early physicians were scorned when they argued that invisible germs, not miasmas or curses, caused disease.  During the Enlightenment of the 17th and 18th centuries, thinkers faced their own version of this struggle—a battle between dogma and reason, authority and evidence, tradition and discovery.   In every case, vested interests—whether theological, cultural, or economic—feared the disruption that scientific truth carried. Understanding those earlier conflicts provides valuable context for our challenges today.

The stakes today, however, feel higher. Our era’s challenges—climate change, pandemics, artificial intelligence, genetic engineering—demand unprecedented reliance on scientific understanding. To wage war on science is, in effect, to wage war on our own best chance for survival and responsible progress. If truth becomes negotiable, then evidence loses meaning, and with it, the possibility of reasoned self-government. That is why the war on science cannot be dismissed as a technical squabble—it is a philosophical contest echoing the Enlightenment battles that shaped modern civilization.

Ultimately, the struggle is less about data than about values. Do we commit to curiosity, openness, and the willingness to change our minds? Or do we cling to certainties that soothe but endanger us in the end? The war on science will not be won by scientists alone. It can only be resolved if society restores trust in evidence as the most reliable compass we have—however unsettling the direction it may point.  There may be alternative opinions but there are no alternative facts.

Bread and Circuses: From Ancient Rome to Modern America

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously desires for just two things: bread and circuses.”

Nearly 2,000 years ago, Roman satirist Juvenal penned one of history’s most enduring political observations: “Two things only the people anxiously desire — bread and circuses.” Writing around 100 CE in his Satire X, Juvenal wasn’t celebrating this phenomenon—he was lamenting it. The poet watched as Roman citizens traded their political engagement for free grain and spectacular entertainment, becoming passive spectators rather than active participants in public life. The phrase has endured for nearly two millennia as shorthand for a troubling political dynamic: entertainment and consumption replacing civic engagement and accountability.

The Roman Warning

Juvenal’s critique came at a pivotal moment in Roman history. The republic had collapsed, and emperors beginning with Augustus had systematically dismantled its representative institutions. Rather than revolt, Roman citizens seemed content as long as the government provided basic sustenance (the grain dole known as the annona) and elaborate spectacles at venues like the Colosseum. Political participation withered as people focused on immediate pleasures rather than long-term civic responsibilities.

The strategy worked brilliantly for Roman rulers. Keep the masses fed and entertained, and they won’t question your authority or demand meaningful representation. It was political control through distraction—a form of soft authoritarianism that maintained order without overt oppression.  The policy was effective in the short term—peace in the streets and loyalty to the emperors—but disastrous over time. Rome’s population became disengaged from politics, while real power consolidated in the hands of a few.

Modern American Parallels

Fast-forward to contemporary America, and Juvenal’s observation feels uncomfortably relevant. While we don’t have gladiatorial games, we do have our own version of “circuses”—professional sports, reality TV, social media feeds, and celebrity culture that dominate public attention. These aren’t inherently problematic, but they become concerning when they crowd out civic engagement.

Our modern “bread” takes various forms: government assistance programs, subsidies, and economic policies designed to maintain consumer spending. We are saturated with cheap goods, instant delivery services, and mass consumerism. For many, economic struggles are temporarily softened by accessible consumption, from fast food to online shopping. Yet material comfort often masks deeper inequalities and systemic challenges—wage stagnation, healthcare costs, and mounting national debt. These programs often serve legitimate purposes, but they can also function as political tools to maintain public satisfaction and suppress dissent.

Consider how political campaigns increasingly focus on entertainment value rather than substantive policy debates. Politicians hire social media managers and appear on talk shows, understanding that capturing attention often matters more than presenting coherent governance plans. Meanwhile, voter turnout for local elections—where citizens have the most direct impact—remains dismally low.

The Distraction Economy

Perhaps most striking is how our information landscape mirrors Roman spectacles. We’re bombarded with sensational news, viral content, and manufactured controversies that generate strong emotional reactions but little productive action. Complex policy issues get reduced to soundbites and memes, making genuine democratic deliberation increasingly difficult.

Social media algorithms are specifically optimized for engagement, not enlightenment. They feed us content designed to provoke reactions—anger, outrage, schadenfreude—rather than encourage thoughtful consideration of difficult issues. This creates a population that feels politically engaged through constant consumption of political content while remaining largely passive in actual civic participation.

The danger of “bread and circuses” in modern America lies in apathy. When civic participation declines, voter turnout falls, and policy debates get reduced to simplistic slogans, elites face less scrutiny. The result is a weakened democracy, vulnerable to manipulation and short-term thinking.

Breaking the Cycle

Juvenal’s warning doesn’t mean we should abandon entertainment or social programs. Rather, it suggests we need intentional balance. Democratic societies thrive when citizens remain actively engaged in governance beyond just voting every few years.

This means staying informed about local issues, attending town halls, contacting representatives, and participating in community organizations. It means choosing substance over spectacle and long-term thinking over immediate gratification.

The Roman Republic fell partly because its citizens stopped paying attention to governance. Juvenal’s “bread and circuses” reminds us that democracy requires constant vigilance—and that comfortable distraction can be freedom’s most seductive enemy.

