The Grumpy Doc

Grumpy opinions about everything.

A1c Without Diabetes: Context Matters As Much As The Number

If you’ve had routine bloodwork lately, you might have noticed something called hemoglobin A1c (or HbA1c) on your results. For years, this test has been the gold standard for monitoring diabetes, but it’s increasingly also being used to assess metabolic health in people who don’t have diabetes. Let’s dig into what this number actually tells us and whether lower is always better.

What A1c Actually Measures

Hemoglobin A1c reflects your average blood sugar levels over the past two to three months. When glucose circulates in your bloodstream, some of it sticks to hemoglobin—the oxygen-carrying protein in your red blood cells. The more glucose floating around, the more hemoglobin gets “glycated” (coated with sugar). Since red blood cells live about three months, your A1c percentage gives a rolling average of your blood sugar control. It does not capture individual spikes and dips in glucose, but it correlates reasonably well with overall glycemic exposure and is widely used to monitor diabetes control.

For non-diabetics, a normal A1c is generally considered below 5.7%. The prediabetes range sits between 5.7% and 6.4%, while 6.5% or higher on two separate tests typically indicates diabetes. Nondiabetic adults with A1c above about 6% are more likely to have impaired fasting glucose and other cardiometabolic risk factors than those with A1c around 5.2–5.3%.
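To make those cutoffs concrete, here’s a minimal Python sketch that buckets an A1c value using the standard thresholds and converts it to an estimated average glucose (eAG) via the widely used ADAG study regression, eAG (mg/dL) ≈ 28.7 × A1c − 46.7. The function names are illustrative, not from any standard library, and a real diabetes diagnosis requires confirmation on a second test.

```python
def estimated_average_glucose(a1c_percent: float) -> float:
    """Convert A1c (%) to estimated average glucose in mg/dL,
    using the ADAG study regression: eAG = 28.7 * A1c - 46.7."""
    return 28.7 * a1c_percent - 46.7

def classify_a1c(a1c_percent: float) -> str:
    """Bucket an A1c value by the standard diagnostic cutoffs.
    (Diagnosis of diabetes also requires a confirmatory repeat test.)"""
    if a1c_percent < 5.7:
        return "normal"
    elif a1c_percent < 6.5:
        return "prediabetes"
    return "diabetes range"

for a1c in (5.2, 5.9, 6.8):
    print(f"A1c {a1c}% -> {classify_a1c(a1c)}, "
          f"eAG ~ {estimated_average_glucose(a1c):.0f} mg/dL")
```

Running this shows why a tenth of a point matters less than the band you sit in: an A1c of 5.2% corresponds to an average glucose of roughly 100 mg/dL, while 5.9% lands in the prediabetes bucket at roughly 120 mg/dL.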

These cutoffs represent points where research has shown increased risk for complications, but like most biological measurements, they exist on a spectrum rather than as hard dividing lines. 

Even in non-diabetics, A1c can vary by genetics, age, ethnicity, iron levels, sleep quality, and stress—not just diet or exercise. That’s why one person may live at 5.2% with no effort, while another naturally runs 5.6%.

The Prediabetes Gray Zone

Here’s where things get interesting—and a bit complicated. Prediabetes affects roughly 98 million American adults, though most don’t know they have it. An A1c between 5.7% and 6.4% signals that your body’s relationship with glucose isn’t quite right: maybe your cells are becoming resistant to insulin, or your pancreas isn’t producing insulin as efficiently as it once did. Prediabetes isn’t a disease so much as a metabolic warning sign that your body is starting to struggle with glucose regulation, often due to reduced insulin sensitivity, higher visceral fat, chronic stress, poor sleep, or genetics.

The crucial thing about prediabetes is that it’s not a benign waiting room before diabetes. Research shows that even in this intermediate range, you face elevated risks for cardiovascular disease, kidney problems, and nerve damage—though not to the same degree as someone with full-blown diabetes. It often coexists with other metabolic risk factors such as excess weight, dyslipidemia, and elevated blood pressure. A large study published in The Lancet found that people with A1c levels in the prediabetic range had a 15-20% increased risk of cardiovascular events compared to those with normal levels.

The good news? Prediabetes is often reversible. Lifestyle changes—particularly losing 5-7% of body weight through diet and exercise—can bring A1c levels back down. The Diabetes Prevention Program, a landmark study, showed that such interventions reduced the risk of developing diabetes by 58% over three years.

Should Non-Diabetics Aim Lower?

Now we arrive at the million-dollar question: if your A1c is already in the normal range (say, 5.3%), would driving it even lower—to 5.0% or 4.8%—provide additional health benefits?

The honest answer is: we don’t really know, but the evidence suggests probably not much.

Here’s what the research tells us. Population studies have found a continuous relationship between A1c levels and cardiovascular risk even within the normal range, meaning that someone with an A1c of 5.5% might have slightly higher risk than someone at 5.0%. However—and this is critical—this doesn’t necessarily mean that artificially lowering your A1c will reduce that risk. Correlation isn’t causation.

Your A1c reflects your overall metabolic health, dietary patterns, genetics, and lifestyle. Someone who naturally maintains an A1c of 5.0% because they exercise regularly, eat a balanced diet, and have favorable genetics probably has lower risk than someone at 5.5%. But that doesn’t mean the person at 5.5% should obsess over shaving off half a percentage point. Large cohort data suggest that the lowest risk band for nondiabetic adults is roughly an A1c around 5.0–5.6%; below about 5.0% the relationship between A1c and outcomes becomes more complex.

There’s a principle in medicine that “lower is better,” but it has limits. In diabetes treatment, pushing A1c too low can actually increase risks—particularly hypoglycemia (dangerously low blood sugar), which carries its own serious complications. The ACCORD trial, which studied intensive glucose lowering in people with Type 2 diabetes, had to be stopped early because the group targeting very low A1c levels had increased mortality. While this study involved diabetics using medications, it illustrates that extremely low glucose isn’t necessarily optimal.

If someone tries to force their A1c unusually low through extreme dieting, fasting, or intensive exercise, they can run into unintended effects such as fatigue and irritability, hormonal disruption, disordered eating patterns, and nutrient deficiencies. Importantly, extremely low A1c values can sometimes reflect anemia or other medical conditions, not superior health.

For non-diabetics with normal A1c levels, there’s no evidence that trying to push numbers lower through extreme dietary restriction or other interventions provides meaningful benefit. Your body is already handling glucose appropriately. The focus should be on maintaining that healthy state through sustainable lifestyle habits rather than chasing incremental improvements in a single biomarker. In practical terms, the benefit is less about the exact number (say 5.1 versus 4.8) and more about maintaining a metabolic profile that keeps A1c comfortably below the prediabetes threshold over the long term.

What Actually Matters

Rather than fixating on squeezing every tenth of a point out of your A1c, the evidence supports a broader approach to metabolic health. Regular physical activity, maintaining a healthy weight, eating a diet rich in whole foods with plenty of fiber, getting adequate sleep, and managing stress all contribute to healthy glucose metabolism—and they bring countless other benefits beyond A1c.

It’s also worth noting that A1c isn’t perfect. Certain conditions—like anemia, chronic kidney disease, or hemoglobin variants—can make A1c readings inaccurate. Some people have A1c levels that don’t match what their continuous glucose monitors show, a phenomenon called “glycation gap.” A1c is a useful tool, but it’s one piece of a larger metabolic picture.

The bottom line? If your A1c is in the normal range, you’re doing well. Maintain the healthy habits that got you there rather than micromanaging the number itself. If you’re in the prediabetic range, you have a genuine opportunity to prevent diabetes through lifestyle changes, and bringing that number down has clear benefits. But for those already in the healthy zone, obsessing over fractional improvements likely won’t move the needle much on your actual health outcomes.

The Freemasons and the Founding Fathers: Secret Society or Just a Really Good Book Club?

You’ve probably heard the whispers—the Freemasons secretly controlled the American Revolution, George Washington wore a special apron, and there’s a hidden pyramid on the dollar bill. It’s the kind of thing that sounds like it came straight from a Nicolas Cage movie. But like most historical legends, the real story is more interesting (and less conspiratorial) than the mythology.

So, what’s the actual deal with Freemasons and America’s founding? Let’s dig in.

What Even Is Freemasonry?

First things first: Freemasonry started out as actual stonemasons’ guilds back in medieval Europe—think guys who built cathedrals sharing trade secrets. But by the early 1700s, it had transformed into something completely different: a philosophical club where educated men gathered to discuss big ideas about morality, reason, and how to be better humans.

The secrecy? That was part of the appeal. Lodges had rituals and passwords, sure, but the core values weren’t exactly hidden. Freemasons were all about Enlightenment thinking—liberty, equality, the pursuit of knowledge. Basically, the kind of stuff that gets you excited if you’re the type who actually enjoys reading philosophy books.

In colonial America, joining a Masonic lodge was a bit like joining an elite networking group today, except instead of swapping business cards, you discussed natural rights and wore fancy aprons. Lawyers, merchants, printers—the educated professional class—flocked to lodges for both the intellectual stimulation and the social connections.

The Founding Fathers: Who Was Actually In?

Let’s separate fact from fiction when it comes to which founders were card-carrying Masons.

Definitely Masons:

George Washington became a Master Mason at 21 in 1753. He wasn’t the most active member—he didn’t attend meetings constantly—but he took it seriously enough to wear his Masonic apron when he laid the cornerstone of the U.S. Capitol in 1793. That’s a pretty public endorsement.

Benjamin Franklin was perhaps the most dedicated Mason among the founders. Initiated in 1731, he eventually became Grand Master of Pennsylvania’s Grand Lodge and helped establish lodges in France during his diplomatic stint. Franklin was basically the poster child for Enlightenment Masonry.

Paul Revere—yes, that Paul Revere—was Grand Master of Massachusetts. His midnight ride gets all the attention, but his Masonic connections were just as important to his Revolutionary activities.

John Hancock also served as Grand Master of Massachusetts. His oversized signature on the Declaration was matched by his outsized commitment to Masonic ideals.

John Marshall, the Chief Justice who shaped American constitutional law, was a dedicated Mason. So was James Monroe, the fifth president.

Here’s a fun stat: of the 56 signers of the Declaration of Independence, at least nine (about 16%) were Masons. Among the 39 who signed the Constitution, roughly thirteen (33%) belonged to the fraternity.

The Maybes:

Thomas Jefferson? Probably not a Mason, despite endless conspiracy theories. There’s no solid evidence of membership, though his Enlightenment philosophy certainly sounded Masonic. His buddy the Marquis de Lafayette was definitely in, which hasn’t helped dispel the rumors.

Alexander Hamilton? The evidence is murky. Some historians think his writings hint at Masonic sympathies, but there’s no membership record.

Definitely Not:

John Adams wasn’t a Mason and was actually skeptical of secret societies. He still believed in many of the same principles, though—virtue, republican government, that sort of thing.

Did the Masons Really Influence the Revolution?

Here’s where it gets interesting. No, the Freemasons didn’t sit around a lodge plotting revolution like some shadowy cabal. But did their ideas and networks matter? Absolutely.

Think about what Masonic lodges provided: a space where educated colonists could meet, discuss radical ideas about natural rights and self-governance, and build trust across colonial boundaries—all without British officials breathing down their necks. These lodges brought together men from different colonies, different religious backgrounds (Anglicans, Quakers, Deists), and different social classes.

The radical part? Inside a lodge, everyone met “on the level.” It didn’t matter if you were born rich or poor—merit and virtue determined your standing. That’s pretty revolutionary thinking in the 1700s when most of the world still believed some people were just born better than others. Sound familiar? “All men are created equal” has a similar ring to it.

Freemasonry also championed religious tolerance. You had to believe in some kind of Supreme Being, but that was it—no specific creed required. This ecumenical approach directly influenced the founders’ commitment to religious freedom and separation of church and state.

The Masonic motto about moving “from darkness to light” through knowledge wasn’t just ritualistic mumbo-jumbo. It reflected genuine Enlightenment belief in reason and progress—the same intellectual current that powered revolutionary thinking.

What About All That Symbolism?

Okay, let’s address the pyramid and the all-seeing eye on the dollar bill. Are they Masonic? Maybe, maybe not. The Great Seal of the United States definitely uses imagery that Masons also used—but so did lots of 18th-century groups drawing on Enlightenment and classical symbolism. The connection is debated among historians.

What’s undeniable is that Masonic culture emphasized architecture and building as metaphors for constructing a just society. When Washington laid that Capitol cornerstone in his Masonic apron, he was making a statement about building something enduring and meaningful.

The “Conspiracy” Question

Let’s be clear: there was no Masonic conspiracy to create America. The fraternity wasn’t even unified—lodges operated independently, and members included both patriots and loyalists. Officially, Masonic organizations tried to stay neutral during the Revolution, though obviously that didn’t work out perfectly when the war split families and communities.

What is true is that many of the Revolution’s most articulate, influential leaders happened to be Masons. And the fraternity’s values—liberty, equality, reason, fraternity—aligned perfectly with revolutionary ideology. Correlation, not conspiracy.

After the Revolution, Freemasonry exploded in popularity. It became associated with the Enlightenment values that had supposedly won the day. Future presidents including Andrew Jackson, James Polk, and Theodore Roosevelt were all Masons. At its 19th-century peak, an estimated one in five American men belonged to a lodge.

What’s the Bottom Line?

The Freemason influence on America’s founding is real, but it’s cultural rather than conspiratorial. The lodges provided a space where Enlightenment ideas could circulate, where colonial leaders could build networks of trust, and where egalitarian principles could be practiced in miniature.

Washington, Franklin, Hancock, and the others weren’t sitting in smoke-filled rooms with secret handshakes planning to overthrow the British crown. They were part of a broader philosophical movement that valued personal improvement, moral virtue, and human rights. The Masonic lodge was one venue—among many—where those ideas took root.

Freemasonry was one tributary feeding into the river of revolutionary thought, along with classical republicanism, British common law, various religious traditions, and plain old grievances about taxes and representation.

The real story is somehow simpler and more fascinating than the conspiracy theories: a bunch of educated colonists joined a fraternity that encouraged them to think big thoughts about human nature and just governance. Those thoughts, debated in lodges and taverns and town halls, eventually sparked a revolution.

Not because of secret symbols or mysterious rituals, but because ideas about liberty and equality—once you start taking them seriously—are genuinely revolutionary.

True confession—The Grumpy Doc is not now, nor has he ever been, a Mason.

Understanding Parkinson’s Disease: From Diagnosis to Daily Living

When most people think of Parkinson’s disease, they picture the characteristic tremor—that involuntary shaking that has become almost synonymous with the condition. But the reality is far more complex than just one visible symptom. Let’s dig into what’s actually happening in the brain, how doctors figure out what’s going on, and what living with this condition really looks like.

What Causes Parkinson’s Disease?

Here’s where things get frustrating for researchers: despite decades of study, scientists still don’t know exactly what causes the nerve cells in the brain to die. I’m going to apologize in advance, because I’m going to be using a lot of “doctor talk”—no way around it.

What we do know is that nerve cells (neurons) in the substantia nigra portion of the basal ganglia—an area of the brain controlling movement—become impaired or die. These neurons normally produce dopamine, an important brain chemical. When they stop working properly, dopamine levels drop, and that’s when movement problems begin showing up.

But dopamine isn’t the whole story. People with Parkinson’s also lose nerve endings that produce norepinephrine, the main chemical messenger of the sympathetic nervous system, which helps explain why the disease affects so much more than just movement—things like blood pressure, digestion, and energy levels all take a hit.

Most Parkinson’s cases are idiopathic, meaning the cause is unknown, though contributing factors have been identified. Current thinking suggests a complicated mix of genetic and environmental factors. About 5% to 10% of cases begin before age 50, and these early-onset forms are often, though not always, inherited.

Some risk factors have emerged from research: age is the most significant, with about 1% of those over 65 and around 4.3% of those over 85 affected. Traumatic brain injury significantly increases risk, especially if recent, and repeated head injuries from contact sports can cause what’s called post-traumatic parkinsonism. Muhammad Ali is a classic example of this.

Exposure to pesticides and industrial chemicals has also been identified as a risk factor. Interestingly, large epidemiologic studies consistently show that people who smoke have a lower risk of being diagnosed with Parkinson’s disease than never-smokers, although smoking is still strongly discouraged because of its many other harms. Large cohort studies in the U.S. and Europe generally find no direct association between alcohol consumption and Parkinson’s disease. A few observational studies show that moderate drinkers have slightly lower Parkinson’s rates, but researchers believe this may be due to reverse causation (people in early or undiagnosed stages often reduce drinking because of GI or mood changes) and lifestyle confounders (moderate drinkers may differ in socioeconomic status, diet, or activity level). So the “protective” effect is considered speculative, not causal.

The Symptoms: More Than Just Shaking

The hallmark movement symptoms—what doctors call “motor symptoms”—are what usually bring people to the doctor. Slowed movement, called bradykinesia, is required for a Parkinson’s diagnosis. People describe it as muscle weakness, though it’s really about control, not strength. The classic tremor, stiffness, and balance problems round out the main movement issues. Patients frequently show reduced arm swing, shuffling gait, difficulty initiating movement or turning, masked facial expression, decreased blinking, and soft or monotone speech.

But here’s what often surprises people: many individuals later diagnosed with Parkinson’s notice that prior to experiencing stiffness and tremor, they had sleep problems, constipation, loss of smell, and restless legs. These “prodromal symptoms” can show up years before the movement problems become obvious. Other early signs include mood disorders like anxiety and depression.

The cognitive side deserves attention too. Some people experience changes in cognitive function, including problems with memory, attention, and the ability to plan and accomplish tasks. The rate is hard to pin down because these changes overlap with age-related memory problems, but 20% at the time of diagnosis is a commonly cited figure. More contested is how many develop Parkinson’s dementia, with estimates ranging from 20% all the way to 85%.

How Doctors Make the Diagnosis

Here’s something that might surprise you: there are currently no blood or laboratory tests to diagnose non-genetic cases of Parkinson’s. The standard diagnosis is clinical, meaning there’s no test that can give a conclusive result—certain physical symptoms need to be present.

Doctors typically diagnose Parkinson’s by taking a detailed medical history and performing a neurological examination. If symptoms improve after starting medication, that’s another indicator that the person has Parkinson’s.

There are some imaging tools available. The FDA approved an imaging scan called the DaTscan in 2011, which allows doctors to see detailed pictures of the brain’s dopamine system using a radioactive drug and a SPECT scanner. This scan can’t definitively diagnose Parkinson’s, though it helps rule out conditions that mimic it. A hallmark of Parkinson’s is the buildup of misfolded alpha-synuclein proteins (Lewy bodies) inside neurons. Whether this is a cause, an effect, or both is still under study—this part of the science remains somewhat speculative.

Recently, researchers developed something more promising: the alpha-synuclein seeding amplification assay can detect abnormal alpha-synuclein in spinal fluid and may detect Parkinson’s in people who haven’t been diagnosed yet. The catch? It requires a spinal tap and isn’t widely available, though scientists are working on blood and saliva tests.

The early diagnostic challenge is real. Many disorders can cause similar symptoms, and people with Parkinson’s-like symptoms from other causes are sometimes said to have parkinsonism, which includes conditions like multiple system atrophy and Lewy body dementia that require different treatments.

What to Expect: The Prognosis

Let’s address the big question: how does Parkinson’s affect life expectancy? The news here is better than you might think. The average life expectancy of a person with Parkinson’s is generally the same as for someone without the disease.

More specifically, average life expectancy has increased by about 55% since 1967, rising to more than 14.5 years from diagnosis. Modern treatments have made a huge difference. Research indicates that those with Parkinson’s and normal cognitive function appear to have a largely normal life expectancy.

That said, timing matters. Research from 2020 suggests that people who receive a diagnosis before age 70 usually experience a greater reduction in life expectancy, and males with Parkinson’s may have a greater reduction in life expectancy than females.

The disease is progressive, meaning it gets worse over time, but symptoms and progression vary from person to person, and neither you nor your doctor can predict which symptoms you’ll get, when, or how severe they’ll be. The tremor-dominant type usually has a more favorable prognosis than the hypokinetic type.

What actually causes death in advanced Parkinson’s? Advanced symptoms can cause falls, pressure ulcers, swallowing difficulties and general frailty, all of which are linked to death. Aspiration pneumonia—when you inhale food or liquid into the lungs—is the leading cause of death for people with Parkinson’s.

Managing the Disease

Currently, there’s no cure for Parkinson’s, but medications or surgery can improve many of the movement symptoms.

The gold standard medication is levodopa (often combined with carbidopa as Sinemet). Healthcare providers use levodopa cautiously, and they commonly combine it with other medications to keep your body from processing it before it enters your brain. This helps avoid side effects like nausea, vomiting, and low blood pressure when standing up. The tricky part? Over time, the way your body uses levodopa changes, and it can lose effectiveness.

Beyond levodopa, doctors use MAO-B inhibitors and dopamine agonists. As the disease progresses, these medications become less effective and may cause involuntary muscle movements. When drugs stop working well, there are surgical options to treat severe motor symptoms.

The main surgical treatment today is called deep brain stimulation (DBS). It is the most important therapeutic advance since the development of levodopa, and it has been FDA-approved since the late 1990s. A surgeon places thin metal wires called electrodes into one or both sides of the brain, in specific areas that control movement. A second procedure implants an impulse generator battery under the collarbone or in the abdomen. Similar to a heart pacemaker and about the size of a stopwatch, this device delivers electrical stimulation to those targeted brain areas.

A newer treatment is focused ultrasound. Guided by MRI, high-intensity, inaudible sound waves are aimed into the brain, and where these waves converge they create enough energy to destroy a very specific area connected to tremor. It’s considered non-invasive, and the FDA has approved it for Parkinson’s tremor that doesn’t respond to medications.

Don’t underestimate lifestyle interventions either. Physical therapy can improve balance and address muscle stiffness, and regular exercise improves strength, flexibility, and balance. Eating a balanced diet helps—drinking plenty of water and eating enough fiber reduces constipation, while omega-3 fats and magnesium may boost cognition and help with anxiety.

Parkinson’s disease sits at the intersection of aging, genetics, environment, and biology. Diagnosis is clinical, progression is gradual and variable, and treatment has become increasingly sophisticated. While it remains incurable, early diagnosis, personalized medication plans, targeted therapies like DBS, and consistent exercise allow many people to maintain meaningful independence for years.

The key message from specialists? Treatment makes a major difference in keeping symptoms from having worse effects, and adjustments to medications and dosages can hugely impact how Parkinson’s affects your life.

The Enigma of Magical Thinking: From Everyday Enchantment to Political Discourse

Have you ever been talking to someone when you started to think, “How in the world can they believe that?” They may have been engaging in magical thinking. But don’t feel too superior because most likely you have been guilty of the same thing.

Magical thinking is one of those fascinating quirks of human psychology that shows up everywhere—from your friend who won’t talk about their job interview until it’s over, to major political movements that shape our world. At its core, it’s the belief that our thoughts, words, or actions can influence events in ways that completely ignore standard cause-and-effect logic.

What We’re Really Talking About

Magical thinking isn’t new. Our ancestors practiced animism, believing spirits lived in everything around them. They created rituals to appease these spirits or tap into their power. Fast forward to today, and despite all our scientific advances, these patterns of thinking haven’t gone anywhere—they’ve just evolved.

It’s essentially a cognitive bias where we connect events that aren’t truly linked. This typically happens when we’re facing uncertainty, stress, or situations where we feel powerless. The thinking pattern gives us a psychological safety net—a feeling that we’re in control.

How It Shows Up in Daily Life

Superstitions and Rituals

Knocking on wood to prevent bad luck is a universal example. There’s zero logical connection between rapping your knuckles on a wooden surface and your future, but people do it anyway because it feels like taking action against uncertainty.

Athletes are notorious for this. That “lucky” jersey, the pre-game meal eaten in exactly the same order, the specific warm-up routine—these rituals don’t really affect performance, but they can boost confidence and calm nerves, which indirectly helps.

Lucky Charms and Talismans

Rabbit’s feet, four-leaf clovers, special coins—lots of people carry objects they believe bring good fortune. These beliefs come from cultural traditions and personal experiences. While there’s no scientific backing for their power, the comfort they provide is genuinely real.

The Jinx Effect

Ever avoided talking about something good that might happen because you didn’t want to “jinx” it? I worked in emergency rooms for many years, and no one would ever use the word “quiet” for fear it would summon a sudden rush of ambulances. That’s magical thinking connecting your words to external outcomes in a totally irrational way.

Health Decisions

This gets more serious when magical thinking influences medical choices. Some people strongly believe in homeopathic remedies or alternative therapies that lack scientific validation. Interestingly, the placebo effect demonstrates how powerful belief can be—people sometimes experience limited health improvements simply because they believe a treatment works, although these effects are most common in relief of mild to moderate pain.

Gambling Behaviors

Casinos thrive on magical thinking. Blowing on dice, wearing lucky clothes, or believing you’re “due” for a win after several losses—these are all examples of the illusion of control. Gamblers think they can influence random outcomes through specific actions, which can fuel persistent gambling even when they’re losing money.

When Magical Thinking Enters Politics

Here’s where things get more complex and consequential. Magical thinking doesn’t just affect personal decisions—it shapes entire political movements and policy debates.

Conspiracy Theories

QAnon represents one of the most striking modern examples. Followers believe a secret group of powerful figures runs a global operation, and that certain political leaders possess almost supernatural abilities to fight against it. Despite zero credible evidence, this belief system has attracted significant followings, demonstrating how magical thinking can create entire alternate realities in the political sphere.

Pandemic Responses

COVID-19 brought magical thinking into sharp relief. Some people blamed 5G technology for causing the virus, leading to actual attacks on cell towers in multiple countries. Others promoted unproven treatments as miracle cures, despite scientific evidence showing they didn’t work and in some cases were harmful. The desire for simple answers to a complicated crisis made people vulnerable to these beliefs.

Vaccine denial, another pandemic-related example of magical thinking, attributes harmful effects to vaccines despite extensive scientific evidence to the contrary, while simultaneously crediting alternative approaches (like “natural immunity” alone) with special protective powers.

Economic Policies

“Trickle-down economics”—the idea that tax cuts for wealthy individuals automatically generate economic growth and increased government revenue—often functions as magical thinking in policy debates. This theory simplifies incredibly complex economic dynamics and, according to multiple economic analyses, lacks consistent empirical support. Critics point out it ignores the nuances of fiscal policy and income distribution.

Climate Change

Despite overwhelming scientific consensus, some political movements deny climate change reality. This sometimes involves believing that natural cycles alone explain everything, or that technological solutions will magically appear without significant policy intervention. This type of thinking often protects existing economic interests or ideological positions.

Why Our Brains Do This

Pattern Recognition

Humans are pattern-seeking machines. We’re wired to spot connections, even when they don’t exist—a tendency called apophenia. This helped our ancestors survive—better to assume that rustling bush is a tiger than to ignore it—but it also leads us to form superstitions and magical beliefs.

The Comfort Factor

When life feels uncertain or stressful, magical thinking offers psychological comfort. Rituals and lucky charms reduce anxiety and give us a sense of agency, even if that control is illusory.

Cultural Transmission

Many superstitions and magical beliefs get passed down through generations, becoming woven into cultural norms. When we see others engaging in these behaviors, it reinforces them. Social acceptance is powerful.

Political Advantages

In politics specifically, magical thinking persists because it:

  • Simplifies complex issues into digestible narratives
  • Strengthens group identity and belonging
  • Can be exploited by politicians and interest groups to mobilize support
  • Provides psychological certainty in a chaotic political landscape

Finding the Balance

Here’s the thing: magical thinking isn’t purely negative. It serves real psychological functions—helping us cope with uncertainty, reducing anxiety, and giving us a sense of control when the world feels chaotic.

The problem comes when we rely on it too heavily. Avoiding medical treatment for unproven remedies can have serious health consequences. Basing policy decisions on magical thinking rather than evidence can affect millions of people. The key is balance.

The Takeaway

Magical thinking connects us to our shared human history, from ancient animistic beliefs to modern political movements. It reveals how our minds constantly work to make sense of an unpredictable world. By understanding these cognitive patterns, we can appreciate their psychological benefits while staying alert to their limitations.

Whether we’re knocking on wood or evaluating political claims, recognizing magical thinking helps us become more critical consumers of information. We can honor the comfort these beliefs provide while still grounding important decisions in evidence and rational analysis.

The enchantment isn’t going anywhere—it’s part of being human. But awareness of it? That’s our best tool for navigating between the rational and the magical in both our personal lives and our shared political reality.

The Correlation Mirage: How Good Intentions Go Wrong in Health Debates

Understanding the Basics

Here’s the fundamental problem: just because two things happen together doesn’t mean one caused the other. When we say two variables are “correlated,” we’re simply observing that they move in tandem—when one goes up, the other tends to go up (or down). Causation, on the other hand, means that a change in one variable directly causes a change in the other. Think of correlation as a suspicious coincidence, while causation is a proven relationship with a clear mechanism.

The tricky part is that our brains are pattern-seeking machines. We evolved to spot connections quickly because that helped our ancestors survive. If you ate those red berries and got sick, better to assume the berries caused it rather than to wait around for a controlled study. But this mental shortcut can seriously mislead us in the modern world, especially when it comes to complex health issues.

Classic Examples That Illustrate the Problem

Let me give you some examples that show how ridiculous this confusion can get when we’re not careful. There’s a famous correlation between ice cream sales and drowning—both increase during summer months, but ice cream isn’t causing drowning. The real driver is warmer weather, which leads people to both buy more ice cream and to spend more time at beaches or swimming pools where drowning might happen. This is what researchers call a “confounding variable”—a third factor that influences both things you’re measuring.
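A confounder can be watched in action with a tiny simulation. The numbers below are hypothetical, purely for illustration: temperature drives both ice cream sales and drownings, producing a strong raw correlation that vanishes once temperature is controlled for (by correlating the residuals after regressing temperature out of each variable).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: warm weather drives BOTH variables.
temp = rng.normal(75, 10, n)                    # daily temperature (°F)
ice_cream = 2.0 * temp + rng.normal(0, 8, n)    # sales rise with heat
drownings = 0.3 * temp + rng.normal(0, 2, n)    # swimming rises with heat

# The raw correlation looks alarming...
raw_r = np.corrcoef(ice_cream, drownings)[0, 1]

# ...but regressing temperature out of each variable and correlating
# the residuals (a partial correlation) makes it essentially vanish.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r = np.corrcoef(residuals(ice_cream, temp),
                        residuals(drownings, temp))[0, 1]

print(f"raw correlation:     {raw_r:.2f}")      # strong
print(f"partial correlation: {partial_r:.2f}")  # near zero
```

Controlling for a suspected confounder like this is one of the basic tools researchers use to tell a spurious association from a real one.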

Here’s another head-scratcher: there’s a correlation between the number of master’s degrees awarded and box office revenue. Does getting more education somehow boost movie sales? Of course not. This is what we call a spurious correlation—a completely coincidental relationship that exists in the data but has no meaningful connection in reality.

Here’s good news for us coffee drinkers.  For years, studies suggested a correlation between heavy coffee drinking and heart disease. Later research found the real issue: heavy coffee drinkers were also more likely to smoke. Once smoking was controlled for, coffee itself did not increase heart risk.

Perhaps the most amusing example is the decades-long correlation between stork populations and birth rates in Germany and Denmark. As the stork population fluctuated, so did the number of newborns. Now, you could construct a “Theory of the Stork” claiming that storks deliver babies, but the real explanation probably involves other variables like weather patterns, urbanization, or environmental developments that affected both populations.

The medical field offers more serious examples. Suppose you observe a strong correlation between exercise and skin cancer: people who exercise more seem to get skin cancer at higher rates. Without digging deeper, you might panic and conclude that exercise somehow causes cancer. But the actual explanation is far more mundane: people who exercise more tend to spend more time outdoors in the sun, which increases their UV exposure. The confounding variable here is sun exposure, not the exercise itself.

The Vaccine-Autism Controversy: A Cautionary Tale

Now let’s talk about one of the most damaging correlation-causation confusions in recent medical history: the claim that vaccines cause autism. Many childhood vaccines are administered at the same ages when numerous developmental conditions first become noticeable—including autism, seizure disorders, and certain metabolic or genetic issues.  This is a textbook case of how mistaking correlation for causation can have real-world consequences.

The whole mess started in 1998 when Andrew Wakefield, a gastroenterologist at London’s Royal Free Hospital, published a paper in The Lancet describing 12 children, eight of whom were reported as having developed autism after receiving the MMR vaccine. Here’s the thing: this wasn’t even a proper study that could establish causation. It was described as a consecutive case series with no control group or control period—it was simply a description that couldn’t tell you whether one thing causes another.

But why did this idea catch fire so dramatically? The timing created a perfect storm for correlation-causation confusion. Autism becomes apparent early in childhood, around the same time children receive many vaccines, so a temporal relationship will occur by chance alone. Parents naturally searched for explanations, noticed the temporal proximity, and drew what seemed like an obvious conclusion.

The scientific community took these concerns seriously and conducted extensive research. Despite overwhelming data demonstrating that there is no link between vaccines and autism, many parents remain hesitant to immunize their children because of the alleged association. Study after study found no connection. A study of over 500,000 children in Denmark, published in The New England Journal of Medicine in 2002, found no relationship between autism and MMR, as did a subsequent Danish study published in 2019. In April 2015, JAMA published a large study analyzing health records of over 95,000 children, including about 2,000 who were at risk for autism because they had a sibling already diagnosed. It confirmed that the MMR vaccine did not increase the risk for autism spectrum disorder.

The original Wakefield study eventually collapsed under scrutiny. The Lancet retracted the article, and Wakefield was found guilty of deliberate fraud—he picked and chose data that suited his case and falsified facts. Wakefield lost his license to practice medicine after being sanctioned by scientific bodies. But by then, the damage was done.

Here’s the correlation-causation issue in stark terms: the prevalence of autism has increased over time, which researchers and healthcare professionals explain is likely due to multiple factors, including people becoming more aware of autism, improved screening, and updated and expanded diagnostic criteria to include other conditions on the autism spectrum. Meanwhile, immunizations have increased and have dramatically reduced the incidence of vaccine-preventable diseases. These two trends—increasing autism diagnoses and increasing vaccination rates—happened to occur during the same historical period, creating an illusory correlation.

The real causes of autism are complex. There is no single root cause; a combination of influences is likely involved, including certain genetic syndromes, genetic changes affecting cell function, and environmental influences such as premature birth, older parents, and illness during pregnancy. Vaccines simply aren’t part of that picture.

Other Health-Related Confusion

The vaccine-autism controversy isn’t the only place where correlation-causation confusion causes problems in health contexts. Let me give you a few more examples that show how pervasive this issue is and how difficult it can be to distinguish between correlation and causation. 

Consider the relationship between diet and health outcomes. The amount of sodium a person gets in their diet is closely correlated to the total calories they eat—in other words, the more a person eats, the more sodium they’re likely to take in, and eating a lot of calories often leads to obesity. Both obesity and high-sodium diets are believed to contribute to high blood pressure. So, what’s the primary driver? Is it sodium, excess calories, or obesity? These are exactly the kinds of questions researchers must carefully untangle.

Here’s another tricky one: research has shown a correlation between antibiotic use in children and increased risk of obesity, with greater antibiotic use associated with higher obesity risk, particularly for children with four or more exposures. But this correlation alone doesn’t tell us whether antibiotics cause obesity. There could be multiple explanations: perhaps children who need frequent antibiotics have other health issues that predispose them to weight gain, or perhaps the infections themselves (not the antibiotics) are the real issue, or maybe it’s actually a disruption of gut bacteria that matters. Without understanding the exact physiological mechanism, we can’t design effective interventions.

Similarly, increased BMI seems to be associated with an increased risk of several cancers in adults. But it would be erroneous to conclude that simply being overweight directly causes cancer. Socioeconomic factors, environmental toxins, access to healthcare, lifestyle differences, physical activity levels, and diet all intertwine in complex ways. Some people may face multiple risk factors simultaneously, making it difficult to isolate which factors are most significant.

When cell phones first became widely used, there was increasing concern that radiation from them was causing brain cancer. Brain cancer rates have remained stable for decades despite exponential increases in cell-phone use—strong evidence against a causal relationship.

Beyond Statistics

The stakes here go way beyond academic accuracy. When people confuse correlation with causation in health contexts, they make decisions that can harm themselves and others. The 2017 measles epidemic in Minnesota’s Somali community was in no small measure fomented by Wakefield—he didn’t fade away quietly. He and other anti-vaxxers repeatedly proselytized to the community, leading to an approximately 45% reduction in vaccination. At the same time there was an increase in autism diagnoses. Think about that: vaccination rates dropped, yet autism diagnoses continued to rise—the exact opposite of what you’d expect if vaccines caused autism. A word of caution: this is an observation, not a carefully controlled study.

The problem extends to how we evaluate new treatments and risk factors. In clinical medicine, there are treatment protocols in use that are not supported by randomized controlled trials. There are risk factors that have been associated with various diseases where it’s difficult to know for certain if they are actually contributing causes. This uncertainty creates space for misunderstanding.

How Scientists Establish Causation

So, how do researchers move from observing a correlation to proving causation? They look for several key elements. These include: a stronger association between variables (which is more suggestive of cause and effect than a weaker one), proper temporality (the alleged effect must follow the suspected cause), a dose-response relationship (where increasing exposure leads to proportionally greater effects), and a biologically plausible mechanism of action.

The gold standard is the randomized controlled trial, where researchers can carefully control for confounding variables by randomly assigning people to treatment and control groups. For ethical reasons, there are limits to controlled studies—it wouldn’t be appropriate to use two comparable groups and have one undergo a harmful activity while the other does not. That’s why we often rely on observational studies combined with careful statistical methods to rule out alternative explanations.

The Bottom Line

Understanding the difference between correlation and causation isn’t just an academic exercise—it’s a critical thinking skill that helps you navigate health claims, news stories, and medical decisions. The vaccine-autism controversy shows how dangerous it can be when we mistake coincidental timing for causal relationships, especially when those misunderstandings spread through communities and lead to preventable disease outbreaks.

The key takeaway? When you see two things happening together, your brain will want to assume one caused the other. Resist that urge. Ask yourself: could there be a third factor driving both? Could the timing just be coincidental? Is there a clear, testable mechanism that would explain how one causes the other? These questions can help you separate meaningful connections from statistical coincidences—and potentially save you from making poor health decisions based on faulty reasoning.

The Underground Heroes: How Sewers Built Our Cities

When we think about what makes cities possible, agriculture usually gets top billing. Without a steady food surplus, people could not have stopped foraging long enough to become artisans, priests, merchants, or kings. But once people clustered into towns and cities, another, less glamorous need quickly emerged: what to do with all the waste. While no one likes to think about it, without effective methods for sewage disposal, cities would soon become uninhabitable.

When you consider the foundations of modern civilization, sewers probably don’t make your top ten list. But these underground networks deserve way more credit than they get. It is no exaggeration to say that sewage systems—whether open drains in the street or vast subterranean tunnels—were one of the most important technologies that made large cities livable. The story of sewers is really the story of how humans figured out how to live together in large numbers without, well, dying from our own waste.

The Ancient World Gets Creative

The earliest cities faced a pretty basic problem: what do you do with human and animal waste when you’ve got thousands of people living close together? The ancient Indus Valley civilization (around 2600-1900 BCE) came up with one of the first solutions. Archaeological evidence from Harappa and Mohenjo-daro shows they built covered drains and even had individual house connections—pretty impressive for 4,000 years ago.

The Romans, being Romans, took this concept and ran with it. The Cloaca Maxima, built around 600 BCE, started as an open drainage canal but eventually became Rome’s main sewer system. What made Roman sewers special wasn’t just their size, but how they integrated with aqueducts to create a flow-through system that actually worked.

Speculation alert: While we know the Romans understood the practical benefits of sewers, they probably didn’t fully grasp the disease prevention aspect in the way we do today.

The Medieval Mess

After Rome fell, European cities pretty much forgot how to manage waste properly. Medieval cities relied on a charming system where people just dumped waste into the streets, hoping rain would wash it away. Some cities built latrines that emptied into rivers, but most urban waste management was… let’s call it “informal.”

This wasn’t just gross—it was deadly. Cities regularly faced outbreaks of cholera, dysentery, and typhoid, though people didn’t yet understand the connection between contaminated water and disease.

London’s Wake-Up Call

The turning point came in 19th-century London. By the 1850s, the Thames had become essentially an open sewer, and the city’s water supply was contaminated. The “Great Stink” of 1858 made the problem impossible to ignore. The smell from the Thames was so bad that Parliament couldn’t meet. When something smells worse than politics, you know that’s bad.

Enter Joseph Bazalgette, chief engineer of London’s Metropolitan Board of Works. His solution was ambitious: build a comprehensive sewer network that would intercept waste before it reached the Thames and carry it downstream to treatment facilities. The system, completed in the 1870s, used gravity and the natural slope of the land to move waste through a network of tunnels—some large enough to drive a carriage through.

Who wouldn’t love the idea that a man named Thomas Crapper invented the flush toilet? But that’s not quite true. Variations of the flush toilet have been around for over 2,000 years. In 1775, a man named Alexander Cummings invented the S-trap—a curved pipe that prevented sewer gases from backing up into the home—making toilets finally tolerable for indoor use. While Mr. Crapper did not invent the toilet, he did make it functional enough to be routinely installed in homes by creating a workable ballcock mechanism that allowed reliable flushing. He also marketed a toilet of his own design, leading to the now-familiar nickname of “the crapper.”

Reversing A River

Chicago’s development of a sewer system was a landmark feat of engineering and urban planning in the 19th century. Faced with flat, swampy terrain and rapid population growth, the city recruited engineer Ellis S. Chesbrough in 1855 to design the first comprehensive underground sewer system in the United States. Because the landscape offered little natural drainage, the entire city center had to be physically raised by several feet—an ambitious task that involved elevating streets and even entire buildings above their original grade to allow for gravity-based drainage into the Chicago River.

Apparently, no one realized this would pollute Lake Michigan, the city’s main drinking water source, a classic example of unintended consequences. This led to further innovation, including the construction of a tunnel extending two miles under the lake to bring in cleaner water (completed in 1866) and, ultimately, the monumental reversal of the Chicago River’s flow in 1900. This project diverted wastewater away from the lake and toward the Mississippi basin, following the time-tested political solution of sending your problems downstream.

The Science Behind the Solution

What made modern sewer systems revolutionary wasn’t just engineering—it was the growing understanding of how disease spreads. Dr. John Snow’s work during London’s 1854 cholera outbreak proved that contaminated water, not “bad air,” was spreading the disease. This discovery gave city planners the scientific backing they needed to invest heavily in sewer infrastructure.

Modern sewer systems work on relatively simple principles: gravity moves waste through sloped pipes to treatment facilities, where biological and chemical processes break down harmful materials before releasing treated water back into the environment. The key innovation was creating separate systems for stormwater and sewage, preventing overflow during heavy rains.
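The gravity principle can be made concrete with Manning's equation, the standard open-channel formula engineers use to relate pipe slope and size to flow velocity. A minimal sketch follows; the pipe diameter, slope, and roughness value are illustrative assumptions, not figures from the text:

```python
def manning_velocity(diameter_m, slope, n=0.013):
    """Flow velocity (m/s) for a circular pipe flowing full, via Manning's
    equation in SI units: v = (1/n) * R^(2/3) * S^(1/2), where the
    hydraulic radius R = D/4 for a full pipe, S is the slope (drop per
    unit length), and n is the roughness coefficient (~0.013 for concrete).
    """
    R = diameter_m / 4.0
    return (1.0 / n) * R ** (2.0 / 3.0) * slope ** 0.5

# A hypothetical 0.3 m concrete pipe laid at a 1% grade:
v = manning_velocity(0.3, 0.01)
print(f"{v:.2f} m/s")
```

Even this rough calculation shows why slope matters: at a 1% grade the flow moves fast enough to keep solids suspended (designers typically aim for at least about 0.6 m/s, the so-called self-cleansing velocity), which is exactly why Chicago had to raise its streets to create that slope.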

Cities Transform

The impact was immediate and dramatic. Cities with comprehensive sewer systems saw massive drops in waterborne diseases. Life expectancy increased, child mortality plummeted, and for the first time in human history, really large cities became livable spaces rather than death traps.

Sewers enabled urban growth on an unprecedented scale. Without sewers, cities like New York, Chicago, and London couldn’t support populations in the millions. The investment in underground infrastructure became the foundation for everything else—commerce, industry, culture—that makes cities economic powerhouses.

Modern Challenges

Today’s sewer systems face new challenges. Climate change brings more intense storms that can overwhelm older systems. Growing populations strain infrastructure that was built decades ago. Many cities are dealing with the expensive reality that sewer systems, built to last 50–100 years, have outlived their life expectancy and need major upgrades or replacement.

Prediction: Cities will likely need to invest heavily in “smart” sewer systems over the next few decades—networks that use sensors and data to manage flow more efficiently and prevent overflows.

The Bottom Line

Sewers represent one of humanity’s most important but least appreciated innovations. They made modern urban life possible by solving the fundamental problem of waste management on a large scale. Without this underground network, our cities and the economic and cultural benefits they provide simply couldn’t exist.

The next time you turn on a tap or use indoor plumbing, remember that you’re benefiting from centuries of engineering innovation that literally built the foundation of modern civilization, one pipe at a time.

Sometimes when I’m researching articles, I find myself going down a rabbit hole.  This time I went down the drain.

Christmas Trivia Quiz

A lighthearted Christmas quiz to make you smile. I posted this last year; see if you do better. Answers are at the end.

Question 1

Which country is credited with starting the Christmas tree tradition?

  • A. Germany
  • B. Sweden
  • C. England

Question 2

What was the first song ever broadcast from space?

  • A. “Silent Night”
  • B. “Jingle Bells”
  • C. “Frosty the Snowman”

Question 3

What gift did the Little Drummer Boy give to the newborn Jesus?

  • A. A lamb
  • B. Gold
  • C. A song

Question 4

In Charles Dickens’ A Christmas Carol, what is the name of Scrooge’s deceased business partner?

  • A. Jacob Marley
  • B. Bob Cratchit
  • C. Fred

Question 5

In the movie Home Alone, where does the McCallister family go on vacation when they leave Kevin behind?

  • A. Paris
  • B. Rome
  • C. London

Question 6

Which U.S. state was the first to recognize Christmas as an official holiday?

  • A. Alabama
  • B. Virginia
  • C. New York

Question 7

Who wrote the famous Christmas poem ‘Twas the Night Before Christmas?

  • A. Clement Clarke Moore
  • B. Washington Irving
  • C. Edgar Allan Poe

Question 8

What is the most popular Christmas movie of all time, according to box office records?

  • A. Home Alone
  • B. Elf
  • C. Dr. Seuss’ The Grinch (2018)

Question 9

In which Gospel do we find the account of the angel announcing Jesus’ birth to shepherds?

  • A. Matthew
  • B. Luke
  • C. John

Question 10

Who is the villain in The Nightmare Before Christmas?

  • A. The Grinch
  • B. Oogie Boogie
  • C. Jack Skellington

Question 11

What was the name of George Bailey’s guardian angel in It’s a Wonderful Life?

  • A. Alfred
  • B. Clarence
  • C. Harold

Question 12

Which reindeer is Rudolph’s father?

  • A. Blitzen
  • B. Prancer
  • C. Donner

Question 13

What is the best-selling Christmas single of all time?

  • A. “Last Christmas”
  • B. “White Christmas”
  • C. “All I Want for Christmas Is You”

Question 14

What Christmas beverage is also known as “milk punch”?

  • A. Mulled wine
  • B. Eggnog
  • C. Wassail

Question 15

What is the name of the Grinch’s dog in How the Grinch Stole Christmas?

  • A. Max
  • B. Charlie
  • C. Spot

Question 16

Which department store is featured in Miracle on 34th Street?

  • A. Bloomingdale’s
  • B. Macy’s
  • C. Sears

Question 17

What plant is associated with Christmas because of its red and green colors?

  • A. Holly
  • B. Poinsettia
  • C. Mistletoe

Question 18

In the song “The Twelve Days of Christmas,” how many total gifts are given by the end?

  • A. 78
  • B. 364
  • C. 144

Question 19

What animated Christmas movie features a train bound for the North Pole?

  • A. Frosty the Snowman
  • B. Arthur Christmas
  • C. The Polar Express

Question 20

Which Christmas character is known as “Kris Kringle”?

  • A. Santa Claus
  • B. St. Nicholas
  • C. Frosty

Question 21

What is the name of the holiday celebrated on December 26 in many countries, including the UK and Canada?

  • A. Boxing Day
  • B. St. Stephen’s Day
  • C. Epiphany

Question 22

Which classic Christmas song contains the lyrics, “In the meadow, we can build a snowman”?

  • A. “Jingle Bells”
  • B. “Let It Snow”
  • C. “Winter Wonderland”

Question 23

In the movie Elf, what is the first rule in the Code of Elves?

  • A. Treat every day like Christmas
  • B. The best way to spread Christmas cheer is singing loud for all to hear
  • C. Always be kind

Question 24

Which biblical figures followed a star to find the baby Jesus?

  • A. Shepherds
  • B. Wise Men (Magi)
  • C. Angels

Question 25

In A Charlie Brown Christmas, what does Charlie Brown set out to find for the Christmas play?

  • A. A wreath
  • B. A Christmas tree
  • C. A Santa costume

Question 26

What is the name of the miserly character in Dr. Seuss’ How the Grinch Stole Christmas?

  • A. Scrooge
  • B. The Grinch
  • C. Ebenezer

Question 27

Who played the title role in the 2003 movie Bad Santa?

  • A. Robin Williams
  • B. Billy Bob Thornton
  • C. Will Ferrell

Question 28

What Christmas tradition involves hanging a stocking over the fireplace?

  • A. To invite Santa into the home
  • B. To receive small gifts or treats
  • C. To honor St. Nicholas

Question 29

Which of the following is a traditional Christmas dessert in England?

  • A. Pumpkin pie
  • B. Christmas pudding
  • C. Fruitcake

Question 30

What is the opening line of the Christmas carol “O Holy Night”?

  • A. “Silent night, holy night”
  • B. “O holy night, the stars are brightly shining”
  • C. “It came upon the midnight clear”

Answers

1 A, 2 B, 3 C, 4 A, 5 A, 6 A, 7 A, 8 C, 9 B, 10 B, 11 B, 12 C, 13 B, 14 B, 15 A, 16 B, 17 B, 18 B, 19 C, 20 A, 21 A, 22 C, 23 A, 24 B, 25 B, 26 B, 27 B, 28 B, 29 B, 30 B

How did you do?


The Fascinating Journey of Christmas Cards: From Victorian Innovation to Global Tradition

Have you ever wondered how the tradition of sending Christmas cards got started? It’s a story that combines busy social calendars, a new postal system, and one clever solution that became a worldwide phenomenon.

Before Christmas Cards: The Early Messengers

Long before anyone thought to mass-produce holiday greetings, people were already experimenting with seasonal messages. In fifteenth-century Germany, the “Andachtsbilder” appeared—proto-greeting cards with religious imagery, usually depicting the baby Jesus, accompanied by the inscription “Ein gut selig jar” (“A good and blessed year”)—that were presented as gifts during the Christmas season. Additionally, handwritten letters wishing “Merry Christmas” date from as early as 1534. These weren’t Christmas cards as we know them, but they laid the groundwork.

The first known Christmas card was sent in 1611 by Michael Maier, a German physician, to King James I of England and his son, with an elaborate greeting celebrating “the birthday of the Sacred King.” This, however, was an ornate document rather than a mass-produced card. The true breakthrough came much later.

In the late 1700s, British schoolchildren were creating their own versions. They would take large sheets of decorated writing paper and pen messages like “Love to Dearest Mummy at the Christmas Season” to show their parents how much their handwriting had improved over the year. It was part homework assignment, part holiday greeting—definitely more practical than sentimental!

Also during the latter part of the 18th century, wealthy British families adopted a more personal variant: handwritten holiday letters. These were carefully composed greetings expressing seasonal goodwill and family updates, often decorated with small flourishes or illustrations—a forerunner of the much-maligned Christmas letter. In Victorian England—where social correspondence was almost an art form—sending letters for Christmas and New Year became fashionable among the middle class. The combination of widespread literacy and improvements in the postal system laid the groundwork for something new: a printed, affordable Christmas greeting.

The Birth of the Modern Christmas Card

The real game-changer came in 1843, thanks to a social problem that sounds remarkably modern: too many people to keep in touch with and not enough time. Henry Cole, a prominent civil servant, helped establish the Penny Post postal system—named after the cost of posting a letter.  He found himself with unanswered mail piling up during the busy Christmas season. His solution? Why not create one design that could be sent to everyone?

Cole commissioned his friend, artist John Callcott Horsley, to design what would become the world’s first commercial Christmas card. The design featured three generations of the Cole family raising a toast in celebration, surrounded by scenes depicting acts of charity. The message was simple: “A Merry Christmas and a Happy New Year to You.”

About 2,050 cards were printed in two versions—a black and white version for sixpence and a hand-colored version for one shilling. Interestingly, the card caused some controversy. The image showed young children enjoying glasses of wine with their family, which upset the Victorian temperance movement.

The Penny Post, introduced in 1840, made mailing affordable and accessible. What started as Cole’s time-saving solution quickly caught on among his friends and acquaintances, though it took a few decades for the tradition to really explode in popularity.

Crossing the Atlantic

Christmas cards made their way to America in the late 1840s, but they were expensive luxuries at first. In 1875, Louis Prang, a German-born printer who had worked on early cards in England, began mass-producing cards in America. He made them affordable for average families. His first cards featured flowers, plants, and children. By the 1880s, Prang was producing over five million cards annually.

 Christmas cards spread rapidly with improvements in both postal systems and printing. Victorian cards often featured sentimental, elaborate images—sometimes anthropomorphic animals or unexpected motifs. The Hall Brothers Company (later Hallmark) shifted the format to folded cards in envelopes rather than postcards, allowing for more personal written messages—setting the standard still seen today.

The 20th century brought both industrialization and personalization to the Christmas card. Advances in color printing, photography, and mass marketing meant that cards became cheaper and more varied. In the 1920s and 1930s, families began sending cards featuring their own photographs, a tradition that gained momentum after World War II with the rise of suburban life and inexpensive cameras. By the 1950s and 1960s, Christmas cards had become a fixture of middle-class life. Designs reflected changing tastes—from sentimental Victorian nostalgia to sleek mid-century modernism. Surprisingly, the first known Christmas card with a personal photo was sent by Annie Oakley in 1891, using a photo taken during a visit to Scotland.

Christmas Cards Around the World Today

Fast forward to today, and Christmas card traditions vary wildly depending on where you are. In Great Britain and the US, sending cards remains a major tradition. British people send around 55 cards per year on average, with Christmas cards accounting for almost half of all greeting card sales.

But the tradition looks quite different in other parts of the world. In Japan, where only about 1.5% of the population is Christian, Christmas is celebrated as a secular, romantic holiday rather than a religious one. Christmas Eve is treated similarly to Valentine’s Day, with couples exchanging gifts. While some people have adopted the Western custom of sending Christmas cards, the dominant card tradition is the nengajo—the New Year’s card—sent to friends, family, and business associates to express wishes for a happy and prosperous year.

In the Philippines, one of Asia’s most Christian nations, Christmas is celebrated with incredible enthusiasm starting as early as September, with the season officially beginning with nine days of dawn masses on December 16. Cards are part of the celebration, but they’re just one element of an extended, community-focused holiday.

In Australia, the tradition of sending handwritten Christmas cards remains popular despite the summer heat. Australian cards adapt the tradition to local culture, often featuring unique imagery—Santa in shorts and sandals, or kangaroos instead of reindeer.

The Digital Shift

Today, while e-cards and social media posts have certainly cut into traditional card sales, many people still cherish the ritual of sending and receiving physical cards. There’s something irreplaceable about finding a thoughtful card in your mailbox among the bills and advertisements.

What started as Henry Cole’s practical solution to a busy social calendar has evolved into a diverse global tradition, adapted and reimagined by different cultures worldwide. Whether you’re mailing elaborate family photo cards, sending quick e-greetings, or exchanging romantic messages in Tokyo, you’re participating in a tradition that’s nearly two centuries old.

The Death of Vincent Van Gogh: A Controversy That Won’t Die

The wheat fields outside Auvers-sur-Oise have become one of art history’s most debated crime scenes. On the evening of July 27, 1890, Vincent van Gogh returned to his small inn, badly wounded and clutching his chest. What happened in those fields remains unsettled: did he shoot himself, as generations believed, or was he caught in some kind of accident—or even an intentional shooting by someone else?

When I first got interested in art, van Gogh grabbed me right away. His paintings felt urgent, almost breathless, as if he couldn’t get his vision out fast enough. The more I learned about his short, turbulent life, the more I wondered what forces drove that energy—and what cut it short.

How we interpret his death matters. If we see it as suicide, we reinforce the familiar trope of the “tortured genius,” a man undone by the same demons that fueled his creativity. If it wasn’t suicide, then that myth fractures, and we’re left with someone whose life ended not by fate or torment, but by chance and circumstance.

The Traditional Story: A Troubled Artist’s Final Day

For more than a century, the standard version has been simple: van Gogh, struggling with depression and recurring psychiatric crises, walked into a wheat field and shot himself. He had been living in Auvers-sur-Oise and painting furiously—roughly 70 works in 70 days. Some saw that productivity as a sign of mounting instability.

According to Adeline Ravoux, the innkeeper’s daughter, he left after breakfast and didn’t return until after dark. When police asked what happened, he reportedly said, “Do not accuse anyone. It is I who wanted to kill myself.”

Van Gogh had a long history of mental-health struggles—severe depression, psychotic breaks, even earlier suicidal behavior. His letters often carried a tone of exhaustion; in one to his brother Theo, he wrote, “The sadness will last forever.”

Theo, who died just six months later, recalled his brother saying, “I wish I could have gone away like this.”

Doctors, friends, and family at the time took all this as confirmation of suicide. The narrative of a gifted but tormented artist ending his own life fit neatly into late-19th-century ideas about genius and madness—and it has persisted ever since. The Van Gogh Museum still supports this interpretation.

The Murder Theory: A Challenge to the Old Story

The debate shifted dramatically in 2011 when Steven Naifeh and Gregory White Smith published Van Gogh: The Life. They argued the suicide story didn’t fully line up with the evidence.

Their alternative theory centers on René Secrétan, a 16-year-old local who liked to tease van Gogh and who reportedly had access to a faulty pistol. The authors note several problems with the suicide explanation:

  • Van Gogh rarely had access to weapons and had a stated dislike for them.
  • His final paintings were calm, not despairing.
  • He had described suicide as sinful.
  • He somehow walked more than a mile back to the inn after being shot.
  • His painting gear from that day was never found.

They speculate that Secrétan may have accidentally shot him—and that van Gogh, not wanting to ruin the boy’s life, claimed it was suicide. This remains speculation, but it’s one reason the theory caught fire.

The Forensic Debate

A 2020 study added fuel to the controversy. Researchers tested the same model of revolver and reported that a self-inflicted shot at that angle and range likely would have left powder burns—burns that weren’t noted in van Gogh’s case.

Their conclusion: the injury was “in all medical probability” inconsistent with suicide.

Critics push back, noting that van Gogh’s clothing could have blocked powder residue or that details simply weren’t recorded well in 1890. With no autopsy and no preserved clothing, much of this is still guesswork.

The Counterargument: Why Many Experts Still Reject The Murder Theory

Van Gogh scholar Martin Bailey—among others—finds the murder theory unconvincing. Key points include:

  • Secrétan denied shooting van Gogh when interviewed later in life.
  • He claimed he had left town before the incident.
  • It’s extremely rare for a homicide victim to insist it was suicide.
  • Theo, Dr. Paul Gachet, and others closest to the situation all believed it was self-inflicted.
  • Van Gogh’s burial outside the Catholic cemetery was itself a sign the community accepted suicide—something they would likely have resisted if foul play had been suspected.

What We Actually Know

Despite a mountain of theories, only a handful of facts are certain:

  • Van Gogh was shot in the chest on July 27, 1890.
  • He survived for about 30 hours and died on July 29.
  • No autopsy was performed.
  • The weapon was never recovered.
  • His art supplies from that day disappeared.
  • He left no suicide note.

Everything else rests on testimony, conjecture, and the limits of 19th-century medical documentation.

Why This Debate Matters

The dispute has moved far beyond academia. Films like Loving Vincent (2017) and At Eternity’s Gate (2018) lean into the accident/murder theory. The discussion reflects a broader cultural question: why do we romanticize suffering when we talk about creativity?

If we assume suicide, we risk locking van Gogh into the stereotype that great art comes only from great pain. If we assume an accident, we open the door to imagining a different future—one where he kept painting, evolving, maybe even recovering.

Could Modern Forensics Solve It?

Some researchers want to exhume van Gogh’s remains to analyze the wound using modern techniques. Proponents argue that even degraded bone might show clues about firing distance or angle. That said:

  • It would require major legal and ethical approval.
  • There’s no guarantee the remains would provide answers after 130+ years.

At this point, it remains an academic long shot.

The Bottom Line

Most major institutions still support the traditional suicide explanation. But alternative theories—especially the forensic questions—have made the old story less airtight than it once seemed.

The most honest conclusion is also the least satisfying—we may never know exactly what happened in that wheat field. Too much evidence is missing, and too much time has passed. What remains is a mystery as layered and emotional as the brushstrokes he left behind.
