Grumpy opinions about everything.

Author: John Turley

Always Faithful: A Brief History of the Marine Corps Motto

When I started training as a Marine more than 50 years ago, one of the first things we were taught was the call and response “Semper Fi”, followed quickly by “Do or Die”. But to Marines, Semper Fi, Semper Fidelis—Always Faithful—is more than just a motto. It becomes a personal belief system, a statement of individual integrity, and a way of life. Faithful to country, faithful to the Corps, faithful to fellow Marines, faithful to duty. It reflects your faith in the Marine Corps and your fellow Marines.

How did Marines come to adopt this distinctly non-martial motto? Other, more military-sounding mottos and nicknames come to mind: “Devil Dogs”, “First to Fight”, and “Leathernecks”. But Semper Fidelis has become the way Marines see themselves, so much so that their greeting to one another is “Semper Fi”. The same ethos is embodied in an unofficial Marine Corps motto, “No Man Left Behind”.

But what is the origin of this motto that seems to sum up the entire philosophy of the Marine Corps?

The United States Marine Corps is known for its discipline, dedication, and fierce loyalty, qualities that are symbolized by Semper Fidelis. Translated from Latin, the phrase means “Always Faithful.” But like many traditions within the military, the motto is rooted in a rich history that stretches back hundreds of years.

The Marine Corps was established in 1775 as the Continental Marines, but the famous motto did not appear until more than a century later. By the early 19th century, several mottos had been associated with the Marines, including “Fortitudine” (With Fortitude) and “By Sea and by Land.” While these phrases captured elements of the Marines’ mission, they lacked the enduring emotional impact that would ultimately come with Semper Fidelis.

It was in 1883 that the motto was formally adopted under the leadership of the 8th Commandant, Colonel Charles McCawley. Colonel McCawley likely chose that motto because it embodies the values of loyalty, faithfulness and dedication that he believed should define every Marine.  Unfortunately, we will never know his exact reason for choosing this specific motto because he did not leave any documentation about his thought process.  Regardless, from that point on, the motto became inseparable from the identity of the Corps.

The phrase “Semper Fidelis” has much older origins than its Marine Corps adoption. It’s believed to have originated from phrases used by senators in ancient Rome, with the earliest recorded use as a motto dating back to the French town of Abbeville in 1369. The phrase has been used by various European families since the 16th century, and possibly as early as the 13th century.

The earliest recorded military use was by the Duke of Beaufort’s Regiment of Foot, raised in southwestern England in 1685. The motto also has connections to Irish, Scottish, and English nobility, as well as 17th-century European military units, some of whose members may have emigrated to American colonies in the 1690s.

The choice of the Latin phrase by Colonel McCawley was likely deliberate. Latin carries with it a sense of permanence and tradition, and its concise wording communicated volumes in only two words. “Always Faithful” perfectly captured the bond that must exist between Marines and the responsibilities they shoulder. Marines are expected to remain faithful to the mission, to their comrades in arms, and to the United States, regardless of the personal cost. It is this idea of unshakable fidelity that has come to define what it means to wear the Eagle, Globe, and Anchor.

Since its adoption, Semper Fidelis has carried Marines through every conflict the United States has faced. From the battlefields of World War I, where Marines earned the name “Devil Dogs,” to the grueling island campaigns of the Pacific in World War II, to the frozen battlefields of Korea, to the steaming jungles of Vietnam, Marines have demonstrated again and again what it means to be “Always Faithful.” In modern times, whether in Iraq, Afghanistan, or in humanitarian missions across the globe, this motto continues to serve as a reminder of the Corps’ unwavering commitment.

The phrase has also influenced the broader culture of the Marines, inspiring the title of the official Marine Corps march, “Semper Fidelis,” composed by John Philip Sousa in 1888, which remains a powerful symbol of pride and esprit de corps.

The motto’s meaning extends beyond active service. Marines pride themselves on being “once a Marine, always a Marine,” and Semper Fidelis reflects that lifelong bond. Even after leaving the uniform behind, Marines carry that sense of loyalty into civilian life, honoring the values and traditions of their service. For many, it becomes a central guiding principle throughout their lives. Marine veterans never say “I was a Marine”; they say “I am a Marine”.

In the end, the motto “Semper Fidelis” is far more than a catchy phrase. It is both a promise and a challenge—a pledge of unwavering loyalty and a challenge to live up to the highest standards of duty, honor, and fidelity. When Marines declare “Semper Fi,” they acknowledge not only their devotion to the Marine Corps, but also the unbreakable loyalty that binds them together as brothers and sisters in arms.

The celebration of the 250th anniversary of the signing of the Declaration of Independence is coming up next year on July 4th. But what about the events leading up to this? What about the men and women who helped make this happen? There are events coming up to commemorate the 250th anniversary of the founding of the Continental Navy and the Continental Marines in 1775. We will be holding commemorative celebrations here in West Virginia and there will be a national event in Philadelphia in October of this year.

When Evidence Isn’t Enough: The Crisis of Science in Public Life

While I would never call myself a scientist, as a physician my whole professional life is built on a belief in and trust of science. I am distressed that so many people have chosen to abandon that trust in favor of misinformation.

Throughout history, scientific discovery has been humanity’s most reliable guide to progress. From the germ theory of disease to space exploration, science has reshaped how we live and what we believe possible. Yet in recent years, the very foundation of this methodical pursuit—evidence, observation, and experimentation—has come under sustained political, cultural, and economic attack. This struggle is often described as “the war on science,” a phrase that captures how debates once rooted in policy have shifted into battles over truth itself.

The numbers tell a stark story. The National Science Foundation has terminated roughly 1,040 grants that would have awarded $739 million to researchers and has awarded only 52 undergraduate research grants in 2025, compared to about 200 annually since 2015. The proposed cuts are staggering: the Trump administration’s budget request for the NSF in fiscal year 2026 is just $4 billion, a 55% reduction from what Congress appropriated for 2025.

At the heart of the conflict lies mistrust. Science requires patience since answers evolve as new data emerge. But in a world driven by instant communication and ideological certainties, that evolving nature is often cast as contradiction or weakness. Critics dismiss changing conclusions not as hallmarks of rigorous inquiry, but as evidence of unreliability. The result is a dangerous fracture; science depends on trust in evidence, while many segments of society increasingly place trust in ideology or anecdote or even outright falsehoods.

Climate change is one of the most visible fronts in this battle. Virtually every major scientific body worldwide affirms that human activities are driving global warming. Yet climate scientists are routinely accused of bias or conspiracy, their data questioned, and their motives impugned. What is often overlooked is that the controversy stems not from the complexity of climate systems—scientists have long acknowledged uncertainties—but from the political and economic interests threatened by the solutions science prescribes. When climate scientists publish evidence of global warming, their research doesn’t just describe weather patterns—it challenges powerful industries built on fossil fuels.

Public health provides another stark example. During the COVID-19 pandemic, scientific guidance became subject to fierce political polarization. Masking policies, vaccine safety, and even simple social distancing rules morphed into partisan symbols rather than matters of medical evidence. Scientists found themselves vilified, their professional debates distorted into talking points. The losers in this exchange were not the scientists themselves but the broader public, left unable to place clear trust in the institutions dedicated to safeguarding their health.

Underlying these conflicts are powerful currents. Some industries resist regulation by casting doubt on findings that threaten profit. Certain political movements thrive on skepticism of expertise, channeling populist distrust of “elites” toward scientists. And in the swirl of social media, misinformation spreads more rapidly than peer-reviewed studies, eroding the influence of evidence before consensus can take hold.

What makes this particularly concerning is the timing. America’s main scientific and technological rivals are rising fast. Federal research and development funding as a percentage of GDP has been declining for decades, and the lead the U.S. once enjoyed over China in R&D expenditure has largely been erased.

While the war on science is often treated as a distinctly modern dilemma, born of political polarization, mass media, and cultural distrust of expertise, its roots stretch back centuries. Galileo was silenced for challenging religious dogma. Early physicians were scorned when they argued that invisible germs, not miasmas or curses, caused disease. During the Enlightenment of the 17th and 18th centuries, thinkers faced their own version of this struggle—a battle between dogma and reason, authority and evidence, tradition and discovery. In every case, vested interests—whether theological, cultural, or economic—feared the disruption that scientific truth carried. Understanding those earlier conflicts provides valuable context for our challenges today.

The stakes today, however, feel higher. Our era’s challenges—climate change, pandemics, artificial intelligence, genetic engineering—demand unprecedented reliance on scientific understanding. To wage war on science is, in effect, to wage war on our own best chance for survival and responsible progress. If truth becomes negotiable, then evidence loses meaning, and with it, the possibility of reasoned self-government. That is why the war on science cannot be dismissed as a technical squabble—it is a philosophical contest echoing the Enlightenment battles that shaped modern civilization.

Ultimately, the struggle is less about data than about values. Do we commit to curiosity, openness, and the willingness to change our minds? Or do we cling to certainties that soothe but endanger us in the end? The war on science will not be won by scientists alone. It can only be resolved if society restores trust in evidence as the most reliable compass we have—however unsettling the direction it may point. There may be alternative opinions, but there are no alternative facts.

Frédéric Bazille: The Impressionist Who Never Saw Impressionism

I like to think of my research and my writing as being eclectic, although sometimes I have to admit they may be better described as unfocused. This post may be an example of one of those episodes. Recently I was looking through a magazine and saw an ad with an illustration that was obviously based on Monet’s Garden in Giverny. It set me thinking about a series of lectures I had watched on the French Impressionists, and that got me thinking about Frédéric Bazille, an artist I have always found fascinating. I decided to spend a little time looking into him. So, I completely forgot about the project I was working on and started on this one. That may be why I have so many unfinished articles and random files of unrelated research.

In French Impressionism there are names that stand in the forefront—Monet, Renoir, Degas, Manet—and then there are names that hover just behind them. Frédéric Bazille is one of those in the shadows. He was part of the same circle, painted with the same daring brush, and showed the same fascination with light and color. Yet his life ended before Impressionism even had a name. Bazille was killed in the Franco-Prussian War in 1870, at just twenty-eight years old. His death robbed the movement of both a gifted painter and a generous friend who helped shape its history.

A Wealthy Outsider

Bazille was born in Montpellier in 1841, the son of a prosperous Protestant family. Unlike Monet and Renoir, who often lived in dire poverty, Bazille never worried about how to pay for paints or rent. That freedom made him unusual in Parisian art circles.

Bazille became interested in painting after seeing works by Eugène Delacroix, but his family insisted he also study medicine to ensure financial independence. By 1859, he had begun taking drawing and painting classes at the Musée Fabre in Montpellier with local sculptors Joseph and Auguste Baussa.

In 1862, Bazille moved to Paris ostensibly to continue his medical studies, but he also enrolled as a painting student in Charles Gleyre’s studio. There he met three fellow students who would become close friends and collaborators: Claude Monet, Pierre-Auguste Renoir, and Alfred Sisley. He soon became part of a group of artists and writers that also included Édouard Manet and Émile Zola. After failing his medical exam (perhaps intentionally) in 1864, Bazille began painting full-time.

Bazille used his money to help everyone in his circle. He rented large studios in Paris, where friends who couldn’t afford space of their own painted and slept. He bought their finished canvases when no one else would. To Manet, Renoir, Monet, and Sisley, Bazille was not just a colleague but a lifeline. Without him, some of the paintings we now consider cornerstones of Impressionism might never have been finished.

Experiments With Light

What makes Bazille more than a wealthy patron is his own work as an artist. He was fascinated by how sunlight transformed color and how outdoor settings could frame the human figure. Long before the Impressionists formally broke with mainstream French art, Bazille was exploring these themes.  In The Pink Dress (1864), he painted his cousin on a terrace overlooking the countryside, her figure half-lost in shadow, half-caught by light.  In Family Reunion (1867), he executed a technically difficult group portrait outside, with natural sunlight revealing the folds of dresses and the textures of grass. In The Studio on the Rue de la Condamine (1870), Bazille turned his brush on his own circle, capturing Manet, Monet, Renoir, and Zola in a collective portrait of the avant-garde.

His style was less free than Monet’s and more deliberate than Renoir’s, but the suggestion of Impressionism is unmistakable. He was bridging the academic precision of his training with the looser brushwork of the new school.

Bazille exhibited at the Salon in Paris in 1866 and 1868.  The Salon was the most prestigious and conservative art exhibition in France. These official exhibitions became increasingly controversial as they repeatedly rejected innovative artists like the Impressionists, leading to the creation of alternative exhibitions such as the famous Salon des Refusés in 1863 and independent Impressionist exhibitions starting in 1874.

A Call to War

When France declared war on Prussia in 1870, Bazille enlisted in the army. He could easily have avoided service — his family connections, his money, and his medical background all gave him options. But he joined the Zouaves, an elite infantry force known for its colorful uniforms and reckless bravery.

On November 28, 1870, at the Battle of Beaune-la-Rolande, Bazille’s commanding officer was struck down. Bazille stepped forward to lead the attack. Within minutes, he too was hit and killed. He never saw the armistice. He never saw the first Impressionist exhibition in 1874. He never saw his friends vindicated by history.

The Spirit of Impressionism

Bazille left fewer than sixty known canvases. That small number alone ensures his reputation will never match Monet’s or Renoir’s. Yet the works he did leave offer glimpses of a painter who might have been one of the movement’s greats. He had both vision and means — a rare combination in the avant-garde world.

Without him, the Impressionists lost not only a friend but also a stabilizing force. Bazille’s studios had been safe havens, his purchases financial lifelines, his company a source of encouragement. Monet and Renoir were devastated by his death, and for years afterward, they spoke of him with undiminished grief.

Art historians often speculate about what role he might have played had he lived. Perhaps he would have anchored Impressionism more firmly in the Paris art establishment, or perhaps his money and position would have shielded his friends from years of ridicule. We can only guess.

Remembering Bazille

Today, Bazille’s paintings hang in the Musée d’Orsay in Paris, the Musée Fabre in Montpellier, and, in the United States, the National Gallery of Art, the Metropolitan Museum of Art, and the Art Institute of Chicago. To see them is to feel both promise and loss. His canvases are alive with sunlight and color, and they hint at the career that never was.

Bazille reminds us that history is shaped not just by the titans who endure but also by the voices cut short. Impressionism survived and flourished without him, but it was poorer for his absence. In a way, every bright patch of sunlight in Monet’s gardens or flashing dress in Renoir’s dance halls carries a trace of the young man who painted light before the movement even had a name — and who never lived to see it shine.

Bread and Circuses: From Ancient Rome to Modern America

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously desires for just two things: bread and circuses.”

Nearly 2,000 years ago, Roman satirist Juvenal penned one of history’s most enduring political observations: “Two things only the people anxiously desire — bread and circuses.” Writing around 100 CE in his Satire X, Juvenal wasn’t celebrating this phenomenon—he was lamenting it. The poet watched as Roman citizens traded their political engagement for free grain and spectacular entertainment, becoming passive spectators rather than active participants in public life. The phrase has endured for nearly two millennia as shorthand for a troubling political dynamic: entertainment and consumption replacing civic engagement and accountability.

The Roman Warning

Juvenal’s critique came at a pivotal moment in Roman history. The republic had collapsed, and emperors like Augustus had systematically dismantled democratic institutions. Rather than revolt, Roman citizens seemed content as long as the government provided basic sustenance (the grain dole called annona) and elaborate spectacles at venues like the Colosseum. Political participation withered as people focused on immediate pleasures rather than long-term civic responsibilities.

The strategy worked brilliantly for Roman rulers. Keep the masses fed and entertained, and they won’t question your authority or demand meaningful representation. It was political control through distraction—a form of soft authoritarianism that maintained order without overt oppression.  The policy was effective in the short term—peace in the streets and loyalty to the emperors—but disastrous over time. Rome’s population became disengaged from politics, while real power consolidated in the hands of a few.

Modern American Parallels

Fast-forward to contemporary America, and Juvenal’s observation feels uncomfortably relevant. While we don’t have gladiatorial games, we do have our own version of “circuses”—professional sports, reality TV, social media feeds, and celebrity culture that dominate public attention. These aren’t inherently problematic, but they become concerning when they crowd out civic engagement.

Our modern “bread” takes various forms: government assistance programs, subsidies, and economic policies designed to maintain consumer spending. We are saturated with cheap goods, instant delivery services, and mass consumerism. For many, economic struggles are temporarily softened by accessible consumption, from fast food to online shopping. Yet material comfort often masks deeper inequalities and systemic challenges—wage stagnation, healthcare costs, and mounting national debt. These programs often serve legitimate purposes, but they can also function as political tools to maintain public satisfaction and suppress dissent.

Consider how political campaigns increasingly focus on entertainment value rather than substantive policy debates. Politicians hire social media managers and appear on talk shows, understanding that capturing attention often matters more than presenting coherent governance plans. Meanwhile, voter turnout for local elections—where citizens have the most direct impact—remains dismally low.

The Distraction Economy

Perhaps most striking is how our information landscape mirrors Roman spectacles. We’re bombarded with sensational news, viral content, and manufactured controversies that generate strong emotional reactions but little productive action. Complex policy issues get reduced to soundbites and memes, making genuine democratic deliberation increasingly difficult.

Social media algorithms are specifically optimized for engagement, not enlightenment. They feed us content designed to provoke reactions—anger, outrage, schadenfreude—rather than encourage thoughtful consideration of difficult issues. This creates a population that feels politically engaged through constant consumption of political content while remaining largely passive in actual civic participation.

The danger of “bread and circuses” in modern America lies in apathy. When civic participation declines, voter turnout falls, and policy debates get reduced to simplistic slogans, elites face less scrutiny. The result is a weakened democracy, vulnerable to manipulation and short-term thinking.

Breaking the Cycle

Juvenal’s warning doesn’t mean we should abandon entertainment or social programs. Rather, it suggests we need intentional balance. Democratic societies thrive when citizens remain actively engaged in governance beyond just voting every few years.

This means staying informed about local issues, attending town halls, contacting representatives, and participating in community organizations. It means choosing substance over spectacle and long-term thinking over immediate gratification.

The Roman Republic fell partly because its citizens stopped paying attention to governance. Juvenal’s “bread and circuses” reminds us that democracy requires constant vigilance—and that comfortable distraction can be freedom’s most seductive enemy.

The Communist Dream vs. Stalinist Reality: A Tale of Two Visions

Recently, I’ve been looking at various political philosophies. I’ve written about fascism, totalitarianism, authoritarianism, autocracy and kleptocracy. In this post I’m going to look at theoretical communism versus the reality of Stalinist communism, and I would be remiss if I didn’t at least briefly mention oligarchy as currently practiced in Russia.

The gap between Karl Marx’s theoretical vision of communism and its implementation under Joseph Stalin’s leadership in the Soviet Union represents one of history’s most significant divergences between ideological theory and political practice. While both claimed the same ultimate goal—a classless, stateless society—their approaches and outcomes differed in fundamental ways that continue to shape our world today: one promised a workers’ utopia; the other produced a brutal dictatorship.

The Marxist Vision

Marx envisioned communism as the natural culmination of historical progress, emerging from the inherent conflicts within capitalism. In his theoretical framework, the working class (proletariat) would eventually overthrow the capitalist system through revolution, leading to a transitional socialist phase before achieving true communism. This final stage would be characterized by the absence of social classes, of private property and private ownership of the means of production, and, ultimately, of the state itself.

Central to Marx’s concept was the idea that communism would emerge from highly developed capitalist societies where industrial production had reached its peak. He believed that the abundance created by advanced capitalism would make scarcity obsolete, allowing society to operate according to the principle “from each according to his ability, to each according to his needs.” The state, having served its purpose as an instrument of class rule, would simply “wither away” as class distinctions disappeared.

Marx also emphasized that the transition to communism would be an international phenomenon. He argued that capitalism was inherently global, and therefore its replacement would necessarily be worldwide. The famous rallying cry “Workers of the world, unite!” reflected this internationalist perspective, suggesting that communist revolution would spread across national boundaries as workers recognized their common interests.

The Stalinist Implementation

Vladimir Lenin took firm control of Russia following the revolution in 1917 and oversaw the creation of a state characterized by centralization, suppression of opposition parties, and the establishment of the Cheka (secret police) to enforce party rule. Economically, Lenin’s government shifted from War Communism (state control of production during the civil war) to the New Economic Policy (NEP) in 1921, which allowed limited private trade and small-scale capitalism to stabilize the economy. The new state formally became the Union of Soviet Socialist Republics in 1922. This period laid the groundwork for the highly centralized, totalitarian state under Stalin that followed Lenin’s death in 1924.

Stalin’s approach to building communism in the Soviet Union diverged sharply from Marx’s theoretical blueprint. Rather than emerging from advanced capitalism, Stalin attempted to construct socialism in a largely agricultural society that had barely begun industrialization. This fundamental difference in starting conditions shaped every aspect of the Soviet experiment.

Instead of the gradual withering away of the state, Stalin presided over an unprecedented expansion of state power. The Soviet government under his leadership controlled virtually every aspect of economic and social life, from industrial production to agricultural collectivization to cultural expression. The state became not a temporary tool for managing the transition to communism, but a permanent and increasingly powerful institution that dominated all aspects of society. By the early 1930s, Joseph Stalin had centralized all power in his own hands, sidelining collective decision-making bodies such as the Politburo and the soviets.

Marx emphasized rule by the proletariat, with power belonging to all people equally. Stalin instead fostered a cult of personality through relentless propaganda. His image appeared on posters, on statues, and in schools. History books were rewritten to credit him for Soviet successes—often erasing Lenin, Trotsky, or others. He was referred to as the “Father of Nations,” “Brilliant Genius,” and “Great Leader.” Loyalty to Stalin became more important than loyalty to the Communist Party or its ideals. The government and the economy operated at his personal direction, enforced by the secret police, censorship, executions, and mass purges of dissidents.

Stalin implemented a command economy, in which the government or central authority makes all major decisions about production, investment, pricing, and the allocation of resources, rather than leaving those choices to market forces. In this system, planners typically set production targets, control industries, and determine what goods and services will be available, often with the goal of achieving social or political objectives such as central control and rapid industrialization. This is the direct opposite of the voluntary cooperation Marx had envisioned. The forced collectivization of peasants onto government farms, rapid industrialization through five-year plans, and the use of prison labor in gulags represented a top-down model of development that contradicted Marx’s emphasis on worker empowerment and democratic participation.

Where Marx emphasized emancipation and freedom for workers, Stalinist policies involved widespread repression, political purges, forced labor camps, and censorship. Most notable is the period that came to be known as the “Great Purge,” also called the “Great Terror,” a campaign of political repression between 1936 and 1938. It involved widespread arrests, forced confessions, show trials, executions, and imprisonment in labor camps (the Gulag system). Stalin accused perceived political rivals, military leaders, intellectuals, and ordinary citizens of being disloyal or conducting “counter-revolutionary” activities. It is estimated that about 700,000 people were executed by firing squad after being branded “enemies of the people” in show trials or secret proceedings.  Another 1.5 to 2 million people were arrested and sent to Gulag labor camps, prisons, or exile. Many died from overwork, malnutrition, disease, or harsh conditions.

Perhaps most significantly, Stalin abandoned Marx’s internationalist vision in favor of “socialism in one country.” This doctrine, developed in the 1920s, argued that the Soviet Union could build socialism independently of worldwide revolution. This shift not only contradicted Marx’s theoretical framework but also led to policies that prioritized Soviet national interests over international worker solidarity.

Key Contradictions

The differences between Marxist theory and Stalinist practice created several fundamental contradictions. Where Marx predicted the elimination of social classes, Stalin’s Soviet Union developed a rigid hierarchy with the Communist Party elite at the top, followed by technical specialists, workers, and peasants. This new class structure, while different from capitalist society, still involved significant inequalities in power, privilege, and access to resources.

Marx’s vision of worker control over production stood in stark contrast to Stalin’s centralized command economy. Rather than workers democratically managing their workplaces, Soviet workers found themselves subject to increasingly detailed state control over their labor. The factory became less a site of worker empowerment than a component in a vast state machine directed from Moscow.

The treatment of dissent also revealed fundamental differences. Marx believed that communism would eliminate the need for political repression as class conflicts disappeared. Stalin’s regime, however, relied extensively on surveillance, censorship, and violent suppression of opposition. The extensive use of terror against both perceived enemies and ordinary citizens contradicted Marx’s vision of a society based on cooperation and mutual benefit.

Modern Russia

At this point, I want to mention something about modern Russia and its current governmental and economic situation since the breakup of the Soviet Union.

An oligarchy is a form of government where power rests in the hands of a small number of people. These individuals typically come from similar backgrounds – they might be distinguished by wealth, family ties, education, corporate control, military influence, or religious authority. The word comes from the Greek “oligarkhia,” meaning “rule by few.” In an oligarchy, this small group makes the major political and economic decisions that affect the entire population, often prioritizing their own interests over those of the broader society.

Modern Russia’s economy is often described as having oligarchic features because a relatively small group of wealthy business leaders—many of whom made their fortunes during the chaotic privatization of the 1990s—maintain outsized influence over key industries like energy, banking, and natural resources. While Russia is technically a mixed economy with both private and state involvement, political connections determine who gains access to wealth and power. This creates a system where economic opportunity is concentrated among elites closely tied to the Kremlin, most closely resembling an oligarchy.

Historical Context and Consequences

Understanding the differences between Marxist theory and Stalinist implementation requires considering the historical context in which Stalin operated. The Soviet Union faced external threats, internal resistance, and the enormous challenge of rapid modernization. Stalin’s supporters argued that harsh measures were necessary to defend the revolution and build industrial capacity quickly enough to survive in a hostile international environment.

Critics, however, contend that Stalin’s methods created a system that was fundamentally incompatible with Marx’s vision of human liberation. The concentration of power in a single party—let alone a single person—combined with the suppression of democratic institutions and the extensive use of violence and coercion, demonstrates that Stalinist practice moved away from, rather than toward, Marx’s goals.

The legacy of this divergence continues to influence contemporary political debates. Supporters of Marxist theory often argue that Stalin’s failures demonstrate the dangers of abandoning egalitarian principles and internationalist perspectives. Meanwhile, critics of communism point to the Soviet experience as evidence that Marxist ideals are inherently unrealistic or even dangerous.

This comparison reveals the complex relationship between political theory and practice, highlighting how historical circumstances, leadership decisions, and practical constraints can shape the implementation of ideological visions in ways that may fundamentally alter their character and outcomes.

Why History Still Matters: Lessons for Today’s World

Recently I was reading an article about the poor state of historical knowledge in the United States, and I decided to repost my first article from when I started blogging almost five years ago.  It seems very little has changed.

“Study the past if you would define the future.” —Confucius. I particularly like this quotation. It is similar to the more modern version: those who don’t learn from the past are doomed to repeat it. However, I much prefer the former because it seems to be more in the form of advice or instruction. The latter seems to be more of a dire warning. Though I suspect, given the current state of the world, a dire warning is in order.

But regardless of whether it comes in the form of advice or warning, people today do not seem to appreciate the importance of studying the past. The knowledge of history in our country is woeful. The lack of emphasis on the teaching of history in general, and American history specifically, is shameful. While it is tempting to blame it on a lack of interest on the part of the younger generation, I find people my own age also have very little appreciation of the events that shaped our nation, the world, and their lives. Without this understanding, how can we evaluate what is currently happening and understand what we must do to come together as a nation and as a world?

I have always found history to be a fascinating subject. Biographies and nonfiction historical books remain among my favorite reading. In college I always added one or two history courses every semester to raise my grade point average. Even in college I found it strange that many of my friends hated history courses and took only the minimum. At the time, I didn’t realize just how serious this lack of historical perspective was to become.

Several years ago, I became aware of just how little historical knowledge most people possess. At the time, Jay Leno was still doing his late-night show, and he had a segment called “Jaywalking.” During that segment he would ask people in the street questions that were somewhat obscure and to which he could expect to get unusual and generally humorous answers. On one show, on the 4th of July, he asked people “From what country did the United States declare independence on the 4th of July?” and of course no one knew the answer.

My first response was that he must have gone through dozens of people to find the four or five who did not know the answer to his question. The next day at work, the 5th of July, I decided to ask several people, all of whom were college graduates, the same question. I got not one single correct answer, although one person at least admitted, “I think I should know this.” When I told my wife, a retired teacher, she wasn’t surprised. For a long time, she had been concerned about the lack of emphasis on social studies and the arts in school curriculums. I too was now becoming seriously concerned about the direction of education in our country.

A lot of people are probably thinking “So what, who really cares what a bunch of dead people did 250 years ago?” If we don’t know what they did and why they did it, how can we understand its relevance today? We have no way to judge what actions may support the best interests of society and what will ultimately be detrimental.

Failure to learn from and understand the past results in a me-centric view of everything. If you fail to understand how and why things have developed, then you certainly cannot understand what the best course forward will be. Attempting to judge all people and events of the past through your own personal prejudices leads only to continued and worsening conflict.

If you study the past, you will see that there has never been general agreement on anything. There were many disagreements, debates, and even a civil war over differences of opinion. It helps us to understand that there are no perfect people who always do everything the right way and at the right time. It helps us to appreciate the good that people do while understanding the human weaknesses that led to the things we consider faults today. In other words, we cannot expect anyone to be a 100% perfect person. They may have accomplished many good and meaningful things, and those good and meaningful things should not be discarded because the person was also a human being with human flaws.

Understanding the past does not mean approving of everything that occurred, but neither does it mean condemning everything that does not fit twenty-first-century mores. Only by recognizing this and seeing what led to the disasters of the past can we hope to avoid repeating the worst aspects of our history. History teaches lessons in compromise, involvement, and understanding. Failure to recognize that leads to strident argument and an unwillingness to cooperate with those who may differ in even the slightest way. Rather than creating the hoped-for perfect society, it simply leads to a new set of problems and a new group of grievances.

In sum, failure to study history is failure to prepare for the future. We owe it to ourselves and future generations to understand where we came from and how we can best prepare our country and the world we leave for them. They deserve nothing less than a full understanding of the past and a rational way forward. 

I’m going to close with a quote I recently came across:

  “Indifference to history isn’t just ignorant, it’s rude.”

                        —David McCullough

Palpitations Explained: When It’s Normal and When It’s an Emergency

That sudden awareness of your heart beating faster, skipping a beat, or pounding in your chest can be unsettling. You’re experiencing what doctors call palpitations, and while they might feel alarming, they’re actually quite common. Understanding what causes them, when to worry, and how they’re treated can help put your mind at ease.

What Are Heart Palpitations?

Heart palpitations are essentially your heightened awareness of your own heartbeat. Normally, you don’t notice your heart beating throughout the day. When palpitations occur, you suddenly become conscious of this usually automatic process. People describe the sensation in various ways: your heart might feel like it’s racing, pounding, fluttering, flip-flopping, or skipping beats entirely.

You can feel palpitations in different locations too. While most people notice them in their chest, you might also feel them in your throat or neck. Some people even hear their heartbeat, especially when lying in bed at night in a quiet room.

Common Causes of Palpitations

The most frequent trigger for palpitations is anxiety or stress. When you’re worried, scared, or experiencing a panic attack, your body’s fight-or-flight response kicks in, causing your heart to beat faster and harder. But anxiety isn’t the only culprit.

Lifestyle factors play a significant role. Caffeine from coffee, tea, or energy drinks can trigger palpitations, as can alcohol and spicy foods. Many people notice palpitations after eating large, heavy meals rich in carbohydrates or sugar. Smoking and recreational drugs like cocaine or amphetamines are also common triggers.

Hormonal changes during pregnancy, menstruation, or menopause frequently cause palpitations. During pregnancy, your body produces more blood to support your baby, which can make your heart work harder and create noticeable palpitations.

Certain medications, including asthma inhalers, decongestants, thyroid medications, corticosteroids, and some blood pressure drugs, may cause palpitations as a side effect.

Medical conditions can also be responsible. An overactive thyroid gland speeds up your metabolism and heart rate. Low blood sugar, anemia, dehydration, imbalances of potassium or magnesium, and fever can all trigger palpitations.

Arrhythmias are abnormal heart rhythms that can be perceived as palpitations. Common types include atrial fibrillation (an irregular, often rapid heart rate), commonly known as AFib; ventricular tachycardia, or v-tach (a rapid heart rate that starts in the lower chambers of the heart); and premature ventricular contractions (extra heartbeats), sometimes called PVCs. Some arrhythmias, such as PVCs, are harmless, while others can increase the risk of stroke, heart failure, or sudden cardiac arrest.

Palpitations can be a sign of more serious heart disease, such as coronary artery disease, cardiomyopathy, or heart valve problems. These often come with other symptoms such as chest pain, dizziness, or shortness of breath.

Recognizing the Symptoms

Beyond the basic awareness of your heartbeat, palpitations can come with additional sensations. You might feel like your heart is beating too fast or too hard. Some people describe a fluttering sensation, like butterflies in their chest. Others feel like their heart is skipping beats or adding extra ones.

The timing and triggers of your palpitations can provide important clues. Some people only notice them at night when lying down, simply because there are fewer distractions. Others experience them after exercise, during stressful situations, or following meals.

Most palpitations are brief, lasting just seconds to a few minutes. However, if they persist for longer periods or occur frequently throughout the day, they warrant medical attention.

How Palpitations Are Diagnosed

When you visit your doctor about palpitations, they’ll start with a detailed conversation about your symptoms. They’ll ask you to describe exactly what you feel, when the palpitations occur, and what might trigger them. Your medical history, including any heart conditions, medications, and family history of heart problems, is crucial information.

The physical examination includes listening to your heart and lungs with a stethoscope, checking your blood pressure, and looking for signs of other conditions that might cause palpitations, such as an enlarged thyroid gland.

The most important initial test is an electrocardiogram (ECG or EKG), which records your heart’s electrical activity. This painless test can detect irregular heart rhythms if they occur during the recording. However, since palpitations often come and go, you might not experience them during the brief ECG.

For this reason, doctors often recommend longer-term monitoring. A Holter monitor is a portable device you wear for 24 to 48 hours that continuously records your heart rhythm during normal activities. Event monitors can be worn for weeks or months, and you activate them when you feel symptoms.

Blood tests can check for conditions like anemia, thyroid problems, or electrolyte imbalances that might trigger palpitations. An echocardiogram, which uses sound waves to create images of your heart, can reveal structural problems.

Benign vs. Dangerous Palpitations

Here’s the good news: most palpitations are benign and don’t indicate serious heart problems. Research shows that about 16% of people see their primary care doctor for palpitations, but the vast majority have harmless causes.

Benign palpitations typically occur in people with normal heart function and no structural heart disease. They’re often triggered by identifiable factors like stress, caffeine, or hormonal changes. These palpitations usually last only seconds to minutes and resolve on their own.

However, certain warning signs suggest palpitations might indicate a more serious condition. Seek immediate medical attention if palpitations occur with chest pain, severe shortness of breath, dizziness, fainting, or near-fainting episodes. These symptoms could indicate dangerous heart rhythms that affect your heart’s ability to pump blood effectively.

People with existing heart disease, previous heart attacks, or significant risk factors for heart disease should take palpitations more seriously. In these cases, palpitations might signal a dangerous arrhythmia that requires prompt treatment.

The pattern of palpitations can also provide clues. Sustained episodes lasting hours, very frequent daily occurrences, or palpitations that worsen over time are more concerning than occasional brief episodes.

Treatment and Management Options

Treatment for palpitations depends entirely on their underlying cause. When palpitations are benign and caused by lifestyle factors, the focus is on avoiding triggers and making healthy changes.

Stress management is often the most effective intervention. Techniques like deep breathing exercises, meditation, yoga, or regular counseling can significantly reduce stress-related palpitations. Regular exercise, while it might temporarily increase your heart rate, actually helps reduce overall palpitation frequency by improving cardiovascular fitness and stress resilience.

Dietary modifications can be very effective. Reducing or eliminating caffeine, limiting alcohol consumption, and avoiding large heavy meals can prevent many episodes. Staying well-hydrated and maintaining stable blood sugar levels through regular, balanced meals also helps.

For palpitations caused by medical conditions, treating the underlying problem usually resolves the symptom. This might involve thyroid medication for hyperthyroidism, iron supplements for anemia, or adjusting medications that trigger palpitations.

When palpitations are caused by heart rhythm disorders, more specific treatments may be necessary. Medications called beta-blockers can slow heart rate and reduce palpitation frequency. For more serious arrhythmias, doctors might recommend procedures like catheter ablation, which uses targeted energy to correct abnormal electrical pathways in the heart.

Some people benefit from devices like pacemakers (for slow heart rhythms) or implantable cardioverter defibrillators (for dangerous fast rhythms). However, these interventions are reserved for serious heart conditions, not typical benign palpitations.

While most current treatments focus on medications and procedures, emerging technologies like smartphone monitoring and wearable devices may play larger roles in future palpitation management.

When to Seek Help

Most palpitations don’t require emergency care, but certain situations demand immediate attention. Call 911 if palpitations occur with chest pain or pressure, severe shortness of breath, fainting, severe dizziness, a pulse that feels very fast or erratic, or any other signs that might indicate a heart attack.

Schedule a regular appointment with your doctor if you experience frequent palpitations, if they’re interfering with your daily activities, or if you have risk factors for heart disease. Even if your palpitations turn out to be benign, getting proper evaluation provides peace of mind and ensures you’re not missing any underlying conditions.

Remember, while palpitations can feel frightening, they’re usually harmless. Recognizing the difference between harmless triggers and signs of more serious conditions, understanding their causes, and knowing when to seek help are the keys to managing your heart health.

The Erosion of Decorum in Public Discourse

The nature of public debate has undergone a dramatic change in recent years. Civility and reasoned discourse—once the hallmarks of political and social commentary—have given way to something closer to a verbal battleground.

Today’s public exchanges are increasingly defined by inflammatory rhetoric, personal attacks, and an abandonment of long-held norms of decorum.

From Respectful Dialogue to Profanity-Laced Exchanges

The decline is nowhere more evident than in the normalization of profanity. What was once limited to private conversations or edgy entertainment now spills freely across digital platforms.

Social media comment threads, online forums, and even professional publications regularly feature language that, not long ago, would have been considered unacceptable in public life. This shift reflects a broader cultural preference for emotional expression over reasoned argument.

Substack and the Temptation of Provocation

Even Substack, often positioned as a refuge for serious, long-form writing, has not been immune.

When I first joined the platform, I was drawn by its promise of thoughtful essays outside the noise of traditional media. Yet I’ve noticed a sharp increase in profanity, personal insults, and derogatory comments—paired with a noticeable decline in reasoned discussion.

False claims, easily disproven with a quick fact-check, are repeated and restacked with little regard for accuracy. The subscription model, rewarding engagement over editorial oversight, can inadvertently encourage more inflammatory tones in order to hold readers’ attention.

The Meme Problem

Memes have only accelerated this decline. And here, I’ll admit my own complicity: I’ve created and shared memes to make ironic or satirical points. But over time, irony can blur into sarcasm, and satire into insult.

Memes thrive on simplification and emotional impact. Complex policies collapse into pithy slogans and mocking images. They’re shareable, entertaining, and easy—but rarely conducive to real understanding.

The result? Substantive debate gets replaced by fast, shallow exchanges of oversimplified (and often misleading) talking points.

From Essays to Punchlines

Essays once demanded careful argument: claims supported by evidence, acknowledgment of counterpoints, and respect for nuance. Memes demand only a laugh—or a groan.

Worse, their viral nature ensures that inflammatory or misleading content spreads faster than any correction ever could.

This isn’t just an aesthetic concern. When communication prioritizes winning over understanding, democracy suffers. Citizens grow less equipped to grapple with complex issues, and leaders find it easier to appeal to emotion rather than present workable solutions.

Can We Reverse the Trend?

The trajectory is worrisome—but not irreversible.

  • Platforms could design features that reward thoughtful engagement instead of amplifying outrage.
  • Educational institutions could recommit to teaching critical thinking and civil debate.
  • Individuals can model better behavior, remembering that persuasion usually requires respect.

Still, if I’m honest, I’m not optimistic. Too many incentives—from clicks to cash—push the culture of discourse in the opposite direction.

Final Thoughts

The health of our public discourse is the health of democracy itself. As writers, readers, and citizens, we carry responsibility for raising the standard.

Our words shape not only our immediate conversations but also the norms of civic life for generations to come. The choice is ours: continue down the path of hostility and simplification—or rebuild the habits of respect and reason.

I hope we choose the latter. But hope, at this moment, feels fragile.

The First Amphibious Landing

The Continental Marines at Nassau

When the Second Continental Congress authorized the creation of the Continental Marines on November 10, 1775, few could foresee their pivotal role in orchestrating North America’s first amphibious assault less than four months later.  The operation against Nassau, on New Providence Island in the Bahamas, was born of necessity, marked by improvisation, and ultimately set the tone for Marine Corps operations—an audacious legacy that endures to this day.

Origins: Gunpowder Desperation and Strategic Vision

The American Revolution’s early years were marked by chronic shortages, especially of gunpowder. After the British seized stores destined for the Patriot cause, intelligence uncovered that significant quantities were stockpiled at Nassau. The Continental Congress approached this challenge with typical Revolutionary War creativity—they would use their brand-new Navy and even newer Marines to solve an Army problem. The Congress’ official instructions to Commodore Esek Hopkins focused on patrolling the Virginia and Carolina coasts, but “secret orders” directed attention to the Bahamas, setting in motion a bold plan to directly address the fledgling army’s supply crisis.

Organization: The Making of an Amphibious Battalion

With barely three months’ existence, the Continental Marines had hastily raised five companies totaling around 300 men. Captain Samuel Nicholas, commissioned as the first Marine officer, oversaw their training and organization in Philadelphia. Their equipment was uneven—many wore civilian garb rather than uniforms and carried whatever muskets and bayonets were available. The uniform regulations specifying the now-famous green coats with white facings were not promulgated until several months after the raid was over.

The Voyage South: Challenges and Preparation

Hopkins’ fleet consisted of the ships Alfred, Hornet, Wasp, Fly, Andrew Doria, Cabot, Providence, and Columbus. In addition to ships’ crews, the fleet carried more than 200 Continental Marines under the command of Captain Nicholas. The expedition began inauspiciously on January 4, 1776, when the fleet attempted to leave Philadelphia but became trapped by ice in the Delaware River for six weeks.

When they finally reached the Atlantic on February 17, 1776, the small fleet faced additional challenges. Disease found its way on board most of the ships. Smallpox was a huge concern and was reported on at least four ships.

The fleet’s journey to the Caribbean took nearly two weeks of sailing through challenging winter conditions. Despite the hardships, Hopkins maintained the element of surprise—British intelligence had detected American naval preparations but assumed the fleet was bound for New York or Boston, not the distant Bahamas.

Implementation: Amphibious Innovation at Nassau

The element of surprise was initially lost when the fleet’s approach triggered alarm at Nassau. Plans to storm the stronger Fort Nassau dissolved, and Hopkins convened a council to identify a new landing point. A revised strategy saw about 230 Marines and 50 sailors, led by Captain Nicholas, land from longboats two miles east of the weaker Fort Montagu on March 3, 1776. They wore a patchwork of civilian clothes and white breeches—some men had managed to find green shirts as a form of identification. They set out marching toward the fort armed with muskets and bayonets, looking perhaps more like pirates than soldiers. 

Their advance was met with only token resistance. Outnumbered and ill-prepared, local militia withdrew as Nicholas’s men captured Fort Montagu in what historian Edwin Simmons called a “battle as bemused as it was bloodless.”

Nicholas decided to wait until morning to advance on the town.  His decision was tactically sound given the circumstances—he’d lost surprise, did not know the enemy’s strength, was operating in unknown terrain, night was falling, and he lacked naval support. However, this prudent delay gave Governor Browne time to spirit away over 80% of Nassau’s gunpowder stores, turning what could have been a complete strategic victory into a partial success. This incident highlights the tension between tactical prudence and strategic urgency that was destined to become a recurring theme in amphibious warfare.

The next day the Americans took Fort Nassau and arrested the Governor, Montfort Browne. Browne had already sent most of the coveted gunpowder on to St. Augustine, Florida, the night before. Despite this, American forces seized cannons, shells, and other military stores before occupying Nassau for nearly two weeks.

Marine discipline and flexibility were evident as they adapted the landing plan, pushed their operations ashore, and began building the Corps’ amphibious reputation. The fleet departed on March 17, but not before stripping Nassau and its forts of anything militarily useful.

Aftermath: Growing Pains and Enduring Lessons

Though the mission failed in its primary objective of securing a cache of gunpowder, its operational successes far outweighed the losses. The Marines returned with large quantities of artillery, munitions, and several recaptured vessels. On the return leg, they faced and fought (though did not defeat) HMS Glasgow; the squadron returned to New England by April 8, with several casualties including the first Marine officer killed in action, Lt. John Fitzpatrick.

Controversy followed—Hopkins was censured for failing to engage British forces as directed in his official orders.  Nicholas was promoted to major and tasked with raising additional Marine companies for new frigates then under construction. These developments reflected both the lessons learned and the growing recognition of the value of the Marine force in expeditionary operations ashore.

A second raid on Nassau by Continental Marines occurred from January 27–30, 1778, under Captain John Peck Rathbun. Marines and seamen landed covertly at midnight, quickly seizing Fort Nassau and liberating American prisoners held by the British. The raiders proceeded to capture five anchored vessels, dismantled Fort Montagu, spiked the guns, and loaded 1,600 lbs of captured gunpowder before departing. This bold operation marked the first time the Stars and Stripes flew over a foreign fort and showcased the resourcefulness of American forces, who managed to strike a valuable blow against British power in the Caribbean without suffering casualties.

Long-Term Implications for the United States Marine Corps

The Nassau operation set powerful precedents:

  • Amphibious Warfare Doctrine: This was the Marines’ first organized amphibious landing, shaping the Corps’ future focus on rapid deployment from sea to shore, a hallmark that continues in modern doctrine.  At the time it would likely have been called simply a naval landing; the word “amphibious” did not come into use in this context until the 1930s.
  • Adaptability Under Fire: The improvisational tactics used at Nassau foreshadowed the Corps’ reputation for flexibility and mission focus.
  • Naval Integration: Joint operations with the Navy not only succeeded tactically, but helped institutionalize the Marine-Navy partnership, with Marines serving as shipboard security, landing parties, and naval infantry.
  • Legacy of Boldness: This first operation established a “first-in” ethos and a culture embracing challenge and audacity, foundational principles in Marine culture.

After the war, the Continental Marines disbanded, only to be re-established in 1798. Yet the legacy of Nassau endured. The spirit of “Semper Fidelis”—always faithful—can be traced to that March 1776 assault, when the odds seemed long and the stakes critical.

Today’s United States Marine Corps draws a direct lineage from that small, ragtag battalion of Marines scrambling ashore at Nassau, forever entwining its identity with the promise, risk, and legacy of that first storied mission. Every modern Marine, stepping from ship to shore, walks in the footprints of Captain Samuel Nicholas and his men—soldiers of the sea whose boldness, improvisation, and teamwork have echoed across the centuries.

The Electoral College: Should America Go Popular?

Few topics in American politics generate as much perennial debate as the Electoral College. Every four years, calls to abolish it resurface—often with renewed vigor when the electoral vote winner loses the popular vote, as happened in 1824, 1876, 1888, 2000, and 2016. The proposal is to elect the president by a nationwide popular vote, just as we do governors and senators.

Why We Have an Electoral College

The Electoral College was a late-stage compromise at the Constitutional Convention of 1787. The framers were balancing multiple tensions:

  • Large vs. small states
  • Slave vs. free states
  • Congress choosing the president vs. direct election

Delegates feared that direct election by popular vote would favor populous states, allow urban centers to dominate rural areas, and encourage demagogues to campaign purely on popular passions. At the same time, they worried about giving Congress too much control over the executive branch.

The system for selecting the president—via the Electoral College—was partly designed to prevent direct popular influence. Its original intent, according to historians, was to empower electors (seen as more knowledgeable) and to ensure thoughtful deliberation in choosing the president, guarding against the masses being swayed by charm rather than substance.

Some delegates—like James Madison, James Wilson, and Gouverneur Morris—supported direct popular election of the president, while others, like Elbridge Gerry and Roger Sherman, explicitly voiced distrust in direct election of the president and believed ordinary voters lacked impartiality or sufficient knowledge. 

Institutional and political bargaining ultimately shaped the final structure. The delegates’ solution: each state gets electors equal to its total number of representatives and senators. The addition of two electors for the senators ensures that the small states remain, on a population basis, overrepresented in the Electoral College.

State legislatures determine how electors are chosen (eventually, every state moved to popular election). Most states now award all their electoral votes to the statewide popular vote winner—“winner-take-all.”

The Electoral College thus emerged not as anyone’s ideal system, but as a workable compromise that balanced competing regional interests, philosophical concerns about democracy, and the practical realities of governing a large, diverse republic in the 18th century.

Pros of Eliminating the Electoral College

Equal Weight for Every Vote

The most compelling argument for eliminating the Electoral College centers on democratic equality. Under the current electoral system, a vote in Wyoming carries nearly four times the weight of a vote in California when measured by electoral votes per capita. To put this in real numbers, Wyoming has about 193,000 people per electoral vote while California has about 718,000.  This mathematical reality means that some Americans’ voices count more than others in selecting their president, a principle that seems to contradict the foundational democratic ideal of “one person, one vote.”
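
For anyone who wants to check the arithmetic, here is a back-of-the-envelope sketch in Python. The population and elector figures are rounded approximations chosen to match the numbers above, not official census data.

    # Rough per-capita weight of an electoral vote (figures are approximate)
    populations = {"Wyoming": 580_000, "California": 38_800_000}
    electors = {"Wyoming": 3, "California": 54}

    weights = {state: populations[state] / electors[state] for state in populations}
    for state, people_per_vote in weights.items():
        print(f"{state}: about {people_per_vote:,.0f} people per electoral vote")

    # How much heavier is a Wyoming ballot than a California one?
    print(f"Ratio: {weights['California'] / weights['Wyoming']:.1f}x")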

A national popular vote would ensure that every American’s vote carries identical weight, regardless of geography. This approach would eliminate scenarios where candidates win the presidency while losing the popular vote. Such outcomes can undermine public confidence in democratic institutions and raise questions about the legitimacy of electoral results.

Reflects the Will of the Majority

Twice this century, in 2000 and 2016, the candidate with fewer total popular votes became president. While the framers accepted the possibility of divergence between the popular and electoral results, many modern Americans view such outcomes as undermining democratic legitimacy.

Encourages Nationwide Campaigning

Because many states are firmly “red” or “blue,” campaigns focus their energy on a handful of battleground states that could go either way—like Pennsylvania, Wisconsin, and Arizona. Under a popular vote, candidates would have an incentive to compete everywhere, because every additional vote counts the same regardless of location.

Simplifies the Process

The Electoral College system confuses many Americans and can seem archaic in the 21st century. A direct popular vote is straightforward and immediately understandable: the candidate who receives the most votes wins. This simplicity could increase public trust and participation in the democratic process.

Eliminates “Faithless Electors”

Although rare, faithless electors—those who cast electoral votes against their state’s popular choice—are possible under the current system. A direct election would remove this constitutional quirk.

Cons of Eliminating the Electoral College

Federalism Concerns

The United States is a union of states as well as a single nation. The Electoral College reinforces the role of states in presidential elections, reflecting their status as sovereign entities in certain respects. Abolishing it could be seen as eroding federalism by further centralizing power.

Risk of Regional Dominance

Opponents argue that without the Electoral College, candidates could focus disproportionately on high-population regions—California, Texas, Florida, New York—while ignoring rural states and smaller communities. Whether this would happen in practice is debated, but the perception of neglect could deepen regional divides.

Potential for Narrow-Margin Crises

In a popular vote system, a razor-thin national margin could force a nationwide recount. Under the Electoral College, disputes are typically contained within a state (e.g., Florida in 2000). A national recount would be a logistical and political nightmare.

Constitutional Hurdles

Abolishing the Electoral College requires a constitutional amendment—an extraordinarily high bar. That means approval by two-thirds of both houses of Congress and ratification by three-quarters of the states. Smaller states, which benefit from the Electoral College’s vote weighting, have little incentive to approve such a change.

Intermediate Options

Since abolishing the Electoral College outright is politically unlikely in the near term, reform advocates have proposed middle-ground solutions.

The National Popular Vote Interstate Compact (NPVIC)

The NPVIC is an agreement among states to award all their electoral votes to the national popular vote winner, but it only takes effect once states totaling at least 270 electoral votes join. As of 2025, 17 states plus D.C. (totaling 209 electoral votes) have joined. This approach sidesteps a constitutional amendment and achieves the functional equivalent of a popular vote, but it relies on states’ willingness to cede control over their electoral votes. It has also never been legally tested and would likely face court challenges. To me, the greatest drawback is that states could withdraw at any time; I can envision that, in a closely contested and contentious election, states unhappy with the national outcome would simply leave the compact.
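
The compact’s trigger rule is simple enough to express in a few lines. A minimal sketch in Python, using a deliberately incomplete, illustrative membership list rather than the actual roster:

    TRIGGER = 270  # electoral votes the members must jointly control

    def compact_active(members):
        # The pledge binds only once member states reach the threshold
        return sum(members.values()) >= TRIGGER

    # Illustrative subset of members, not the full 2025 roster
    members = {"California": 54, "New York": 28, "Illinois": 19}
    print(compact_active(members))  # False -- 101 votes is well short of 270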

Proportional Allocation of Electoral Votes

Instead of winner-take-all, states could allocate electoral votes proportionally to each candidate’s share of the statewide vote. Maine and Nebraska already use a variation of this idea, awarding some votes by congressional district.  In theory, this would reduce the impact of battleground states and increase the representation of minority views within states. But it could also increase the likelihood of no candidate reaching 270 electoral votes, thereby sending the election into the House of Representatives. And it still preserves the overrepresentation of smaller states because it retains the two electors for senators.

If electors are awarded proportionally based on the statewide vote, the totals will rarely divide into whole electors, and there is no constitutional provision for awarding partial electors. Some rounding rule would be required, and the resulting distortion would be most significant in states with only one or two representatives in the House.
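
To make the rounding problem concrete, here is a minimal sketch of one possible rounding rule, the largest-remainder method. The state, its three electors, and the vote totals are all hypothetical.

    # Largest-remainder allocation of a state's electors (hypothetical numbers)
    def allocate_electors(votes, seats):
        total = sum(votes.values())
        quotas = {p: seats * v / total for p, v in votes.items()}  # fractional entitlements
        awarded = {p: int(q) for p, q in quotas.items()}           # whole-number parts
        leftover = seats - sum(awarded.values())
        # Hand any remaining seats to the largest fractional remainders
        for p in sorted(quotas, key=lambda p: quotas[p] - awarded[p], reverse=True)[:leftover]:
            awarded[p] += 1
        return awarded

    votes = {"Party A": 52_000, "Party B": 48_000}  # hypothetical three-elector state
    print(allocate_electors(votes, 3))  # {'Party A': 2, 'Party B': 1}

Note the distortion: a 52–48 race becomes a 2–1 elector split. With only three or four electors to divide, rounding error of this size is unavoidable.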

If electors were awarded to the winner of each congressional district, this would encourage even more gerrymandering than we are currently seeing. Extreme gerrymandering could undermine any progress toward reflecting the popular vote, simply continuing the current mismatch of popular and electoral votes.

Gerrymandering is a political practice that involves manipulating the boundaries of electoral districts to benefit a particular party or group. It is nothing new in American politics, originating in the early 19th century.  The term “gerrymandering” was coined after an 1812 incident in Massachusetts, where Governor Elbridge Gerry signed a bill redrawing district lines to favor his party. One of the districts resembled a salamander in shape, inspiring the portmanteau “Gerry-mander” in a satirical cartoon by Elkanah Tisdale that helped popularize the term. It is interesting that, since the gerrymander favored the Democratic-Republican Party and the newspaper that published the cartoon supported the Federalist Party, the creature was drawn to look not like a cute salamander but more like an ominous dragon.

Bonus Electoral Votes for National Popular Vote Winner

A hybrid idea would keep the Electoral College but award a fixed number of bonus electors (say, 100) to the national popular vote winner. This would almost guarantee alignment between the popular and electoral results without abandoning the current structure.  This option maintains a state-based system and reduces the chance of a split result. But it would also require a constitutional amendment and add complexity that many voters may find confusing.
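
The arithmetic behind that “almost guarantee” is worth spelling out. A quick sketch, using the essay’s hypothetical figure of 100 bonus electors:

    # How 100 bonus electors shift the winning threshold (100 is hypothetical)
    STATE_ELECTORS = 538
    BONUS = 100

    total = STATE_ELECTORS + BONUS         # 638 electors in all
    majority = total // 2 + 1              # 320 needed to win
    needed_from_states = majority - BONUS  # 220 of 538 for the popular vote winner
    print(total, majority, needed_from_states)

With only 220 of the 538 state-awarded electors (about 41 percent) required, the popular vote winner could lose only in a historic electoral landslide for the other candidate.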

Feasibility of Change

Reforming or abolishing the Electoral College faces three main obstacles:

  • Constitutional Entrenchment – Article II and the 12th Amendment are clear about elector allocation. Full abolition would require one of the most difficult political feats in American governance—a constitutional amendment.
  • State Incentives – Smaller states and swing states have outsized influence under the current system. They are unlikely to support reforms that dilute their power.
  • Partisan Dynamics – Since recent Electoral College/popular vote splits have benefited Republicans, Democrats tend to favor reform, while Republicans tend to defend the status quo. That dynamic could shift if the pattern changes.

Conclusion

The Electoral College is both a relic of 18th-century compromises and a living feature of America’s federal structure. Its defenders argue that it protects smaller states, contains electoral disputes, and reinforces the states’ role in national governance. Its critics counter that it violates the principle of “one person, one vote” and distorts campaign priorities.

Abolishing it in favor of a direct popular vote would likely make presidential elections more democratic in the literal sense, but it would also raise questions about federalism, campaign strategy, and the handling of close results. The Electoral College preserves federalism and geographic balance but can produce outcomes that seem to contradict majority will.

Intermediate options like the NPVIC or proportional allocation may offer ways to mitigate the College’s most controversial effects without uprooting the constitutional framework but also face significant hurdles for implementation.

Whether reform happens will depend not just on the merits of the arguments, but on the political incentives of the states and the parties. Until those incentives shift, the Electoral College is likely to remain—imperfect, contentious, and uniquely American.
