Grumpy opinions about everything.

Category: History

Grumpy opinions about American history

Always Faithful: A Brief History of the Marine Corps Motto

When I started training as a Marine more than 50 years ago, one of the first things we were taught was the call and response “Semper Fi” followed quickly by “Do or Die”.  But to Marines, Semper Fi, Semper Fidelis—Always Faithful—is more than just a motto. It becomes a personal belief system, a statement of individual integrity, and a way of life.  Faithful to country, faithful to the Corps, faithful to fellow Marines, faithful to duty.  It reflects your faith in the Marine Corps and your fellow Marines.

How did Marines come to adopt this distinctly non-martial motto?  Other, more military-sounding mottos and nicknames come to mind: “Devil Dogs”, “First to Fight”, and “Leathernecks”.  But Semper Fidelis has become the way Marines see themselves, so much so that their greeting to one another is “Semper Fi”.  The same ethos is embodied in an unofficial Marine Corps motto, “No Man Left Behind”.

But what is the origin of this motto that seems to sum up the entire philosophy of the Marine Corps?

The United States Marine Corps is known for its discipline, dedication, and fierce loyalty, qualities that are symbolized by Semper Fidelis. Translated from Latin, the phrase means “Always Faithful.” But like many traditions within the military, the motto is rooted in a rich history that stretches back hundreds of years.

The Marine Corps was established in 1775 as the Continental Marines, but the famous motto did not appear until more than a century later. By the early 19th century, several mottos had been associated with the Marines, including “Fortitudine” (With Fortitude) and “By Sea and by Land.” While these phrases captured elements of the Marines’ mission, they lacked the enduring emotional impact that would ultimately come with Semper Fidelis.

It was in 1883 that the motto was formally adopted under the leadership of the 8th Commandant, Colonel Charles McCawley. Colonel McCawley likely chose that motto because it embodies the values of loyalty, faithfulness and dedication that he believed should define every Marine.  Unfortunately, we will never know his exact reason for choosing this specific motto because he did not leave any documentation about his thought process.  Regardless, from that point on, the motto became inseparable from the identity of the Corps.

The phrase “Semper Fidelis” has much older origins than its Marine Corps adoption. It’s believed to have originated from phrases used by senators in ancient Rome, with the earliest recorded use as a motto dating back to the French town of Abbeville in 1369. The phrase has been used by various European families since the 16th century, and possibly as early as the 13th century.

The earliest recorded military use was by the Duke of Beaufort’s Regiment of Foot, raised in southwestern England in 1685. The motto also has connections to Irish, Scottish, and English nobility, as well as 17th-century European military units, some of whose members may have emigrated to the American colonies in the 1690s.

The choice of the Latin phrase by Colonel McCawley was likely deliberate. Latin carries with it a sense of permanence and tradition, and its concise wording communicated volumes in only two words. “Always Faithful” perfectly captured the bond that must exist between Marines and the responsibilities they shoulder. Marines are expected to remain faithful to the mission, to their comrades in arms, and to the United States, regardless of the personal cost. It is this idea of unshakable fidelity that has come to define what it means to wear the Eagle, Globe, and Anchor.

Since its adoption, Semper Fidelis has carried Marines through every conflict the United States has faced. From the battlefields of World War I, where Marines earned the name “Devil Dogs,” to the grueling island campaigns of the Pacific in World War II, to the frozen battlefields of Korea, to the steaming jungles of Vietnam, Marines have demonstrated again and again what it means to be “Always Faithful.” In modern times, whether in Iraq, Afghanistan, or in humanitarian missions across the globe, this motto continues to serve as a reminder of the Corps’ unwavering commitment.

The phrase has also influenced the broader culture of the Marines, inspiring the title of the official Marine Corps march, “Semper Fidelis,” composed by John Philip Sousa in 1888, which remains a powerful symbol of pride and esprit de corps.

The motto’s meaning extends beyond active service. Marines pride themselves on being “once a Marine, always a Marine,” and Semper Fidelis reflects that lifelong bond. Even after leaving the uniform behind, Marines carry that sense of loyalty into civilian life, honoring the values and traditions of their service. For many, it becomes a central guiding principle throughout their lives.  Marine veterans never say “I was a Marine”; they say “I am a Marine”.

In the end, the motto “Semper Fidelis” is far more than a catchy phrase. It is both a promise and a challenge—a pledge of unwavering loyalty and a challenge to live up to the highest standards of duty, honor, and fidelity. When Marines declare “Semper Fi,” they acknowledge not only their devotion to the Marine Corps, but also the unbreakable loyalty that binds them together as brothers and sisters in arms.

The celebration of the 250th anniversary of the signing of the Declaration of Independence is coming up next year on July 4th. But what about the events leading up to this? What about the men and women who helped make this happen? There are events coming up to commemorate the 250th anniversary of the founding of the Continental Navy and the Continental Marines in 1775. We will be holding commemorative celebrations here in West Virginia and there will be a national event in Philadelphia in October of this year.

Frédéric Bazille: The Impressionist Who Never Saw Impressionism

I like to think of my research and my writing as being eclectic, although sometimes I have to admit they may be better described as unfocused. This post may be an example of one of those episodes.  Recently I was looking through a magazine and saw an ad with an illustration that was obviously based on Monet’s Garden in Giverny. It set me thinking about a series of lectures I had watched on French Impressionists, and that got me thinking about Frédéric Bazille, an artist I have always found fascinating. I decided to spend a little time looking into him. So, I completely forgot about the project I was working on and started on this one. That may be why I have so many unfinished articles and random files of unrelated research.

In French Impressionism there are names that stand in the forefront— Monet, Renoir, Degas, Manet — and then there are names that hover just behind them. Frédéric Bazille is one of those in the shadows. He was part of the same circle, painted with the same daring brush, and showed the same fascination with light and color. Yet his life ended before Impressionism even had a name. Bazille was killed in the Franco-Prussian War in 1870, at just twenty-eight years old. His death robbed the movement of both a gifted painter and a generous friend who helped shape its history.

A Wealthy Outsider

Bazille was born in Montpellier in 1841, the son of a prosperous Protestant family. Unlike Monet and Renoir, who often lived in dire poverty, Bazille never worried about how to pay for paints or rent. That freedom made him unusual in Parisian art circles.

Bazille became interested in painting after seeing works by Eugène Delacroix, but his family insisted he also study medicine to ensure financial independence. By 1859, he had begun taking drawing and painting classes at the Musée Fabre in Montpellier with local sculptors Joseph and Auguste Baussa.

In 1862, Bazille moved to Paris ostensibly to continue his medical studies, but he also enrolled as a painting student in Charles Gleyre’s studio. There he met three fellow students who would become close friends and collaborators: Claude Monet, Pierre-Auguste Renoir, and Alfred Sisley. He soon became part of a group of artists and writers that also included Édouard Manet and Émile Zola.  After failing his medical exam (perhaps intentionally) in 1864, Bazille began painting full-time.

Bazille used his money to help everyone in his circle. He rented large studios in Paris, where friends who couldn’t afford space of their own painted and slept. He bought their finished canvases when no one else would. To Manet, Renoir, Monet, and Sisley, Bazille was not just a colleague but a lifeline. Without him, some of the paintings we now consider cornerstones of Impressionism might never have been finished.

Experiments With Light

What makes Bazille more than a wealthy patron is his own work as an artist. He was fascinated by how sunlight transformed color and how outdoor settings could frame the human figure. Long before the Impressionists formally broke with mainstream French art, Bazille was exploring these themes.  In The Pink Dress (1864), he painted his cousin on a terrace overlooking the countryside, her figure half-lost in shadow, half-caught by light.  In Family Reunion (1867), he executed a technically difficult group portrait outside, with natural sunlight revealing the folds of dresses and the textures of grass. In The Studio on the Rue de la Condamine (1870), Bazille turned his brush on his own circle, capturing Manet, Monet, Renoir, and Zola in a collective portrait of the avant-garde.

His style was less free than Monet’s and more deliberate than Renoir’s, but the suggestion of Impressionism is unmistakable. He was bridging the academic precision of his training with the looser brushwork of the new school.

Bazille exhibited at the Salon in Paris in 1866 and 1868.  The Salon was the most prestigious and conservative art exhibition in France. These official exhibitions became increasingly controversial as they repeatedly rejected innovative artists like the Impressionists, leading to the creation of alternative exhibitions such as the famous Salon des Refusés in 1863 and independent Impressionist exhibitions starting in 1874.

A Call to War

When France declared war on Prussia in 1870, Bazille enlisted in the army. He could easily have avoided service — his family connections, his money, and his medical background all gave him options. But he joined the Zouaves, an elite infantry corps known for its colorful uniforms and reckless bravery.

On November 28, 1870, at the Battle of Beaune-la-Rolande, Bazille’s commanding officer was struck down. Bazille stepped forward to lead the attack. Within minutes, he too was hit and killed. He never saw the armistice. He never saw the first Impressionist exhibition in 1874. He never saw his friends vindicated by history.

The Spirit of Impressionism

Bazille left fewer than sixty known canvases. That small number alone ensures his reputation will never match Monet’s or Renoir’s. Yet the works he did leave offer glimpses of a painter who might have been one of the movement’s greats. He had both vision and means — a rare combination in the avant-garde world.

Without him, the Impressionists lost not only a friend but also a stabilizing force. Bazille’s studios had been safe havens, his purchases financial lifelines, his company a source of encouragement. Monet and Renoir were devastated by his death, and for years afterward, they spoke of him with undiminished grief.

Art historians often speculate about what role he might have played had he lived. Perhaps he would have anchored Impressionism more firmly in the Paris art establishment, or perhaps his money and position would have shielded his friends from years of ridicule. We can only guess.

Remembering Bazille

Today, Bazille’s paintings hang in the Musée d’Orsay in Paris and the Musée Fabre in Montpellier and in the United States at The National Gallery of Art, the Metropolitan Museum of Art, and the Art Institute of Chicago.  To see them is to feel both promise and loss. His canvases are alive with sunlight and color, and they hint at the career that never was.

Bazille reminds us that history is shaped not just by the titans who endure but also by the voices cut short. Impressionism survived and flourished without him, but it was poorer for his absence. In a way, every bright patch of sunlight in Monet’s gardens or flashing dress in Renoir’s dance halls carries a trace of the young man who painted light before the movement even had a name — and who never lived to see it shine.

Bread and Circuses: From Ancient Rome to Modern America

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously desires for just two things: bread and circuses.”

Nearly 2,000 years ago, Roman satirist Juvenal penned one of history’s most enduring political observations: “Two things only the people anxiously desire — bread and circuses.” Writing around 100 CE in his Satire X, Juvenal wasn’t celebrating this phenomenon—he was lamenting it. The poet watched as Roman citizens traded their political engagement for free grain and spectacular entertainment, becoming passive spectators rather than active participants in public life. The phrase has endured for nearly two millennia as shorthand for a troubling political dynamic: entertainment and consumption replacing civic engagement and accountability.

The Roman Warning

Juvenal’s critique came at a pivotal moment in Roman history. The republic had collapsed, and emperors like Augustus had systematically dismantled democratic institutions. Rather than revolt, Roman citizens seemed content as long as the government provided basic sustenance (the grain dole called annona) and elaborate spectacles at venues like the Colosseum. Political participation withered as people focused on immediate pleasures rather than long-term civic responsibilities.

The strategy worked brilliantly for Roman rulers. Keep the masses fed and entertained, and they won’t question your authority or demand meaningful representation. It was political control through distraction—a form of soft authoritarianism that maintained order without overt oppression.  The policy was effective in the short term—peace in the streets and loyalty to the emperors—but disastrous over time. Rome’s population became disengaged from politics, while real power consolidated in the hands of a few.

Modern American Parallels

Fast-forward to contemporary America, and Juvenal’s observation feels uncomfortably relevant. While we don’t have gladiatorial games, we do have our own version of “circuses”—professional sports, reality TV, social media feeds, and celebrity culture that dominate public attention. These aren’t inherently problematic, but they become concerning when they crowd out civic engagement.

Our modern “bread” takes various forms: government assistance programs, subsidies, and economic policies designed to maintain consumer spending. We are saturated with cheap goods, instant delivery services, and mass consumerism. For many, economic struggles are temporarily softened by accessible consumption, from fast food to online shopping. Yet material comfort often masks deeper inequalities and systemic challenges—wage stagnation, healthcare costs, and mounting national debt. These programs often serve legitimate purposes, but they can also function as political tools to maintain public satisfaction and suppress dissent.

Consider how political campaigns increasingly focus on entertainment value rather than substantive policy debates. Politicians hire social media managers and appear on talk shows, understanding that capturing attention often matters more than presenting coherent governance plans. Meanwhile, voter turnout for local elections—where citizens have the most direct impact—remains dismally low.

The Distraction Economy

Perhaps most striking is how our information landscape mirrors Roman spectacles. We’re bombarded with sensational news, viral content, and manufactured controversies that generate strong emotional reactions but little productive action. Complex policy issues get reduced to soundbites and memes, making genuine democratic deliberation increasingly difficult.

Social media algorithms are specifically optimized for engagement, not enlightenment. They feed us content designed to provoke reactions—anger, outrage, schadenfreude—rather than encourage thoughtful consideration of difficult issues. This creates a population that feels politically engaged through constant consumption of political content while remaining largely passive in actual civic participation.

The danger of “bread and circuses” in modern America lies in apathy. When civic participation declines, voter turnout falls, and policy debates get reduced to simplistic slogans, elites face less scrutiny. The result is a weakened democracy, vulnerable to manipulation and short-term thinking.

Breaking the Cycle

Juvenal’s warning doesn’t mean we should abandon entertainment or social programs. Rather, it suggests we need intentional balance. Democratic societies thrive when citizens remain actively engaged in governance beyond just voting every few years.

This means staying informed about local issues, attending town halls, contacting representatives, and participating in community organizations. It means choosing substance over spectacle and long-term thinking over immediate gratification.

The Roman Republic fell partly because its citizens stopped paying attention to governance. Juvenal’s “bread and circuses” reminds us that democracy requires constant vigilance—and that comfortable distraction can be freedom’s most seductive enemy.

The Communist Dream vs. Stalinist Reality: A Tale of Two Visions

Recently, I’ve been looking at various political philosophies. I’ve written about fascism, totalitarianism, authoritarianism, autocracy and kleptocracy. In this post I’m going to look at theoretical communism versus the reality of Stalinist communism, and I would be remiss if I didn’t at least briefly mention oligarchy as currently practiced in Russia.

The gap between Karl Marx’s theoretical vision of communism and its implementation under Joseph Stalin’s leadership in the Soviet Union represents one of history’s most significant divergences between ideological theory and political practice. While both claimed the same ultimate goal—a classless, stateless society—their approaches and outcomes differed in fundamental ways that continue to shape our world today: one promised a workers’ utopia, the other produced a brutal dictatorship.

The Marxist Vision

Marx envisioned communism as the natural culmination of historical progress, emerging from the inherent conflicts within capitalism. In his theoretical framework, the working class (proletariat) would eventually overthrow the capitalist system through revolution, leading to a transitional socialist phase before achieving true communism. This final stage would be characterized by the absence of social classes, private ownership of the means of production, and, ultimately, the state itself.

Central to Marx’s concept was the idea that communism would emerge from highly developed capitalist societies where industrial production had reached its peak. He believed that the abundance created by advanced capitalism would make scarcity obsolete, allowing society to operate according to the principle “from each according to his ability, to each according to his needs.” The state, having served its purpose as an instrument of class rule, would simply “wither away” as class distinctions disappeared.

Marx also emphasized that the transition to communism would be an international phenomenon. He argued that capitalism was inherently global, and therefore its replacement would necessarily be worldwide. The famous rallying cry “Workers of the world, unite!” reflected this internationalist perspective, suggesting that communist revolution would spread across national boundaries as workers recognized their common interests.

The Stalinist Implementation

Vladimir Lenin took firm control of Russia following the revolution in 1917 and oversaw the creation of a state characterized by centralization, suppression of opposition parties, and the establishment of the Cheka (secret police) to enforce party rule. Economically, Lenin’s government shifted from War Communism (state control of production during the civil war) to the New Economic Policy (NEP) in 1921, which allowed limited private trade and small-scale capitalism to stabilize the economy. The country formally became the Union of Soviet Socialist Republics in 1922. This period laid the groundwork for the highly centralized, totalitarian state under Stalin that followed Lenin’s death in 1924.

Stalin’s approach to building communism in the Soviet Union diverged sharply from Marx’s theoretical blueprint. Rather than emerging from advanced capitalism, Stalin attempted to construct socialism in a largely agricultural society that had barely begun industrialization. This fundamental difference in starting conditions shaped every aspect of the Soviet experiment.

Instead of the gradual withering away of the state, Stalin presided over an unprecedented expansion of state power. The Soviet government under his leadership controlled virtually every aspect of economic and social life, from industrial production to agricultural collectivization to cultural expression. The state became not a temporary tool for managing the transition to communism, but a permanent and increasingly powerful institution that dominated all aspects of society.  By the early 1930s, Joseph Stalin had centralized all power in his own hands, sidelining collective decision-making bodies like the Politburo or Soviets.

Marx envisioned rule by the proletariat, with power shared equally among the people.  Stalin instead fostered a cult of personality through relentless propaganda.  His image appeared on posters and statues and in schools.  History books were rewritten to credit him for Soviet successes—often erasing Lenin, Trotsky, or others.  He was referred to as the “Father of Nations,” “Brilliant Genius,” and “Great Leader.”  Loyalty to Stalin became more important than loyalty to the Communist Party or its ideals.  The government and the economy operated at his personal direction, enforced by the secret police, censorship, executions, and mass purges of dissidents.

Stalin implemented a command economy, in which the government or central authority makes all major decisions about production, investment, pricing, and the allocation of resources, rather than leaving those choices to market forces. In this system, planners typically set production targets, control industries, and determine what goods and services will be available, often with the goal of achieving social or political objectives such as central control and rapid industrialization. This is the direct opposite of the voluntary cooperation Marx had envisioned. The forced collectivization of peasants onto government farms, rapid industrialization through five-year plans, and the use of prison labor in gulags represented a top-down model of development that contradicted Marx’s emphasis on worker empowerment and democratic participation.

Where Marx emphasized emancipation and freedom for workers, Stalinist policies involved widespread repression, political purges, forced labor camps, and censorship. Most notable is the period that came to be known as the “Great Purge,” also called the “Great Terror,” a campaign of political repression between 1936 and 1938. It involved widespread arrests, forced confessions, show trials, executions, and imprisonment in labor camps (the Gulag system). Stalin accused perceived political rivals, military leaders, intellectuals, and ordinary citizens of being disloyal or conducting “counter-revolutionary” activities. It is estimated that about 700,000 people were executed by firing squad after being branded “enemies of the people” in show trials or secret proceedings.  Another 1.5 to 2 million people were arrested and sent to Gulag labor camps, prisons, or exile. Many died from overwork, malnutrition, disease, or harsh conditions.

Perhaps most significantly, Stalin abandoned Marx’s internationalist vision in favor of “socialism in one country.” This doctrine, developed in the 1920s, argued that the Soviet Union could build socialism independently of worldwide revolution. This shift not only contradicted Marx’s theoretical framework but also led to policies that prioritized Soviet national interests over international worker solidarity.

Key Contradictions

The differences between Marxist theory and Stalinist practice created several fundamental contradictions. Where Marx predicted the elimination of social classes, Stalin’s Soviet Union developed a rigid hierarchy with the Communist Party elite at the top, followed by technical specialists, workers, and peasants. This new class structure, while different from capitalist society, still involved significant inequalities in power, privilege, and access to resources.

Marx’s vision of worker control over production stood in stark contrast to Stalin’s centralized command economy. Rather than workers democratically managing their workplaces, Soviet workers found themselves subject to increasingly detailed state control over their labor. The factory became less a site of worker empowerment than a component in a vast state machine directed from Moscow.

The treatment of dissent also revealed fundamental differences. Marx believed that communism would eliminate the need for political repression as class conflicts disappeared. Stalin’s regime, however, relied extensively on surveillance, censorship, and violent suppression of opposition. The extensive use of terror against both perceived enemies and ordinary citizens contradicted Marx’s vision of a society based on cooperation and mutual benefit.

Modern Russia

At this point, I want to mention something about modern Russia and its current governmental and economic situation since the breakup of the Soviet Union.

An oligarchy is a form of government where power rests in the hands of a small number of people. These individuals typically come from similar backgrounds – they might be distinguished by wealth, family ties, education, corporate control, military influence, or religious authority. The word comes from the Greek “oligarkhia,” meaning “rule by few.” In an oligarchy, this small group makes the major political and economic decisions that affect the entire population, often prioritizing their own interests over those of the broader society.

Modern Russia’s economy is often described as having oligarchic features because a relatively small group of wealthy business leaders—many of whom made their fortunes during the chaotic privatization of the 1990s—maintain outsized influence over key industries like energy, banking, and natural resources. While Russia is technically a mixed economy with both private and state involvement, political connections determine who gains access to wealth and power. This creates a system where economic opportunity is concentrated among elites closely tied to the Kremlin, most closely resembling an oligarchy.

Historical Context and Consequences

Understanding the differences between Marxist theory and Stalinist implementation requires considering the historical context in which Stalin operated. The Soviet Union faced external threats, internal resistance, and the enormous challenge of rapid modernization. Stalin’s supporters argued that harsh measures were necessary to defend the revolution and build industrial capacity quickly enough to survive in a hostile international environment.

Critics, however, contend that Stalin’s methods created a system that was fundamentally incompatible with Marx’s vision of human liberation. The concentration of power in a single party—much less a single person— combined with the suppression of democratic institutions, and the extensive use of violence and coercion demonstrate that Stalinist practice moved away from, rather than toward, Marx’s goals.

The legacy of this divergence continues to influence contemporary political debates. Supporters of Marxist theory often argue that Stalin’s failures demonstrate the dangers of abandoning egalitarian principles and internationalist perspectives. Meanwhile, critics of communism point to the Soviet experience as evidence that Marxist ideals are inherently unrealistic or even dangerous.

This comparison reveals the complex relationship between political theory and practice, highlighting how historical circumstances, leadership decisions, and practical constraints can shape the implementation of ideological visions in ways that may fundamentally alter their character and outcomes.

The First Amphibious Landing

The Continental Marines at Nassau

When the Second Continental Congress authorized the creation of the Continental Marines on November 10, 1775, few could foresee their pivotal role in orchestrating North America’s first amphibious assault less than four months later.  The operation against Nassau, on New Providence Island in the Bahamas, was born of necessity, marked by improvisation, and ultimately set the tone for Marine Corps operations—an audacious legacy that endures to this day.

Origins: Gunpowder Desperation and Strategic Vision

The American Revolution’s early years were marked by chronic shortages, especially of gunpowder. After the British seized stores destined for the Patriot cause, intelligence uncovered that significant quantities were stockpiled at Nassau. The Continental Congress approached this challenge with typical Revolutionary War creativity—they would use their brand-new Navy and even newer Marines to solve an Army problem. The Congress’ official instructions to Commodore Esek Hopkins focused on patrolling the Virginia and Carolina coasts, but “secret orders” directed attention to the Bahamas, setting in motion a bold plan to directly address the fledgling army’s supply crisis.

Organization: The Making of an Amphibious Battalion

Barely three months old, the Continental Marines had hastily raised five companies of around 300 men. Captain Samuel Nicholas, commissioned as the first Marine officer, oversaw their training and organization in Philadelphia. Their equipment was uneven—many wore civilian garb rather than uniforms and carried whatever muskets and bayonets were available. The uniform regulations specifying the now famous green coats with white facings were not promulgated until several months after the raid was over.

The Voyage South: Challenges and Preparation

Hopkins’ fleet consisted of the ships Alfred, Hornet, Wasp, Fly, Andrew Doria, Cabot, Providence, and Columbus. In addition to ships’ crews, the fleet carried more than 200 Continental Marines under the command of Captain Nicholas. The expedition began inauspiciously on January 4, 1776, when the fleet attempted to leave Philadelphia but became trapped by ice in the Delaware River for six weeks.

When they finally reached the Atlantic on February 17, 1776, the small fleet faced additional challenges. Disease found its way onboard most of the ships. Smallpox was a huge concern and was reported on at least four ships.

The fleet’s journey to the Caribbean took nearly two weeks of sailing through challenging winter conditions. Despite the hardships, Hopkins maintained the element of surprise—British intelligence had detected American naval preparations but assumed the fleet was bound for New York or Boston, not the distant Bahamas.

Implementation: Amphibious Innovation at Nassau

The element of surprise was initially lost when the fleet’s approach triggered alarm at Nassau. Plans to storm the stronger Fort Nassau dissolved, and Hopkins convened a council to identify a new landing point. A revised strategy saw about 230 Marines and 50 sailors, led by Captain Nicholas, land from longboats two miles east of the weaker Fort Montagu on March 3, 1776. They wore a patchwork of civilian clothes and white breeches—some men had managed to find green shirts as a form of identification. They set out marching toward the fort armed with muskets and bayonets, looking perhaps more like pirates than soldiers. 

Their advance was met with only token resistance. Outnumbered and ill-prepared, local militia withdrew as Nicholas’s men captured Fort Montagu in what historian Edwin Simmons called a “battle as bemused as it was bloodless.”

Nicholas decided to wait until morning to advance on the town.  His decision was tactically sound given the circumstances—he’d lost surprise, did not know the enemy’s strength, was operating in unknown terrain, night was falling, and he lacked naval support. However, this prudent military decision gave Governor Browne time to send away over 80% of Nassau’s gunpowder stores, turning what could have been a complete strategic victory into a partial success. This incident highlights the tension between tactical prudence and strategic urgency that was destined to become a recurring theme in amphibious warfare.

The next day the Americans took Fort Nassau and arrested the Governor, Montfort Browne. Browne had already sent most of the coveted gunpowder on to St. Augustine, Florida, the night before. Despite this, American forces seized cannons, shells, and other military stores before occupying Nassau for nearly two weeks.

Marine discipline and flexibility were evident as they pivoted from their surprise landing, conducted operations inland, and began building their amphibious reputation. The fleet departed on March 17, but not before stripping Nassau and its forts of anything militarily useful.

Aftermath: Growing Pains and Enduring Lessons

Though the mission failed in its primary objective of securing a cache of gunpowder, its operational successes far outweighed the losses. The Marines returned with large quantities of artillery, munitions, and several recaptured vessels. On the return leg, they faced and fought (though did not defeat) HMS Glasgow; the squadron returned to New England by April 8, with several casualties including the first Marine officer killed in action, Lt. John Fitzpatrick.

Controversy followed—Hopkins was censured for failing to engage British forces as directed in his official orders.  Nicholas was promoted to major and tasked with raising additional Marine companies for new frigates then under construction. These developments reflected both the lessons learned and the growing recognition of the value of the Marine force in expeditionary operations ashore.

A second raid on Nassau by Continental Marines occurred from January 27–30, 1778, under Captain John Peck Rathbun. Marines and seamen landed covertly at midnight, quickly seizing Fort Nassau and liberating American prisoners held by the British. The raiders proceeded to capture five anchored vessels, dismantled Fort Montagu, spiked the guns, and loaded 1,600 lbs of captured gunpowder before departing. This bold operation marked the first time the Stars and Stripes flew over a foreign fort and showcased the resourcefulness of American forces, who managed to strike a valuable blow against British power in the Caribbean without suffering casualties.

Long-Term Implications for the United States Marine Corps

The Nassau operation set powerful precedents:

  • Amphibious Warfare Doctrine: This was the Marines’ first organized amphibious landing, shaping the Corps’ future focus on rapid deployment from sea to shore, a hallmark that continues in modern doctrine.  This was likely referred to at the time as a Naval landing, as the word amphibious did not come into use in this context until the 1930s.
  • Adaptability Under Fire: The improvisational tactics used at Nassau foreshadowed the Corps’ reputation for flexibility and mission focus.
  • Naval Integration: Joint operations with the Navy not only succeeded tactically, but helped institutionalize the Marine-Navy partnership, with Marines serving as shipboard security, landing parties, and naval infantry.
  • Legacy of Boldness: This first operation established a “first-in” ethos and a culture embracing challenge and audacity, foundational principles in Marine culture.

After the war, the Continental Marines disbanded, only to be re-established in 1798. Yet the legacy of Nassau endured. “Semper Fidelis”—always faithful—has its roots in that March 1776 assault, when the odds seemed long and the stakes critical.

Today’s United States Marine Corps draws a direct lineage from that small, ragtag battalion of Marines scrambling ashore at Nassau, forever entwining its identity with the promise, risk, and legacy of that first storied mission. Every modern Marine, stepping from ship to shore, walks in the footprints of Captain Samuel Nicholas and his men—soldiers of the sea whose boldness, improvisation, and teamwork have echoed across the centuries.

The Electoral College: Should America Go Popular?

Few topics in American politics generate as much perennial debate as the Electoral College. Every four years, calls to abolish it resurface—often with renewed vigor when the electoral vote winner loses the popular vote, as happened in 1824, 1876, 1888, 2000, and 2016. The proposal is to elect the president by a nationwide popular vote, just as we do governors and senators.

Why We Have an Electoral College

The Electoral College was a late-stage compromise at the Constitutional Convention of 1787. The framers were balancing multiple tensions:

  • Large vs. small states
  • Slave vs. free states
  • Congress choosing the president vs. direct election

Delegates feared that direct election by popular vote would favor populous states, allow urban centers to dominate rural areas, and encourage demagogues to campaign purely on popular passions. At the same time, they worried about giving Congress too much control over the executive branch.

The system for selecting the president—via the Electoral College—was partly designed to prevent direct popular influence. Its original intent, according to historians, was to empower electors (seen as more knowledgeable) and to ensure thoughtful deliberation in choosing the president, guarding against the masses being swayed by charm rather than substance.

Some delegates—like James Madison, James Wilson, and Gouverneur Morris—supported direct popular election of the president, while others, like Elbridge Gerry and Roger Sherman, explicitly voiced distrust in direct election of the president and believed ordinary voters lacked impartiality or sufficient knowledge. 

Institutional and political bargaining ultimately shaped the final structure. Their solution: each state gets electors equal to its total number of representatives and senators. The addition of two electors for the senators ensures that the small states remain, on a population basis, overrepresented in the Electoral College.

State legislatures determine how electors are chosen (eventually, every state moved to popular election). Most states now award all their electoral votes to the statewide popular vote winner—“winner-take-all.”

The Electoral College thus emerged not as anyone’s ideal system, but as a workable compromise that balanced competing regional interests, philosophical concerns about democracy, and the practical realities of governing a large, diverse republic in the 18th century.

Pros of Eliminating the Electoral College

Equal Weight for Every Vote

The most compelling argument for eliminating the Electoral College centers on democratic equality. Under the current electoral system, a vote in Wyoming carries more than three times the weight of a vote in California when measured by electoral votes per capita. To put this in real numbers, Wyoming has about 193,000 people per electoral vote while California has about 718,000.  This mathematical reality means that some Americans’ voices count more than others in selecting their president, a principle that seems to contradict the foundational democratic ideal of “one person, one vote.”
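
For anyone who wants to check the arithmetic, here is a minimal Python sketch of that per-capita comparison. The population figures are rough estimates implied by the numbers above, not official census data, and are used only for illustration.

    # Rough "people per electoral vote" comparison, using approximate figures
    # implied by the numbers cited above (not official census data).
    states = {
        "Wyoming":    {"population": 580_000,    "electors": 3},
        "California": {"population": 38_800_000, "electors": 54},
    }

    for name, s in states.items():
        per_elector = s["population"] / s["electors"]
        print(f"{name}: about {per_elector:,.0f} people per electoral vote")

    wy = states["Wyoming"]["population"] / states["Wyoming"]["electors"]
    ca = states["California"]["population"] / states["California"]["electors"]
    print(f"A Wyoming vote carries roughly {ca / wy:.1f} times the weight of a California vote")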

A national popular vote would ensure that every American’s vote carries identical weight, regardless of geography. This approach would eliminate scenarios where candidates win the presidency while losing the popular vote. Such outcomes can undermine public confidence in democratic institutions and raise questions about the legitimacy of electoral results.

Reflects the Will of the Majority

In two of the last seven presidential elections (2000 and 2016), the candidate with fewer total popular votes became president. While the framers accepted the possibility of divergence between the popular and electoral results, many modern Americans view such outcomes as undermining democratic legitimacy.

Encourages Nationwide Campaigning

Because many states are firmly “red” or “blue,” campaigns focus their energy on a handful of battleground states that could go either way—like Pennsylvania, Wisconsin, and Arizona. Under a popular vote, candidates would have an incentive to compete everywhere, because every additional vote counts the same regardless of location.

Simplifies the Process

The Electoral College system confuses many Americans and can seem archaic in the 21st century. A direct popular vote is straightforward and immediately understandable: the candidate who receives the most votes wins. This simplicity could increase public trust and participation in the democratic process.

Eliminates “Faithless Electors”

Although rare, faithless electors—those who cast electoral votes against their state’s popular choice—are possible under the current system. A direct election would remove this constitutional quirk.

Cons of Eliminating the Electoral College

Federalism Concerns

The United States is a union of states as well as a single nation. The Electoral College reinforces the role of states in presidential elections, reflecting their status as sovereign entities in certain respects. Abolishing it could be seen as eroding federalism by further centralizing power.

Risk of Regional Dominance

Opponents argue that without the Electoral College, candidates could focus disproportionately on high-population regions—California, Texas, Florida, New York—while ignoring rural states and smaller communities. Whether this would happen in practice is debated, but the perception of neglect could deepen regional divides.

Potential for Narrow-Margin Crises

In a popular vote system, a razor-thin margin would require a nationwide recount. Under the Electoral College, disputes are typically contained within a state (e.g., Florida in 2000). A national recount would be a logistical and political nightmare.

Constitutional Hurdles

Abolishing the Electoral College requires a constitutional amendment—an extraordinarily high bar. That means approval by two-thirds of both houses of Congress and ratification by three-quarters of the states. Smaller states, which benefit from the Electoral College’s vote weighting, have little incentive to approve such a change.

Intermediate Options

Since abolishing the Electoral College outright is politically unlikely in the near term, reform advocates have proposed middle-ground solutions.

The National Popular Vote Interstate Compact (NPVIC)

The NPVIC is an agreement among states to award all their electoral votes to the national popular vote winner, but it only takes effect once states totaling at least 270 electoral votes join. As of 2025, 17 states plus D.C. (totaling 209 electoral votes) have joined. This approach sidesteps a constitutional amendment and achieves the functional equivalent of a popular vote, but it relies on states’ willingness to cede control over their electoral votes. It has not been legally tested and would likely face court challenges. To me, the greatest drawback is that states could withdraw at any time. I would expect that in a closely contested and contentious election, states unhappy with the national outcome would withdraw from the compact.

Proportional Allocation of Electoral Votes

Instead of winner-take-all, states could allocate electoral votes proportionally to the share of the statewide vote. Maine and Nebraska already use a variation of this system, awarding some votes by congressional district.  Theoretically, this would reduce the impact of battleground states and increase representation for minority views within states. But it could also increase the likelihood of no candidate reaching 270 electoral votes, thereby sending the election into the House of Representatives. It still preserves the overrepresentation of smaller states because it retains the two electors for senators.

If electors are awarded proportionally based on the statewide vote, the vote shares will rarely divide into whole electors, and there is no constitutional provision for awarding a fraction of an elector. The problem would be especially acute in states with only one or two representatives in the House.
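
To make the rounding problem concrete, here is a small illustrative Python sketch. The candidates, vote shares, and three-elector state are hypothetical, chosen only to show how fractional electors arise.

    # Hypothetical proportional allocation: multiply each candidate's statewide
    # vote share by the state's electors.  The raw products are rarely whole
    # numbers, and there is no way to award a fraction of an elector.
    def proportional_allocation(vote_shares, electors):
        return {cand: share * electors for cand, share in vote_shares.items()}

    # A hypothetical three-elector state with a 55/45 split
    raw = proportional_allocation({"Candidate A": 0.55, "Candidate B": 0.45}, electors=3)
    print(raw)  # roughly {'Candidate A': 1.65, 'Candidate B': 1.35}

    # Rounding to whole electors gives A two and B one: a 67/33 split of the
    # state's electors from a 55/45 split of its vote, which is exactly the
    # distortion that matters most in states with few electors.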

If electors were awarded to the winner of each congressional district, this would encourage even more gerrymandering than we are currently seeing. Extreme gerrymandering could undermine any progress toward reflecting the popular vote, simply continuing the current mismatch of popular and electoral votes.

Gerrymandering is a political practice that involves manipulating the boundaries of electoral districts to benefit a particular party or group. It is nothing new in American politics, originating in the early 19th century.  The term “gerrymandering” was coined after an 1812 incident in Massachusetts, where Governor Elbridge Gerry signed a bill redrawing district lines to favor his party. One of the districts resembled a salamander in shape, inspiring the portmanteau “Gerry-mander” in a satirical cartoon by Elkanah Tisdale that helped popularize the term. It is interesting that, because the gerrymander favored the Democratic-Republican Party and the newspaper that published the cartoon supported the Federalist Party, the creature was drawn to look not like a cute salamander but more like an ominous dragon.

Bonus Electoral Votes for National Popular Vote Winner

A hybrid idea would keep the Electoral College but award a fixed number of bonus electors (say, 100) to the national popular vote winner. This would almost guarantee alignment between the popular and electoral results without abandoning the current structure.  This option maintains a state-based system and reduces the chance of a split result. But it would also require a constitutional amendment and add complexity that many voters may find confusing.
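
A quick back-of-the-envelope calculation shows why a bonus of this size would nearly guarantee alignment. The Python sketch below uses the hypothetical figure of 100 bonus electors mentioned above; it is illustrative only.

    # Back-of-the-envelope check of the bonus-elector idea, using the
    # hypothetical figure of 100 bonus electors mentioned above.
    state_electors = 538              # current Electoral College size
    bonus = 100                       # hypothetical bonus for the popular-vote winner
    total = state_electors + bonus
    majority = total // 2 + 1         # 320 of 638

    # The popular-vote winner starts with the bonus, so they need far fewer
    # state-based electors than the popular-vote loser does.
    print(f"Majority needed: {majority} of {total}")
    print(f"Popular-vote winner needs {majority - bonus} of {state_electors} "
          f"state electors ({(majority - bonus) / state_electors:.0%})")
    print(f"Popular-vote loser needs {majority} of {state_electors} "
          f"state electors ({majority / state_electors:.0%})")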

Feasibility of Change

Reforming or abolishing the Electoral College faces three main obstacles:

  • Constitutional Entrenchment – Article II and the 12th Amendment are clear about elector allocation. Full abolition would require one of the most difficult political feats in American governance—a constitutional amendment.
  • State Incentives – Smaller states and swing states have outsized influence under the current system. They are unlikely to support reforms that dilute their power.
  • Partisan Dynamics – Since recent Electoral College/popular vote splits have benefited Republicans, Democrats tend to favor reform, while Republicans tend to defend the status quo. That dynamic could shift if the pattern changes.

 Conclusion

The Electoral College is both a relic of 18th-century compromises and a living feature of America’s federal structure. Its defenders argue that it protects smaller states, contains electoral disputes, and reinforces the states’ role in national governance. Its critics counter that it violates the principle of “one person, one vote” and distorts campaign priorities.

Abolishing it in favor of a direct popular vote would likely make presidential elections more democratic in the literal sense, but it would also raise questions about federalism, campaign strategy, and the handling of close results. The Electoral College preserves federalism and geographic balance but can produce outcomes that seem to contradict majority will.

Intermediate options like the NPVIC or proportional allocation may offer ways to mitigate the College’s most controversial effects without uprooting the constitutional framework but also face significant hurdles for implementation.

Whether reform happens will depend not just on the merits of the arguments, but on the political incentives of the states and the parties. Until those incentives shift, the Electoral College is likely to remain—imperfect, contentious, and uniquely  American.

The Constitutional Foundations

Who Controls Elections?

Donald Trump has repeatedly claimed that the president should have broad authority to change how elections are conducted—particularly when it comes to abolishing mail-in voting and voting machines. As recently as August 2025, Trump pledged to issue an executive order banning mail-in ballots and voting machines ahead of the 2026 midterm elections, insisting that states must comply with his directive because, in his words, “States act merely as ‘agents’ for the Federal Government when it comes to counting and tabulating votes.… They are required to follow what the Federal Government, represented by the President of the United States, instructs them to do, FOR THE GOOD OF OUR COUNTRY”.

But this isn’t the first time he has suggested that he could control the election process.  In March 2025, Trump issued a major executive order titled “Preserving and Protecting the Integrity of American Elections” that aims to expand presidential control over the election process.  The order attempts to direct the Election Assistance Commission (EAC) — an independent, bipartisan agency — to mandate that voters show a passport or other similar document proving citizenship when registering to vote using the federal voter registration form.  The executive order has been the subject of extensive litigation, and several federal judges have issued injunctions against various portions of it.

Amid the COVID-19 pandemic during his first term, President Trump publicly suggested delaying the election. Constitutional scholars and members of Congress quickly pointed out he lacked such authority—the date of federal elections is set by statute, and only Congress could change it.

The U.S. Constitution provides a clear framework for who holds the authority to control elections, and it is not the president.

Article I, Section 4: Congressional and State Authority

The main constitutional authority over U.S. elections is found in Article I, Section 4, commonly called the “Elections Clause.” It states:

“The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations…”

This language charges state legislatures with defining the details of congressional elections, including logistics and procedures. Importantly, Congress retains the power to override state laws and impose federal rules—such as standardized Election Days or regulations for voter registration and districting.

What does this mean for the president? The Constitution is clear: the president has no direct authority to determine the conduct of congressional elections or to unilaterally change the way federal elections are held. Presidential influence over elections is limited to signing or vetoing congressional legislation, not acting alone.

Article II and the 12th Amendment: Presidential Elections

Presidential elections are regulated by Article II, which created the Electoral College, and by the 12th Amendment.

Article II, Section 1 provides:

“Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors…”

States arrange how their presidential electors are selected, subject to changes imposed by congressional law. The federal government, through Congress (not the president!), determines the timing of choosing electors and casting electoral votes. The 12th Amendment sets procedures for how electors meet and vote for both president and vice president.

Again, neither Article II nor the 12th Amendment gives the president authority to independently set election rules. At most, the president can recommend reforms, sign laws crafted by Congress, and advocate for certain policies.

Historical Examples of Limits on Presidential Power Over Elections

Even during national crises, presidents have not been able to unilaterally change election rules:

  • 1864 Election (Lincoln): Despite the Civil War, Abraham Lincoln did not postpone or suspend the presidential election. Elections were carried out in the states, including special arrangements for soldiers to vote.
  • 1944 Election (Roosevelt): In the midst of World War II, Franklin Roosevelt stood for re-election. Again, no effort was made by the president to change election laws.

Presidential Powers: What Can the Executive Branch Do?

The president’s responsibilities in elections are more limited than you might expect and are essentially ministerial and ceremonial, not regulatory.

Article II vests the president with broad national leadership, command of the military, and the responsibility to “take Care that the Laws be faithfully executed”. This can include enforcing voting rights laws and overseeing federal agencies that support election integrity. However, the Constitution and decades of legal precedent restrict the president from directly controlling election rules.

  • The president cannot by executive order change state rules for voting methods (e.g., mail-in voting, voting machines).
  • The president cannot unilaterally suspend or postpone federal elections.
  • The president cannot direct states to alter their voter registration, polling locations, or other administrative details.
  • The president has no role in certifying state results. That function belongs to state officials, with Congress responsible for counting electoral votes.
  • The president can direct federal agencies like the Department of Justice to enforce federal election laws, protect voting rights, and intervene in cases of fraud or intimidation.  The president does not have the authority to direct federal agencies to act in a manner contrary to the law.

When presidents have sought to influence election administration more directly, courts and Congress have reaffirmed the constitutional boundaries. For example, efforts to change the date of an election or prohibit certain voting methods without congressional action have consistently failed in the courts.

Congressional Power: The Real Check on Election Rules

While state legislatures remain the primary managers of elections, Congress retains the final word. The Supreme Court has confirmed that congressional law “preempts” conflicting state rules in matters of federal elections. When Congress acts—through laws like the Voting Rights Act, Help America Vote Act, and the National Voter Registration Act—states must comply, and the president’s role is simply to sign or veto those laws.

Congress has used its power over the years to:

  • Set a uniform national Election Day.
  • Establish protections for disabled voters and overseas citizens.
  • Mandate requirements around voter registration and accessibility.
  • Regulate campaign finance and transparency.

Checks, Balances, and Modern Tensions

Recent political debates have seen calls for presidents to take stronger action on election oversight, especially regarding the use of mail-in ballots or voting machines. However, these calls run up against clear constitutional limits: the president cannot rewrite the rules of elections without Congress or state legislatures.

Any presidential attempt to do so by executive order would face swift legal challenges and almost certainly be invalidated. The intent of the Framers was to divide election power between the states and Congress, with the president largely excluded from direct rule-making authority. This balance—central to federalism—protects elections from potential abuses of executive power and ensures that reforms require broad democratic consensus. While presidents can champion reforms and enforce federal laws supporting fair elections, they are constitutionally forbidden from unilaterally changing election rules.

Conclusion

The framework isn’t perfect—it can create confusion when state and federal authorities clash. But the basic principle remains: states run elections. Congress can regulate them within constitutional bounds, and presidents enforce the resulting laws.

For citizens, lawmakers, and presidents alike, respect for these boundaries secures the foundation of American democracy. The right to vote—and the integrity of how that vote is counted—is protected not by any single leader, but by enduring constitutional principles and the shared power of states and Congress.

Deborah Sampson: A Revolutionary Soldier

In the story of the American Revolution, the names most often remembered are those of the Founding Fathers and battlefield generals. Yet woven through the familiar narrative are lesser-known but extraordinary individuals whose actions defied the norms of their time. One of the most remarkable among them was Deborah Sampson, a Massachusetts woman who disguised herself as a man and served for nearly two years in the Continental Army. Her life reflects not only courage and patriotism, but also the complexity of gender roles in Revolutionary America.

A Difficult Early Life

Deborah Sampson was born in Plympton, Massachusetts, in 1760, the eldest of seven children in a family with deep Pilgrim roots, tracing its lineage to Myles Standish and Governor William Bradford. Despite this heritage, her family struggled financially, and her childhood was marked by poverty and abandonment. Her father deserted the family when she was young, leaving her mother with limited resources to care for the children. It was initially thought that he had died at sea, but the family later discovered that he had moved to Maine, where he married and raised a second family.

Deborah was still young when her mother died and she was sent to live with a widow, Mary Price Thatcher, then in her 80s. Deborah likely learned to read while living with her.  After Widow Thatcher died, Deborah was bound out as an indentured servant to the Thomas family in Middleborough, Massachusetts, where she worked until she turned 18. This experience exposed her to hard physical labor and taught her skills typically associated with men’s work, including farming and carpentry. During this time, she educated herself and developed a keen intellect that would prove invaluable throughout her life. 

When her term of indenture ended in 1782, Sampson found herself in a precarious position as a young, unmarried woman with few economic opportunities. She intermittently supported herself as a teacher in the summers and a weaver in the winters.

Enlisting in the Army

The Revolutionary War was still raging, and the Continental Army desperately needed recruits. Motivated by both patriotic fervor and economic necessity, Sampson made the audacious decision to enlist in the army disguised as a man. She initially enlisted in 1782 under the name Timothy Taylor and collected a cash enlistment bounty, but she failed to report for duty with her company.  When she was later recognized as Taylor, she was required to repay what she had not already spent of her enlistment bounty.  The civil authorities imposed no further punishment; however, the Baptist Church withdrew its fellowship until she apologized and asked for forgiveness.

She later made a second enlistment, adopting the name Robert Shurtleff (sometimes spelled Shurtlieff or Shirtliff). This time she followed through and reported for duty.

She bound her chest, cut her hair, and donned men’s clothing to complete her transformation.  Sampson’s physical appearance aided her deception. She was tall for a woman of her era, standing nearly six feet, with a lean build and strong constitution developed through years of manual labor. Her lack of facial hair was not unusual among young male recruits, and she successfully passed the initial examination to join the 4th Massachusetts Regiment in May 1782.

The challenge of maintaining her disguise while living in close quarters with other soldiers required constant vigilance. Sampson developed strategies to protect her secret, including volunteering for guard duty to avoid sleeping arrangements that might expose her, and finding private moments to tend to personal needs. She also had to manage the physical demands of military life while dealing with the unique challenges of being a woman in a male-dominated environment.

Sampson’s military career nearly ended when she was wounded during a skirmish. She received a sword cut to her head and was shot in the thigh. Fearing that medical treatment would reveal her true identity, she initially treated her wounds herself, even digging a musket ball out of her own leg with a knife. Some of the shot remained too deep to remove, leaving her with a lifelong disability.

During her military service, Sampson demonstrated exceptional courage and skill as a soldier. She participated in several skirmishes and battles, including engagements near New York City and in Westchester County. Her fellow soldiers respected her for her dedication, marksmanship, and willingness to volunteer for dangerous scouting missions. She proved herself particularly adept at reconnaissance work, using her intelligence and observational skills to gather valuable information about enemy positions and movements.

Discovery and Discharge

During an epidemic in Philadelphia, she fell seriously ill with a fever and was taken to a hospital, where a physician discovered her secret while treating her. Fortunately, the doctor, Barnabas Binney, chose to protect Sampson rather than expose her. He treated her quietly and helped arrange her honorable discharge from the army. Her commanding officer, General John Paterson, reportedly handled the situation with discretion and respect, recognizing her valuable service to the cause of independence.  She was formally discharged by General Henry Knox on October 25, 1783, and was given funds to return home and a Note of Advice, similar to modern discharge papers.

Life After the War

After the war, Sampson returned to Massachusetts, where she married Benjamin Gannett in 1785 and had three children. But like many veterans, she struggled financially and had difficulty obtaining the military pay and benefits she had earned. In 1792, she successfully petitioned the Massachusetts legislature for back pay and a modest state pension, and later, with the help of prominent supporters, including Paul Revere, she received a pension from the federal government.

Her story didn’t end with domestic life. She became one of the first women in America to go on a speaking tour, traveling throughout New England and New York to share her experiences. Wearing her military uniform, she delivered a combination of storytelling, dramatic performance of military drills, and patriotic appeal.  These lectures, which began in 1802, were groundbreaking for their time, as respectable women rarely spoke publicly before mixed audiences.

A Lasting Legacy

Deborah Sampson’s legacy extends far beyond her military service. She challenged rigid gender roles and demonstrated that women could serve their country with the same valor and effectiveness as men. Her story inspired future generations of women who sought to break barriers and serve in traditionally male-dominated fields.

After she died in 1827, her story continued to gain recognition. In 1838, her husband was awarded a widow’s pension, possibly the first instance in U.S. history that the benefit was granted to a man based on his wife’s military service.

She left behind a legacy of courage, determination, and pioneering spirit that continues to resonate today. In 1983, she was declared the Official Heroine of the Commonwealth of Massachusetts, and in 2020, the U.S. House of Representatives passed the Deborah Sampson Act, expanding healthcare and benefits for female veterans. Statues and memorials, including her gravesite in Sharon, Massachusetts, commemorate her contributions.  Her wartime exploits have been the subject of books, plays, and scholarly research, and her story continues to inspire generations as a symbol of courage and the ongoing struggle for gender equality in military service.

While she was not the only woman to take up arms during the Revolution—Margaret Corbin and Anna Maria Lane among them—Sampson is among the best documented and celebrated.

Her life represents a crucial chapter in both military history and women’s history, illustrating the complex ways in which the American Revolution created opportunities for individuals to transcend social conventions in service of the greater cause of independence.  Deborah’s journey from indentured servant to Continental Army soldier and national lecturer is a testament to her extraordinary courage and determination. By stepping into a role forbidden to women and excelling under the harshest conditions, she challenged the boundaries of her time and set a precedent for future generations.

Though her wartime activities may have been exaggerated—a common practice in biographies of the time—her life remains a powerful reminder of the contributions women have made, often unrecognized, in shaping American history.

The illustration at the beginning of this post is from The Female Review: Life of Deborah Sampson, the Female Soldier in the War of Revolution (1916), a reprint of the 1797 biography by Herman Mann.  

The Quasi-War

America’s Undeclared Naval Conflict with France

Following the American Revolution, the newly independent United States found itself caught between the competing imperial ambitions of Britain and France. What began as tensions over trade and neutrality escalated into an undeclared naval war and became the country’s first international crisis. The Quasi-War, as historians have named this conflict, was a pivotal moment in early American history when the republic’s survival hung in the balance. 

The Quasi-War was a limited, undeclared naval conflict fought primarily in the Caribbean and along the American coast. The conflict was “quasi” because it lacked a formal declaration of war and was limited in scope, focusing mainly on naval encounters rather than land-based military campaigns.

Origins of the Conflict

The roots of the Quasi-War lay in the disagreements, conflicts, and confusion of international relations that followed the French Revolution. The French declaration of war on Britain in 1793 put the United States in a difficult position. The 1778 Treaty of Alliance with France technically obligated the United States to support France militarily, but President George Washington, recognizing that America was too weak to engage in another major conflict, issued a Proclamation of Neutrality.

This decision to remain neutral infuriated the French government, which viewed it as a betrayal of their wartime alliance.  

Tensions increased when the United States suspended loan repayments to France in 1793, primarily due to the dramatic political upheaval of the French Revolution and uncertainty about the legitimacy of the French government.

When the French Revolution began, the U.S. had outstanding debts to France from loans provided during the American Revolution. By 1793, the situation in France had become extremely volatile — King Louis XVI was executed in January, and various revolutionary factions were competing for power.

The Washington administration faced a dilemma: should they continue making payments to the new revolutionary government, or would doing so constitute recognition of a regime that might not be stable or even legitimate? There were also concerns about whether honoring debts to the revolutionary government might drag the U.S. into France’s expanding wars with European monarchies.

The situation deteriorated even further when the United States signed Jay’s Treaty with Britain in 1794. This commercial agreement resolved several outstanding issues between America and Britain, including the evacuation of British forts in the Northwest Territory and the establishment of limited trade relationships. To the French, Jay’s Treaty represented a clear alignment with their enemy.

The French responded in 1795 by authorizing privateers and naval vessels to attack and capture American merchant ships, particularly those trading with British ports or carrying British goods. By 1797, the French had captured over 300 American vessels, causing significant economic damage to American merchants and threatening the nation’s maritime commerce.

The diplomatic crisis deepened with the infamous XYZ Affair of 1797.  When President John Adams sent envoys to Paris to negotiate a resolution to the mounting tensions, French Foreign Minister Talleyrand’s agents demanded substantial bribes, described as loans, before any negotiations could begin. The American diplomats refused these demands and returned home empty-handed. When news of the attempted extortion became public, it sparked outrage across America and the rallying cry “Millions for defense, but not one cent for tribute!”  In his report to Congress, President Adams referred to the three French officials as X, Y, and Z to protect their actual identities, thus giving the XYZ Affair its name.

The Undeclared War Begins

The failure of diplomatic negotiations and continued French attacks on American shipping pushed the Adams administration toward military action. Adams chose not to declare war, which would have required congressional approval and would have risked drawing in other European powers. Instead, he chose a more limited response, authorizing the United States Navy to capture French vessels found in American waters and to protect American merchant ships on the high seas.

The Adams administration took several significant steps to prepare for this maritime conflict. Congress authorized the creation of a separate Navy Department, expanded the existing naval forces, and approved the construction of new warships. The Marine Corps, which had been disbanded in 1783, was also formally reestablished during this period. Additionally, Congress suspended all commercial activities with France and authorized American naval vessels to capture French ships engaged in hostile acts against American commerce.

Naval Encounters and Key Battles

The Quasi-War witnessed numerous naval engagements, ranging from single-ship duels to larger squadron actions. American naval commanders like Thomas Truxtun emerged as heroes during this conflict. Truxtun’s frigate USS Constellation achieved two notable victories, first capturing the French frigate L’Insurgente in February 1799, and later defeating La Vengeance in a fierce night battle in February 1800.

The conflict extended throughout the Caribbean, where American naval squadrons protected merchant convoys and hunted French privateers. The USS Enterprise, USS Experiment, and other smaller American vessels proved particularly effective in these operations, capturing numerous French privateers and recapturing American merchant vessels. The U.S. Navy captured or destroyed over 80 French vessels while losing only a single ship in combat, and even that ship was later recaptured.

One of the most significant aspects of the naval war was its impact on American naval development. The conflict provided valuable combat experience for American officers and sailors, many of whom would later serve with distinction in the War of 1812. The success of American naval forces during the Quasi-War also demonstrated the importance of maintaining a strong navy for protecting American commercial interests.

While the war was almost exclusively naval, there were minor land actions, such as one in September 1800, when U.S. Marines landed at Curaçao to drive French forces from two forts.

Fears of a possible French invasion led Congress to authorize a provisional army of 10,000 men. President John Adams appointed George Washington as Commander-in-Chief, with Alexander Hamilton as his second-in-command, though no land battles occurred.

Domestic Political Consequences

The Quasi-War worsened the growing divide between Federalists and Democratic-Republicans. President Adams and the Federalist Party generally supported a strong response to French aggression, viewing it as necessary to protect American honor and commercial interests. The conflict provided Federalists with an opportunity to strengthen the federal government and build up American military forces.

The Alien and Sedition Acts, passed in 1798 during the height of anti-French sentiment, granted the government broad powers to deport foreign nationals and prosecute critics of the administration. These measures were widely seen as attacks on civil liberties and became a major political liability for the Federalists.

Thomas Jefferson and the Democratic-Republicans opposed both the war and the domestic security measures, arguing that Adams was leading the country toward unnecessary conflict and tyranny. They maintained that the French grievances were legitimate and that diplomatic solutions should be pursued more vigorously.

Resolution and the Convention of 1800

By 1799, both sides had grown weary of the costly and disruptive conflict. Napoleon Bonaparte, who had come to power in France, was more interested in European affairs than in continuing a naval war with America. Similarly, President Adams recognized that prolonged conflict would be economically devastating and politically dangerous.

In February 1799, Adams surprised both his own party and the nation by announcing his intention to send new diplomatic envoys to France. This decision split the Federalist Party, with many hawks opposing any negotiations with France. Nevertheless, Adams persisted, believing that peace was in America’s best interest.

The resulting negotiations led to the Convention of 1800, also known as the Treaty of Mortefontaine, signed in September 1800. This agreement effectively ended the Quasi-War by establishing terms for peaceful coexistence based on the principle of “free trade, free goods” between the two nations. The treaty also provided for the mutual restoration of captured vessels, established compensation procedures for maritime losses, and most importantly, formally ended the 1778 Treaty of Alliance between France and the United States.

Long-term Impact and Legacy

The Quasi-War’s conclusion marked a significant turning point in American foreign policy. The conflict demonstrated that the United States could successfully defend its interests against a major European power without formal allies. It also established important precedents for presidential war powers and the use of limited military force without formal declarations of war.  The experience gained during the Quasi-War would prove invaluable in subsequent conflicts, particularly the War of 1812.

The Quasi-War was the beginning of a long-standing policy of neutrality in European conflicts that persisted for much of the 19th century and was even echoed in the first half of the 20th century. It demonstrated to the world that the United States was a viable country that stood ready to defend its sovereignty.

The Quasi-War was America’s first undeclared war. Although Congress eventually granted limited military authority, the conflict was begun at the direction of President Adams. This has influenced American foreign policy and the use of military force ever since. The Quasi-War was referenced in debates about American involvement in Vietnam and in the Gulf War. The War Powers Act of 1973 was passed in an effort to limit the president’s ability to send American troops into combat in foreign countries, but its effectiveness and enforcement have been debated ever since.

Button Gwinnett

An Almost Forgotten Signer of the Declaration of Independence

History is full of people both little known and unknown who were present at important events. They may have participated, or they may simply have been observers. Understanding them, their lives and their involvement can help us to understand the human aspect of historical events. This is what I love most about history, the stories of average people.

Not long ago, I was looking at a copy of a broadside of the Declaration of Independence when I noticed an intriguing signature — Button Gwinnett. He is one of the lesser-known signers of the Declaration of Independence, yet he played a significant role in the early political landscape of Georgia. His life was a blend of ambition and political maneuvering. His dramatic rise and fall remain intriguing to historians. Even though Gwinnett is little remembered today, his story offers a glimpse into the turbulent period of America’s founding.

Early Life and Migration to America

Button Gwinnett was born in 1735 in Down Hatherley, Gloucestershire, England. He was the son of an Anglican vicar and was named after his mother’s cousin, Barbara Button, who was also his godmother.

While details about his early education are scarce, it is believed that he received a basic education typical of the English gentry. Gwinnett’s early adulthood was marked by modest success as a merchant. In the 1760s, facing limited opportunities in England and the promise of economic prosperity in the American colonies, Gwinnett and his wife, Ann, emigrated to the New World.

Initially, Gwinnett settled in Charleston, South Carolina, where he engaged in trade. However, he struggled financially, and by 1765, he had relocated to Savannah, Georgia. This move marked not only the beginning of his political career, but also a period of fluctuating fortune. Gwinnett purchased St. Catherine’s Island off the coast of Georgia, hoping to become a successful plantation owner. Unfortunately, he overextended himself financially, and his attempts to establish a profitable business met with failure. Despite his financial setbacks, Gwinnett’s status as a landowner and merchant allowed him to enter the local political scene.

Rise in Politics and Revolutionary Activity

Gwinnett’s involvement in politics grew as tensions between the American colonies and Britain escalated. By the early 1770s, he had become aligned with the growing revolutionary sentiment. In 1775, he was elected to Georgia’s Provincial Congress, where he quickly rose to prominence due to his vocal support for independence from British rule. Although Georgia had initially shown less enthusiasm for independence than colonies like Massachusetts or Virginia, a growing faction of Georgia patriots, including Gwinnett, began advocating for stronger opposition to British rule. By 1776, Gwinnett had become a delegate to the Second Continental Congress.

Continental Congress and the Declaration

On January 20, 1776, Gwinnett left Georgia for Philadelphia to represent the colony in Congress. This appointment marked the pinnacle of his political career and placed him at the center of the deliberations for American independence. His journey to Philadelphia came at a crucial moment when the Continental Congress was moving toward a formal declaration of independence.

Gwinnett voted for independence on July 2, voted to approve the Declaration on July 4, and signed his name to the parchment of the Declaration of Independence on August 2. Of the 56 delegates who signed, Gwinnett was one of only eight who had been born in Britain. His British birth added a unique perspective to his role as a Founding Father, representing the immigrant experience that was central to colonial American society.

His signing of the Declaration of Independence would later make his signature one of the most valuable autographs in American history. Gwinnett is known chiefly because his autographs are extremely rare and collectors have paid dearly to obtain one. (In 2001 one of his 36 known autographs sold at public auction for $110,000. Since then, several others have been documented.)

Conflict and Power Struggles in Georgia

Back in Georgia, Gwinnett became embroiled in a power struggle with General Lachlan McIntosh, a prominent figure in the colony’s revolutionary army. The conflict between Gwinnett and McIntosh was fueled by political rivalry and personal animosity. Gwinnett aspired to leadership positions within Georgia’s government and military, and in March 1777, he became acting president of Georgia’s Revolutionary Council after the sudden death of Governor Archibald Bulloch.

During his brief tenure as acting council president, Gwinnett’s leadership was controversial. He proposed a bold military expedition against British-controlled East Florida, intending to bolster his political standing and secure Georgia’s borders. However, the campaign was poorly executed, and it ended in failure. This debacle intensified the feud between Gwinnett and McIntosh, with each blaming the other for the military defeat.

Gwinnett’s promising political career was cut short by an ongoing personal conflict that became intertwined with the honor culture of the American South. The rivalry between Gwinnett and McIntosh reached its climax in May 1777. After a series of public insults, including McIntosh calling Gwinnett a “scoundrel and lying rascal,” Gwinnett responded by challenging him to a duel. Dueling, though technically illegal, was still a common way to resolve disputes among gentlemen of the period. On May 16, 1777, the two men faced each other with pistols in a pasture near Savannah. Both were wounded, but only Gwinnett’s injuries proved fatal. He died three days later, at age 42, and was buried in Savannah’s Colonial Park Cemetery, though the exact location of his grave is still unknown.

Legacy and Historical Significance

Gwinnett’s legacy is visible in his namesake Gwinnett County, one of Georgia’s most populous counties, a tribute to his contributions to the state’s early political history.

In recent decades, historians have taken a renewed interest in Button Gwinnett, examining his role beyond the narrow context of his duel and signature. While he lacked the fame of other founding fathers, Gwinnett’s political maneuvering and his role during the revolutionary period highlight the complexities of early American politics. His rivalry with McIntosh reflects the deep divisions and regional conflicts that existed even among those who supported independence.

Gwinnett’s life also underscores the risks faced by those who ventured into the revolutionary cause. Unlike many of his contemporaries who enjoyed long, celebrated careers, Gwinnett’s story is one of a meteoric rise and abrupt fall. His legacy, while overshadowed by more prominent figures, is a reminder of the many lesser-known men and women who played vital roles in America’s fight for independence.

Button Gwinnett’s life was marked by ambition, conflict, and an untimely death that left him as one of the more obscure figures of the American Revolution. His contributions to the independence movement in Georgia were significant, even if his political career was cut short. Today, Gwinnett’s name lives on in Georgia’s geography, and his autograph serves as a rare artifact of a fleeting yet impactful moment in history.

Sources:

  • National Archives: Declaration of Independence Signers – https://www.archives.gov/founding-docs/signers
  • The Georgia Historical Society: Biography of Button Gwinnett – https://georgiahistory.com
  • Smithsonian Magazine: The Rare Autograph of Button Gwinnett – https://www.smithsonianmag.com
  • Library of Congress: Early American Biographies – https://www.loc.gov
