Grumpy opinions about everything.


The Fatal Meeting: When Hamilton and Burr Settled Fifteen Years of Rivalry with Pistols

The story of the Hamilton-Burr duel has all the elements of a Greek tragedy: brilliant men, political ambition, an unforgiving honor culture, and an ending that destroyed victor and vanquished alike. When Aaron Burr shot Alexander Hamilton on the morning of July 11, 1804, he didn’t just kill one of America’s founding architects—he also ended his own political career and helped doom the entire Federalist Party to irrelevance. Let’s rewind the clock more than a decade to try to understand how these two gifted lawyers and Revolutionary War veterans ended up facing each other with loaded pistols.

The Long Road to Weehawken

Hamilton and Burr moved in the same elite New York political circles from the 1790s onward, but they had remarkably different temperaments and political beliefs. Hamilton was ideological, prolific, and combative—often too much so for his own good. Burr was pragmatic, opaque, self-serving, and famously hard to pin down on principle. They distrusted each other deeply.

Their rivalry stretched back to 1791, when Burr defeated Philip Schuyler for a U.S. Senate seat representing New York. This wasn’t just any political defeat for Hamilton—Schuyler was his father-in-law and a crucial Federalist ally on whom Hamilton had counted to support his ambitious financial programs. Hamilton, who was serving in George Washington’s cabinet as Treasury Secretary, never forgave Burr for this loss. In correspondence from June 1804, Hamilton himself referenced “a course of fifteen years competition” between the two men.  

Their philosophical differences ran deep. Hamilton was an ideological Federalist who dreamed of transforming the United States into a modern economic power rivaling European empires through strong central government, industrial development, and military strength. Burr, by contrast, approached politics more pragmatically—he saw it as a vehicle for advancing his own interests and those of his allies rather than as a way to implement sweeping political visions. As Burr himself allegedly said, politics were nothing more than “fun and honor and profit.” Hamilton viewed Burr as fundamentally dangerous due to his lack of fixed ideological principles, writing in 1792 that he considered it his “religious duty to keep this man from office.”

The election of 1800 brought their animosity to a boiling point. Due to a quirk in the original Constitution’s electoral system, Thomas Jefferson and his running mate Aaron Burr tied in the Electoral College with 73 votes each, allowing the Federalists to briefly consider elevating Burr to the presidency.  The decision went to the House of Representatives, and Hamilton—despite despising Jefferson’s Democratic-Republican politics—campaigned hard to ensure Jefferson won the presidency rather than Burr. Hamilton argued that Jefferson, however wrong in policy, had convictions, whereas Burr had none.  In the end, Jefferson gained the presidency and Burr became Vice President, but their relationship was never collegial and Burr was excluded from any meaningful participation in Jefferson’s administration.

By 1804, it was clear Jefferson would not consider Burr for a second term as Vice President. Desperate to salvage his political career, Burr made a surprising move: he ran for governor of New York as an independent, breaking with his Democratic-Republican allies and openly courting Federalist support. It was a strange gambit—essentially betting that his political enemies might back him if it served their interests. Hamilton, predictably, worked vigorously to block Burr’s ambitions yet again. Although Hamilton’s opposition wasn’t the only factor, Burr lost badly to Morgan Lewis, the Democratic-Republican candidate, in April 1804.

The Cooper Letter and the Challenge

The immediate trigger for the duel came from a relatively minor slight in the context of their long feud. In February 1804, Dr. Charles Cooper attended a dinner party where Hamilton spoke forcefully against Burr’s candidacy. Cooper later wrote to Philip Schuyler describing Hamilton’s comments, noting that Hamilton had called Burr “a dangerous man” and referenced an even “more despicable opinion” of him. This letter was published in the Albany Register in April, after Burr’s electoral defeat.

When the newspaper reached Burr, he was already politically ruined—still Vice President of the United States, but with no prospects for future office. He demanded that Hamilton acknowledge or deny the statements attributed to him. What followed was a formal exchange of letters between the two men and their representatives that lasted through June. Hamilton refused to give Burr the straightforward denial he sought, explaining that he couldn’t reasonably be expected to account for everything he might have said about a political opponent during fifteen years of competition. Burr, seeing his honor impugned and his options exhausted, invoked the code of honor and issued a formal challenge to duel.

Hamilton found himself in an impossible position. He could not retract opinions he believed to be substantially true without sacrificing his integrity, yet if he stood by them and refused to duel, the result would be much the same—his political career would effectively end. Hamilton had personal and moral objections to dueling. His eldest son Philip had died in a duel less than three years earlier, at the same Weehawken dueling ground where Hamilton and Burr would now meet. Even so, Hamilton calculated that maintaining his political influence required him to conform to the codes of honor that governed gentlemen’s behavior in early America.

Dawn at Weehawken

At 5:00 AM on the morning of July 11, 1804, the men departed Manhattan from separate docks. They were each rowed across the Hudson River to the Heights of Weehawken, New Jersey—a popular dueling ground where at least 18 known duels took place between 1700 and 1845. They chose New Jersey because, although dueling had been outlawed in both New York and New Jersey, the penalties in New Jersey were less severe.

Burr arrived first around 6:30 AM, with Hamilton landing about thirty minutes later. Each man was accompanied by his “second”—an assistant responsible for ensuring the duel followed proper protocols. Hamilton brought Nathaniel Pendleton, a Revolutionary War veteran and former Georgia district court judge, while Burr’s second was William P. Van Ness, a New York lawyer and close Burr ally who would later become a federal judge. Hamilton also brought Dr. David Hosack, a Columbia College professor of medicine and botany, in case medical attention proved necessary.

Shortly after 7 a.m., the seconds measured out ten paces, loaded the .56‑caliber pistols, and explained the firing rules before Hamilton and Burr took their positions. What exactly happened next remains one of history’s enduring mysteries. The seconds gave conflicting accounts, and historians still debate the sequence and meaning of events.

In a written statement before the duel, Hamilton expressed religious and moral objections to dueling, worry for his family and creditors, and professed no personal hatred of Burr, yet concluded that honor and future public usefulness compelled him to accept. By some accounts, Hamilton had also written to confidants indicating his intention to “throw away my shot”—essentially to deliberately miss Burr, satisfying the requirements of honor without attempting to kill his opponent. Burr, by contrast, appears to have aimed directly at Hamilton.

Some accounts suggest Hamilton fired first, with his shot hitting a tree branch above and behind Burr’s head. Other versions claim Burr shot first. There’s even a theory that Hamilton’s pistol had a hair trigger that caused an accidental discharge after Burr wounded him.

What’s undisputed is the outcome: Burr’s shot struck Hamilton in the lower abdomen, with the bullet lodging near his spine. Hamilton fell, and Burr reportedly started toward his fallen opponent before Van Ness held him back, worried about the legal consequences of lingering at the scene. The two parties crossed back to Manhattan in their respective boats, with Hamilton taken to the home of William Bayard Jr. in what is now Greenwich Village.

Hamilton survived long enough to say goodbye to his wife Eliza and their children. He died at 2 PM on July 12, 1804, approximately 31 hours after being shot.

Political Aftershocks

The nation was outraged. While duels were relatively common in early America, they rarely resulted in death, and the killing of someone as prominent as Alexander Hamilton sparked widespread condemnation. The political consequences proved catastrophic for everyone involved—and reshaped American politics for the next two decades.

Hamilton’s death turned him into a Federalist martyr. Even many who had disliked his arrogance now praised his intellect, service, and sacrifice. His economic vision, already embedded in American institutions, gained a kind of posthumous authority.

For Aaron Burr, the duel destroyed him politically and socially. Murder charges were filed against him in both New York and New Jersey, though neither reached trial—a grand jury in Bergen County, New Jersey indicted him for murder in November 1804, but the New Jersey Supreme Court quashed the indictment. Nevertheless, Burr fled to St. Simons Island, Georgia, staying at the plantation of Pierce Butler, before returning to Washington to complete his term as Vice President.

Rather than restoring his reputation as he’d hoped, the duel made Burr a pariah. He would never hold elected office again. His subsequent attempt to regain power through what historians call the “Burr Conspiracy”—an alleged plan to create an independent nation along the Mississippi River by separating territories from the United States and Spain—led to a treason trial in 1807. Chief Justice John Marshall presided and Burr was ultimately acquitted, but the trial further cemented Burr’s reputation as a dangerous schemer. He spent his later years quietly practicing law in New York.

For the Federalist Party, Hamilton’s death proved even more devastating than Burr’s personal ruin. Hamilton had been the party’s intellectual architect and most effective leader. At the time of his death, the Federalists were attempting a comeback after their national defeat in the 1800 election. Without Hamilton’s energy, strategic thinking, and ability to articulate a compelling vision for the country, the Federalists lost direction. As one historian put it, “The Federalists would be unable to find another leader as forceful and energetic as Hamilton had been, and their movement would slowly suffocate before finally petering out in the early 1820s”. The party’s decline ended what historians consider the first round of partisan struggles in American history.

An interesting footnote: while many Federalists wanted to portray Hamilton as a political martyr, Federalist clergy broke with the party line to condemn dueling itself as a violation of the sixth commandment. These ministers used Hamilton’s death as an opportunity to wage a moral crusade against the practice of dueling, helping to accelerate its decline in American culture—particularly in the northern states where it was already losing favor.

The duel produced a triple tragedy: Hamilton dead at age 47 (or 49—his birth year remains disputed), Burr politically destroyed despite being acquitted of murder charges, and the Federalist Party fatally weakened at a critical moment in American political development.

The Hamilton–Burr duel sits at the intersection of politics, personality, and culture. It reminds us that the early republic was not a calm, rational experiment run by marble statues—but a volatile environment shaped by ego, fear, and ambition. Institutions were young, norms were fragile, and reputations were all-important. What began as fifteen years of professional rivalry and personal enmity ended with two brilliant men eliminating each other from the political stage, neither achieving what he had hoped for through their fatal meeting on the heights of Weehawken.

Sources

Encyclopedia Britannica “Burr-Hamilton duel | Summary, Background, & Facts” https://www.britannica.com/event/Burr-Hamilton-duel

History.com “Aaron Burr slays Alexander Hamilton in duel” https://www.history.com/this-day-in-history/july-11/burr-slays-hamilton-in-duel

Library of Congress “Today in History – July 11” https://www.loc.gov/item/today-in-history/july-11

National Constitution Center “The Burr vs. Hamilton duel happened on this day” https://constitutioncenter.org/blog/burr-vs-hamilton-behind-the-ultimate-political-feud

National Park Service “Hamilton-Burr Duel” https://www.nps.gov/articles/000/hamilton-burr-duel.htm

PBS American Experience “Alexander Hamilton and Aaron Burr’s Duel” https://www.pbs.org/wgbh/americanexperience/features/duel-alexander-hamilton-and-aaron-burrs-duel/

The Gospel Coalition “American Prophets: Federalist Clergy’s Response to the Hamilton–Burr Duel of 1804” https://www.thegospelcoalition.org/themelios/article/american-prophets-federalist-clergys-response-to-the-hamilton-burr-duel-of-1804/

Wikipedia “Burr–Hamilton duel” https://en.wikipedia.org/wiki/Burr–Hamilton_duel

World History Encyclopedia “Hamilton-Burr Duel” https://www.worldhistory.org/article/2548/hamilton-burr-duel/

For more information about the history of dueling in early America see my earlier post: Pistols at Dawn, The Rise and Fall of the Code Duello.

Images generated by author using ChatGPT.

The Founding Feuds: When America’s Heroes Couldn’t Stand Each Other

The mythology of the founding fathers often portrays them as a harmonious band of brothers united in noble purpose. The reality was far messier—these brilliant, ambitious men engaged in bitter personal feuds that sometimes threatened the very republic they were creating. In some ways, the American Revolution was as much a battle of egos as it was a war between King and colonists.

The Revolutionary War Years: Hancock, Adams, and Washington’s Critics

The tensions began even before independence was declared. John Hancock and Samuel Adams, both Massachusetts firebrands, developed a rivalry that simmered throughout the Revolution. Adams, the older political strategist, had been the dominant figure in Boston’s resistance movement. When Hancock—wealthy, vain, and eager for glory—was elected president of the Continental Congress in 1775, the austere Adams felt his protégé had grown too big for his britches. Hancock’s request in 1777 for a leave of absence from the presidency of Congress, coupled with his desire for an honorific military escort home, struck Adams as a relapse into vanity. Adams even opposed a resolution of thanks for Hancock’s service, signaling open estrangement. Their relationship continued to deteriorate to the point where they barely spoke, with Adams privately mocking Hancock’s pretensions and Hancock using his position to undercut Adams politically.

The choice of Washington as commander sparked its own controversies. John Adams had nominated Washington, partly to unite the colonies by giving Virginia the top military role. Washington’s command was anything but universally admired, and as the war dragged on with mixed results, many critics emerged.

After the victory at Saratoga in 1777, General Horatio Gates became the focal point of what’s known as the Conway Cabal—a loose conspiracy aimed at having Gates replace Washington as commander-in-chief. General Thomas Conway wrote disparaging letters about Washington’s military abilities. Some members of Congress, including Samuel Adams, Thomas Mifflin, and Richard Henry Lee, questioned whether Washington’s defensive strategy was too cautious and whether his battlefield performance was lacking. Gates himself played a duplicitous game, publicly supporting Washington while privately positioning himself as an alternative.

When Washington discovered the intrigue, his response was characteristically measured but firm. Rather than lobbying Congress or forming a counter-faction, Washington leaned heavily on reputation and restraint. He continued to communicate respectfully with Congress, emphasizing the army’s needs rather than defending his own position. Washington did not respond with denunciations or public accusations. Instead, he handled the situation largely behind the scenes. When he learned of a letter in which Conway had criticized him to Gates, Washington calmly informed Conway that he was aware of it—quoting the offending passage verbatim.

The conspiracy collapsed, in part because Washington’s personal reputation with the rank and file and with key political figures proved more resilient than his critics had anticipated. But the episode exposed deep fractures over strategy, leadership, and regional loyalties within the revolutionary coalition.

The Ideological Split: Hamilton vs. Jefferson and Madison

Perhaps the most consequential feud emerged in the 1790s between Alexander Hamilton and Thomas Jefferson, with James Madison eventually siding with Jefferson. This wasn’t just personal animosity—it represented a fundamental disagreement about America’s future.

Hamilton, Washington’s Treasury Secretary, envisioned an industrialized commercial nation with a strong central government, a national bank, and close ties to Britain. Jefferson, the Secretary of State, championed an agrarian republic of small farmers with minimal federal power and friendship with Revolutionary France. Their cabinet meetings became so contentious that Washington had to mediate. Hamilton accused Jefferson of being a dangerous radical who would destroy public credit. Jefferson called Hamilton a monarchist who wanted to recreate British aristocracy in America.

The conflict got personal. Hamilton leaked damaging information about Jefferson to friendly newspapers. Jefferson secretly funded a journalist, James Callender, to attack Hamilton in print. When Hamilton’s extramarital affair with Maria Reynolds became public in 1797, Jefferson’s allies savored every detail. The feud split the nation into the first political parties: Hamilton’s Federalists and Jefferson’s Democratic-Republicans. Madison, once Hamilton’s ally in promoting the Constitution, switched sides completely, becoming Jefferson’s closest political partner and Hamilton’s implacable foe.

The Adams-Jefferson Friendship, Rivalry, and Reconciliation

John Adams and Thomas Jefferson experienced one of history’s most remarkable personal relationships. They were close friends during the Revolution, working together in Congress and on the committee to draft the Declaration of Independence (though Jefferson did the actual writing). Both served in diplomatic posts in Europe and developed deep mutual respect.

But the election of 1796 turned them into rivals. Adams won the presidency with Jefferson finishing second, making Jefferson vice president under the original constitutional system—imagine your closest competitor becoming your deputy. By the 1800 election, they were bitter enemies. The campaign was vicious, with Jefferson’s supporters calling Adams a “hideous hermaphroditical character” and Adams’s allies claiming Jefferson was an atheist who would destroy Christianity.

Jefferson won in 1800, and the two men didn’t speak for over a decade. Their relationship was so bitter that Adams left Washington early in the morning, before Jefferson’s inauguration. What makes their story extraordinary is the reconciliation. In 1812, mutual friends convinced them to resume correspondence. Their letters over the next fourteen years—158 of them—became one of the great intellectual exchanges in American history, discussing philosophy, politics, and their memories of the Revolution. Both men died on July 4, 1826, the fiftieth anniversary of the Declaration of Independence, with Adams’s last words reportedly being “Thomas Jefferson survives” (though Jefferson had actually died hours earlier).

Franklin vs. Adams: A Clash of Styles

In Paris, the relationship between Benjamin Franklin and John Adams was a tense blend of grudging professional reliance and deep personal irritation, rooted in radically different diplomatic styles and temperaments. Franklin, already a celebrated figure at Versailles, cultivated French support through charm, sociability, and patient maneuvering in salons and at court, a method that infuriated Adams, who equated such “nuances” with evasiveness and preferred direct argument, formal memorandums, and hard-edged ultimatums. Sharing lodgings outside Paris only intensified Adams’s resentment as he watched Franklin rise late, receive endless visitors, and seemingly mix pleasure with business. Adams complained that nothing would ever get done unless he did it himself, while Franklin privately judged Adams “always an honest man, often a wise one, but sometimes and in some things, absolutely out of his senses.” Their French ally, Foreign Minister Vergennes, reinforced the imbalance by insisting on dealing primarily with Franklin and effectively sidelining Adams in formal diplomacy, which deepened Adams’s sense that Franklin was both overindulged by the French and insufficiently assertive on America’s behalf. Yet despite their mutual loss of respect, the two ultimately cooperated—often uneasily—in the peace negotiations with Britain, and both signatures appear on the 1783 Treaty of Paris, a testament to the way personal feud and shared national purpose coexisted within the American diplomatic mission.

Hamilton and Burr: From Political Rivalry to Fatal Duel

The Hamilton-Burr feud ended in the most dramatic way possible: a duel at Weehawken, New Jersey, on July 11, 1804, where Hamilton was mortally wounded and Burr destroyed his own political career.

Their rivalry had been building for years. Both were New York lawyers and politicians, but Hamilton consistently blocked Burr’s ambitions. When Burr ran for governor of New York in 1804, Hamilton campaigned against him with particular venom, calling Burr dangerous and untrustworthy at a dinner party. When Burr read accounts of Hamilton’s remarks in a newspaper, he demanded an apology. Hamilton refused to apologize or deny the comments, leading to the duel challenge.

What made this especially tragic was that Hamilton’s oldest son, Philip, had been killed in a duel three years earlier defending his father’s honor. Hamilton reportedly planned to throw away his shot; whether he fired deliberately into the air or simply missed remains unclear. Burr’s shot struck Hamilton in the abdomen, and he died the next day. Burr was charged with murder in both New York and New Jersey and fled to the South. Though he later returned to complete his term as vice president, his political career was finished.

Adams vs. Hamilton: The Federalist Crack-Up

One of the most destructive feuds happened within the same party. John Adams and Alexander Hamilton were both Federalists, but their relationship became poisonous during Adams’s presidency (1797-1801).

Hamilton, though not in government, tried to control Adams’s cabinet from behind the scenes. When Adams pursued peace negotiations with France (the “Quasi-War” with France was raging), Hamilton wanted war. Adams discovered that several of his cabinet members were more loyal to Hamilton than to him and fired them. In the 1800 election, Hamilton wrote a fifty-four-page pamphlet attacking Adams’s character and fitness for office—extraordinary since they were in the same party. The pamphlet was meant for limited circulation among Federalist leaders, but Jefferson’s allies got hold of it and published it widely, devastating both Adams’s re-election chances and Hamilton’s reputation. The feud helped Jefferson win and essentially destroyed the Federalist Party.

Washington and Jefferson: The Unacknowledged Tension

While Washington and Jefferson never had an open feud, their relationship cooled significantly during Washington’s presidency. Jefferson, as Secretary of State, increasingly opposed the administration’s policies, particularly Hamilton’s financial program. When Washington supported the Jay Treaty with Britain in 1795—which Jefferson saw as a betrayal of France and Republican principles—Jefferson became convinced Washington had fallen under Hamilton’s spell.

Jefferson resigned from the cabinet in 1793, partly from policy disagreements but also from discomfort with what he saw as Washington’s monarchical tendencies (the formal receptions and the ceremonial aspects of the presidency). Washington, in turn, came to view Jefferson as disloyal, especially when he learned Jefferson had been secretly funding attacks on the administration in opposition newspapers and had even put a leading critic on the federal payroll. By the time Washington delivered his Farewell Address in 1796, warning against political parties and foreign entanglements, many saw it as a rebuke of Jefferson’s philosophy. They maintained outward courtesy, but their warm relationship never recovered.

Why These Feuds Mattered

These weren’t just personal squabbles—they shaped American democracy in profound ways. The Hamilton-Jefferson rivalry created our two-party system (despite Washington’s warnings). The Adams-Hamilton split showed that parties could fracture from within. The Adams-Jefferson reconciliation demonstrated that political enemies could find common ground after leaving power.

The founding fathers were human, with all the ambition, pride, jealousy, and pettiness that entails. They fought over power, principles, and personal slights. What’s remarkable isn’t that they agreed on everything—they clearly didn’t—but that despite their bitter divisions, they created a system robust enough to survive their feuds. The Constitution itself, with its checks and balances, almost seems designed to accommodate such disagreements, ensuring that no single person or faction could dominate.

SOURCES

1. National Archives – Founders Online: https://founders.archives.gov

2. Massachusetts Historical Society – Adams-Jefferson Letters: https://www.masshist.org/publications/adams-jefferson

3. Founders Online – Hamilton’s Letter Concerning John Adams: https://founders.archives.gov/documents/Hamilton/01-25-02-0110

4. Gilder Lehrman Institute – Hamilton and Jefferson: https://www.gilderlehrman.org/history-resources/spotlight-primary-source/alexander-hamilton-and-thomas-jefferson

5. National Park Service – The Conway Cabal: https://www.nps.gov/articles/000/the-conway-cabal.htm

6. American Battlefield Trust – Hamilton-Burr Duel: https://www.battlefields.org/learn/articles/hamilton-burr-duel

7. Mount Vernon – Thomas Jefferson: https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/thomas-jefferson

8. Monticello – Thomas Jefferson Encyclopedia: https://www.monticello.org/research-education/thomas-jefferson-encyclopedia

9. Library of Congress – John Adams Papers: https://www.loc.gov/collections/john-adams-papers

10. Joseph Ellis – “Founding Brothers: The Revolutionary Generation”: https://www.pulitzer.org/winners/joseph-j-ellis

Illustration generated by author using ChatGPT.

13 Stars, Betsy Ross and the Story of the American Flag

On a steamy June day in 1777, the Continental Congress took a brief break from the monumental task of running a revolution to deal with something that seems surprisingly simple in retrospect: what should the American flag look like? The resolution they passed on June 14th was refreshingly concise, stating that “the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.”

That poetic phrase about a “new constellation” turned out to be both inspiring and maddeningly vague. Congress didn’t specify how the stars should be arranged, how many points they should have, or even whether the flag should start with a red or white stripe at the top. This ambiguity led to one of the interesting aspects of early American flag history—for decades, no two flags looked exactly alike.

The 1777 resolution came out of Congress’s Marine Committee business, and at least some historians caution that it may have been understood initially as a naval ensign, not a fully standardized “national flag for all uses.”

A Constellation of Designs

The lack of official guidance meant that flag makers exercised considerable artistic freedom. Smithsonian researcher Grace Rogers Cooper found at least 17 different examples of 13-star flags dating from 1779 to around 1796, and flag expert Jeff Bridgman has documented 32 different star arrangements from the era. Some makers arranged the stars in neat rows, others formed them into a single large star, and still others created elaborate patterns that spelled out “U.S.” or formed other symbolic shapes.  An official star pattern would not be specified until 1912 and versions of the 13-star flag remained in ceremonial use until the mid-1800s.

The most famous arrangement, of course, is the Betsy Ross design with its circle of 13 stars. What many people don’t realize is that experts date the earliest known example of this circular pattern to 1792—in a painting by John Trumbull, not on an actual flag from 1776.

Did the Continental Army Actually Use This Flag?

Here’s where things get interesting and a bit murky. The short answer is: not much, and not right away. The Continental Army had been fighting for over two years before Congress even adopted the Stars and Stripes, and by that point, individual regiments had already developed their own distinctive colors and banners. These regimental flags served practical military purposes—they helped units identify each other in the chaos of battle and gave soldiers something to rally around.  Additionally, the Continental Army frequently used the Grand Union Flag (13 stripes with a British Union in the canton), which predates the 13-star design.

What’s more revealing is a series of letters from 1779—two full years after the Flag Resolution—between George Washington and Richard Peters, Secretary of the Board of War. In these letters, Peters is essentially asking Washington what flag he wants the army to use. This correspondence raises an obvious question: if Congress had settled the flag issue in 1777, why was Washington still trying to figure it out in 1779? The evidence suggests that variations of the 13-star flag were primarily used by the Navy in those early years, while the Army continued to use various regimental standards.

Navy Captain John Manley expressed this confusion perfectly when he wrote in 1779 that the United States “had no national colors” and that each ship simply flew whatever flag the captain preferred. Even as late as 1779, the War Board hadn’t settled on a standard design for the Army. When they finally wrote to Washington for his input, they proposed a flag that included a serpent and numbers representing different states—a design that never caught on.

National “stars and stripes” banners did exist during the late war years and appear in some period art and descriptions, but clear, securely dated 13-star Army battle flags are rare and often disputed. Thirteen-star flags are better documented in early federal service, such as maritime and lighthouse use in the 1790s, than in Continental Army field service before 1783.

The Betsy Ross Question

Now we come to one of America’s most enduring flag legends. The story is familiar to most Americans: in 1776, George Washington, Robert Morris, and George Ross visited Philadelphia upholsterer Betsy Ross and asked her to sew the first American flag. She suggested changing the six-pointed stars to five-pointed ones, demonstrated her one-snip technique for making a perfect five-pointed star, and then produced the first Stars and Stripes.

It’s a great story. There’s just one problem: historians have found virtually no documentary evidence to support it. The tale didn’t surface publicly until 1870—nearly a century after the supposed event—when Betsy Ross’s grandson, William Canby, presented it in a speech to the Historical Society of Pennsylvania. Canby relied entirely on family oral history, including affidavits from Ross’s daughter, granddaughter, and other relatives who claimed they had heard Betsy tell the story herself. But Canby himself admitted that his search through official records revealed nothing to corroborate the account.

Historians don’t dispute that Betsy Ross was a real person who did real work. Documentary evidence shows that on May 29, 1777, the Pennsylvania State Navy Board paid her a substantial sum for “making ships colours.” She ran a successful upholstery business and continued making flags for the government for more than 50 years. But as historian Marla Miller puts it, “The flag, like the Revolution it represents, was the work of many hands.” Modern scholars generally view the question not as whether Ross designed the flag—she almost certainly didn’t—but whether she may have been among the many people who produced early flags.

Who Really Designed It?

If not Betsy Ross, then who? The strongest candidate is Francis Hopkinson, the New Jersey delegate to the Continental Congress who also helped design the Great Seal of the United States and early American currency. In 1780, Hopkinson sent Congress a bill requesting payment for his design work, specifically mentioning “the flag of the United States of America.”    He likely designed a flag with the stars arranged in rows rather than circles, and his bills for payment submitted to Congress mentioned six-pointed stars rather than the five-pointed ones that became standard.

 Unfortunately for Hopkinson, Congress refused to pay him, arguing that he wasn’t the only person on the Navy Committee and therefore shouldn’t receive singular credit or compensation.

The irony is rich: Hopkinson was asking for a quarter cask of wine or £2,700 for designing what would become one of the world’s most recognizable symbols.  Congress essentially told him, “Thanks, but we’re not paying.” There’s a lesson about government contracts in there somewhere.

What Survived

Of the hundreds of flags made and carried during the Revolutionary War, only about 30 are known to survive today. These rare artifacts offer fascinating glimpses into how Americans visualized their new nation. The Museum of the American Revolution brought together 17 of these original flags in a 2025 exhibition—the largest gathering of such flags since 1783.

The most significant surviving 13-star flag is probably Washington’s Headquarters Standard, a small blue silk flag measuring about two feet by three feet. It features 13 white, six-pointed stars on a blue field and descended through George Washington’s family with the tradition that it marked the General’s presence on the battlefield throughout the war. Experts consider it the earliest surviving 13-star American flag. Due to light damage, it can only be displayed on special occasions.

Other surviving flags tell different stories. The Brandywine Flag, used at the September 1777 battle of the same name, is one of the earliest stars-and-stripes designs: a red flag with a red and white American flag image in its canton.

 The Dansey Flag, captured from the Delaware militia by a British soldier, was taken to England as a war trophy and remained in his family until 1927. The flag features a green field with 13 alternating red and white stripes in the upper left corner signifying the 13 colonies.

These and other flags weren’t just military equipment—they were powerful symbols that people fought under and, sometimes, died defending.

The Bigger Picture

What makes the story of the 13-star flag so compelling isn’t really about who sewed it or exactly when it first flew. It’s about what the flag represented in an era when the very concept of the United States was still being invented. The June 1777 resolution called for stars forming “a new constellation”—a beautiful metaphor for a new nation finding its place among the powers of the world.

The fact that no two early flags looked exactly alike might seem like a problem from our standardized modern perspective, but it tells us something important about the Revolution itself. Just as the colonies were learning to act as united states while maintaining their individual identities, flag makers across the new nation were interpreting a simple congressional resolution in their own ways, creating variations on a shared theme.

As historian Laurel Thatcher Ulrich points out, there was no “first flag” worth arguing over. The American flag evolved organically, shaped by the practical needs of the Navy, the Army, militias, and civilian flag makers who each contributed to its development. Whether Betsy Ross made one of those early flags or not, her story endures because it captures something Americans want to believe about our origins: that ordinary citizens, working in small shops and homes, helped create the symbols of the new republic.

Sources:

History.com: https://www.history.com/this-day-in-history/june-14/congress-adopts-the-stars-and-stripes

Flags of the World: https://www.crwflags.com/fotw/flags/us-1777.html

Wikipedia Flag of the United States: https://en.wikipedia.org/wiki/Flag_of_the_United_States

Museum of the American Revolution: https://www.amrevmuseum.org/

American Battlefield Trust: https://www.battlefields.org/learn/articles/short-history-united-states-flag

US History (Betsy Ross): https://www.ushistory.org/betsy/

Library of Congress “Today in History”: https://www.loc.gov/item/today-in-history/june-14/

Flag images from Wikimedia Commons

What “Woke” Really Means: A Look at a Loaded Word

Why everyone’s fighting over a word nobody agrees on

Okay, so you’ve probably heard “woke” thrown around about a million times, right? It’s in political debates, online arguments, your uncle’s Facebook rants—basically everywhere. And here’s the weird part: depending on who’s saying it, it either means you’re enlightened or you’re insufferable.

So let’s figure out what’s actually going on with this word.

Where It All Started

Here’s something most people don’t know: “woke” wasn’t invented by social media activists or liberal college students. It goes way back to the 1930s in Black communities, and it meant something straightforward—stay alert to racism and injustice.

The earliest solid example comes from blues musician Lead Belly. In his song “Scottsboro Boys” (about nine Black teenagers falsely accused of rape in Alabama in 1931), he told Black Americans to “stay woke”—basically meaning watch your back, because the system isn’t on your side. This wasn’t abstract philosophy; it was survival advice in the Jim Crow South.

The term hung around in Black culture for decades. It got a boost in 2008 when Erykah Badu used “I stay woke” in her song “Master Teacher,” where it meant something like staying self-aware and questioning the status quo.

But the big explosion happened around 2014 during the Ferguson protests after Michael Brown was killed. Black Lives Matter activists started using “stay woke” to talk about police brutality and systemic racism. It spread through Black Twitter, then got picked up by white progressives showing solidarity with social justice movements. By the late 2010s, it had expanded to cover sexism, LGBTQ+ issues, and pretty much any social inequality you can think of.

And that’s when conservatives started using it as an insult.

The Liberal Take: It’s About Giving a Damn

For progressives, “woke” still carries that original vibe of awareness. According to a 2023 Ipsos poll, 56% of Americans (and 78% of Democrats) said “woke” means “to be informed, educated, and aware of social injustices.”

From this angle, being woke just means you’re paying attention to how race, gender, sexuality, and class affect people’s lives—and you think we should try to make things fairer. It’s not about shaming people; it’s about understanding the experiences of others.

Liberals see it as continuing the work of the civil rights movement—expanding who we empathize with and include. That might mean supporting diversity programs, using inclusive language, or rethinking how we teach history. To them, it’s just what thoughtful people do in a diverse society.

Here’s the Progressive Argument in a Nutshell

The term literally started as self-defense. Progressives argue the problems are real. Being “woke” is about recognizing that bias, inequality, and discrimination still exist. The data back some of this up—there are documented disparities in policing, sentencing, healthcare, and economic opportunity across racial lines. From this view, pointing these things out isn’t being oversensitive; it’s just stating facts.

They also point out that conservatives weaponized the term. They took a word from Black communities about awareness and justice and turned it into an all-purpose insult for anything they don’t like about the left. Some activists call this a “racial dog whistle”—a way to attack justice movements without being explicitly racist.

The concept naturally expanded from racial justice to other inequalities—sexism, LGBTQ+ discrimination, other forms of unfairness. Supporters see this as logical: if you care about one group being treated badly, why wouldn’t you care about others?

And here’s their final point: what’s the alternative? When you dismiss “wokeness,” you’re often dismissing the underlying concerns. Denying that racism still affects American life can become just another way to ignore real problems.

Bottom line from the liberal side: being “woke” means you’ve opened your eyes to how society works differently for different people, and you think we can do better.

The Conservative Take: It’s About Going Too Far

Conservatives see it completely differently. To them, “woke” isn’t about awareness—it’s about excess and control.

They see “wokeness” as an ideology that forces moral conformity and punishes anyone who disagrees. What started as social awareness has turned into censorship and moral bullying. When a professor loses their job over an unpopular opinion or comedy shows get edited for “offensive” jokes, conservatives point and say: “See? This is exactly what we’re talking about.”  To them, “woke” is just the new version of “politically correct”—except worse. It’s intolerance dressed up as virtue.

Here’s the conservative argument in a nutshell:

Wokeness has moved way beyond awareness into something harmful. They argue it creates a “victimhood culture” in which status and benefits come from claiming you’re oppressed rather than from merit or hard work. Instead of fixing injustice, they say it perpetuates it by elevating people based on identity rather than achievement.

They see it as “an intolerant and moralizing ideology” that threatens free speech. In their view, woke culture only allows viewpoints that align with progressive ideology and “cancels” dissenters or labels them “white supremacists.”

Many conservatives deny that structural racism or widespread discrimination still exists in modern America. They attribute unequal outcomes to factors other than bias. They believe America is fundamentally a great country and reject the idea that there is systemic racism or that capitalism can sometimes be unjust.

They also see real harm in certain progressive positions—like the idea that gender is principally a social construct or that children should self-determine their gender. They view these as threats to traditional values and biological reality.

Ultimately, conservatives argue that wokeness is about gaining power through moral intimidation rather than correcting injustice. In their view, the people rejecting wokeness are the real critical thinkers.

The Heart of the Clash

Here’s what makes this so messy: both sides genuinely believe they’re defending what’s right.

Liberals think “woke” means justice and empathy. Conservatives think it means judgment and control. The exact same thing—a company ad featuring diverse families, a school curriculum change, a social movement—can look like progress to one person and propaganda to another.

One person’s enlightenment is literally another person’s indoctrination.

The Word Nobody Wants Anymore

Here’s the ironic part: almost nobody calls themselves “woke” anymore. Like “politically correct” before it, the word has gotten so loaded that it’s frequently used as an insult—even by people who agree with the underlying ideas. The term has been stretched to cover everything from racial awareness to climate activism to gender identity debates, and the more it’s used, the less anyone knows what it truly means.

Recently though, some progressives have started reclaiming the term—you’re beginning to see “WOKE” on protest signs now.

So, Who’s Right?

Maybe both. Maybe neither.

If “woke” means staying aware of injustice and treating people fairly, that’s good. If it means acting morally superior and shutting down disagreement, that’s not. The truth is probably somewhere in the messy middle.

This whole debate tells us more about America than about the word itself. We’ve always struggled with how to balance freedom with fairness, justice with tolerance. “Woke” is just the latest word we’re using to have that same old argument.

The Bottom Line

Whether you love it or hate it, “woke” isn’t going anywhere soon. It captures our national struggle to figure out what awareness and fairness should look like today.

And honestly? Maybe we’d all be better off spending less time arguing about the word and more time talking about the actual values behind it—what’s fair, what’s free speech, what kind of society do we want?

Being “woke” originally meant recognizing systemic prejudices—racial injustice, discrimination, and social inequities many still experience daily. But the term’s become a cultural flashpoint.  Here’s the thing: real progress requires acknowledging both perspectives exist and finding common ground. It’s not about who’s “right”—it’s about building bridges.

 If being truly woke means staying alert to injustice while remaining open to dialogue with those who see things differently, seeking solutions that work for everyone, caring for others, being empathetic and charitable, then call me WOKE.

Supply-Side Economics and Trickle-Down: What Actually Happened?

The Basic Question

You’ve probably heard politicians arguing about tax cuts—some promising they’ll supercharge the economy, others dismissing them as giveaways to the rich. These debates usually involve two terms that get thrown around like political footballs: “supply-side economics” and “trickle-down economics.” But what do these terms actually mean, and more importantly, do they work? After four decades of real-world experiments, we finally have enough data to answer that question.

Understanding Supply-Side Economics

Supply-side economics is a legitimate economic theory that emerged in the 1970s when the U.S. economy was struggling with both high inflation and high unemployment—a combination that traditional economic theories said shouldn’t happen. The core idea is straightforward: economic growth comes from producing more goods and services (the “supply” side), not just from boosting consumer demand.

The theory rests on three main pillars. First, lower taxes—the thinking is that if people and businesses keep more of their money, they’ll work harder, invest more, and create jobs. According to economist Arthur Laffer’s famous curve, there’s supposedly a sweet spot where lower tax rates can actually generate more government revenue because the economy grows so much. Second, less regulation removes government restrictions so businesses can innovate and operate more efficiently. Third, smart monetary policy keeps inflation in check while maintaining enough money in the economy to fuel growth.

All of this sounds reasonable in theory. After all, who wouldn’t work harder if they kept more of their paycheck?
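To make the Laffer-curve claim concrete, here is a minimal toy sketch in Python. Every number in it is invented for illustration: the linear shrinkage of the tax base is a hypothetical behavioral response, and where a real economy's peak sits (if one exists at practical rates) is exactly what economists argue about.

```python
# Toy Laffer curve: purely illustrative, with made-up numbers, not a model of any real economy.

def revenue(rate: float) -> float:
    """Hypothetical revenue = rate * tax base, where the base shrinks as the rate rises."""
    base = 100 * (1 - rate)   # assumed behavioral response: higher rates discourage taxable activity
    return rate * base

# Scan rates from 0% to 100% and find where this toy curve peaks.
rates = [r / 100 for r in range(101)]
peak = max(rates, key=revenue)
print(f"Toy revenue peaks at a {peak:.0%} rate, raising {revenue(peak):.1f} units")
print(f"Revenue at 0%: {revenue(0):.1f}; at 100%: {revenue(1.0):.1f}")
# Both extremes raise nothing; the 50% peak is an artifact of the assumed linear base, not a real-world claim.
```

The only point of the sketch is the shape: zero revenue at both extremes with a maximum somewhere in between, which is the intuition supply-siders lean on when they predict that cutting rates can raise revenue.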

The Political Rebranding: Enter “Trickle-Down”

Here’s where economic theory meets political messaging. “Trickle-down economics” isn’t an academic term—it’s essentially a catchphrase, and not a complimentary one. Critics use it to describe supply-side policies when those policies mainly benefit wealthy people and corporations. The idea behind the name: give tax breaks to rich people and big companies, and the benefits will eventually “trickle down” to everyone else through job creation, higher wages, and economic growth.

Here’s the interesting part: no economist actually calls their theory “trickle-down economics.” Even David Stockman, President Reagan’s own budget director, later admitted that “supply-side” was basically a rebranding of “trickle-down” to make tax cuts for the wealthy easier to sell politically. So while they’re not identical concepts, they’re two sides of the same coin.

The Reagan Revolution: Testing the Theory

Ronald Reagan became president in 1981 and implemented the biggest supply-side experiment in U.S. history. He slashed the top tax rate from 70% down to 50%, and eventually to just 28%, arguing this would unleash economic growth that would lift all boats.

The results were genuinely mixed. On one hand, the economy created about 20 million jobs during Reagan’s presidency, unemployment fell from 7.6% to 5.5%, and the economy grew by 26% over eight years. Those aren’t small achievements.

But the picture gets more complicated when you look deeper. The tax cuts didn’t pay for themselves as promised—they reduced government revenue by about 9% initially. Reagan had to backtrack and raise taxes multiple times in 1982, 1983, 1984, and 1987 to address the mounting deficit problem. Income inequality increased significantly during this period, and surprisingly, the poverty rate at the end of Reagan’s term was essentially the same as when he started. Perhaps most telling, government debt more than doubled as a percentage of the economy.

There’s another wrinkle worth mentioning: much of the economic recovery happened because Federal Reserve Chairman Paul Volcker had already broken the back of inflation through tight monetary policy before Reagan’s tax cuts took effect. Disentangling how much credit Reagan’s policies deserve versus Volcker’s groundwork is genuinely difficult.

The Pattern Repeats

The story didn’t end with Reagan. George W. Bush enacted major tax cuts in 2001 and 2003, especially benefiting wealthy Americans. The result? Economic growth remained sluggish, deficits ballooned, and income inequality continued its upward march.

Then there’s Bill Clinton—the plot twist in this story. In 1993, Clinton actually raised taxes on the wealthy, pushing the top rate from 31% back up to 39.6%. Conservative economists predicted economic disaster. Instead, the economy boomed with what was then the longest sustained growth period in U.S. history, creating 22.7 million jobs. Even more remarkably, the government ran a budget surplus for the first time in decades.

Donald Trump’s 2017 tax cuts, focused heavily on corporations, showed minimal wage growth for workers while generating significant stock buybacks that primarily benefited shareholders—and yes, larger deficits. Trump’s subsequent economic policies in his second term have been characterized by such volatility that reasonable long-term assessments remain difficult.

The Kansas Experiment: A Modern Test Case

At the state level, Kansas Governor Sam Brownback implemented one of the boldest modern experiments in supply-side policy between 2012 and 2017, dramatically slashing income taxes especially for businesses. Proponents called it a “real live experiment” that would demonstrate supply-side principles in action.

Instead of unleashing growth, Kansas faced severe budget shortfalls that forced cuts to education and infrastructure. Economic growth actually lagged behind neighboring states that didn’t implement such aggressive cuts, and the state legislature eventually reversed many of the tax reductions. This case has become a frequently cited cautionary tale for critics of supply-side policies.

What Does Half a Century of Data Show?

After 50 years of real-world experiments, researchers finally have enough data to move beyond political rhetoric. A comprehensive study analyzed tax policy changes across 18 developed countries over five decades, looking at what actually happened after major tax cuts for the wealthy.

The findings are remarkably consistent. Tax cuts for the rich reliably increase income inequality—no surprise there. But they show no significant effect on overall economic growth rates and no significant effect on unemployment. Perhaps most damaging to the theory, they don’t “pay for themselves” through increased growth. At best, about one-third of lost revenue gets recovered through expanded economic activity.

In simpler terms: when you cut taxes for wealthy people, wealthy people get wealthier. The promised broader benefits largely fail to materialize. The 2022 World Inequality Report reinforced these conclusions, finding that the world’s richest 10% continue capturing the vast majority of all economic gains, while the bottom half of the population holds just 2% of all wealth.

Why the Theory Doesn’t Match Reality

When you think about it logically, the disconnect makes sense. If you give a tax cut to someone who’s already wealthy, they’ll probably save or invest most of it—they were already buying what they wanted and needed. Their daily spending habits don’t change much. But if you give money to someone who’s struggling to pay bills or afford necessities, they’ll spend it immediately, directly stimulating economic activity.

Economists call this concept “marginal propensity to consume,” and it explains why giving tax breaks to working and middle-class people actually does more to boost the economy than supply-side cuts focused on the wealthy. A dollar in the hands of someone who needs to spend it has more immediate economic impact than a dollar added to an already-substantial investment portfolio.
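As a back-of-the-envelope illustration of that point, here is the simplest textbook version of the spending-multiplier arithmetic. The MPC values below are invented, and the model deliberately ignores taxes, imports, and anything about saving behavior beyond the single MPC number.

```python
# Textbook spending-multiplier arithmetic behind "marginal propensity to consume" (MPC).
# Illustrative only: MPC values are invented and the model ignores taxes, imports, and timing.

def total_spending(initial_dollars: float, mpc: float, rounds: int = 1000) -> float:
    """Sum the re-spending rounds: initial * (1 + mpc + mpc^2 + ...) ~= initial / (1 - mpc)."""
    return sum(initial_dollars * mpc**n for n in range(rounds))

# Hypothetical households: one spends 90 cents of each extra dollar, the other 30 cents.
for label, mpc in [("high-MPC household", 0.90), ("low-MPC household", 0.30)]:
    print(f"$100 to a {label} generates about ${total_spending(100, mpc):,.0f} of total spending")
# Closed form: multiplier = 1 / (1 - MPC), so roughly 10x at an MPC of 0.9 versus about 1.4x at 0.3.
```

In this stripped-down view, the same dollar of tax relief does more immediate work the further down the income ladder it lands, which is the core of the critique of trickle-down targeting.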

The Bottom Line

After 40-plus years of repeated experiments, the pattern is clear. Supply-side policies and trickle-down approaches consistently increase deficits, widen inequality, and fail to significantly boost overall economic growth or create more jobs than alternative policies. Meanwhile, periods with higher taxes on the wealthy, like the Clinton years, saw strong growth, robust job creation, and balanced budgets.

The Nuance Worth Keeping

None of this means all tax cuts are bad or that high taxes are always good—economics is rarely that simple. The critical questions are: who receives the tax cuts, and what outcomes do you realistically expect? Targeted tax cuts for working families, small businesses, or specific industries facing genuine challenges can serve as effective policy tools. Child tax credits, research and development incentives, or relief for struggling sectors might accomplish specific goals.

But the evidence accumulated over four decades is clear: broad tax cuts focused primarily on the wealthy and large corporations don’t deliver the promised economic benefits for everyone else. The benefits don’t trickle down in any meaningful way.

You’ll keep hearing these arguments for years to come. Politicians will continue promising that tax cuts for businesses and the wealthy will boost the entire economy. Now you know what the actual evidence shows, and you can judge those promises accordingly.



The Sugar Act of 1764: The Tax Cut That Sparked a Revolution

Imagine a time when people rose up in protest because a tax was being lowered. Welcome to the world of the Sugar Act.

The Sugar Act of 1764 stands as one of the most ironic moments in the history of taxation. Here Britain was actually lowering a tax, and yet colonists reacted with a fury that would help spark a revolution. To understand this paradox, we have to see that the new act represented something far more threatening than any previous attempt by Britain to regulate its American colonies.

The Old System: Benign Neglect

For decades before 1764, Britain had maintained what historians call “salutary neglect” toward its American colonies. The Molasses Act of 1733 had imposed a steep duty of six pence per gallon on foreign molasses imported into the colonies. On paper, this seemed like a significant burden for the rum-distilling industry, which depended heavily on cheap molasses from French and Spanish Caribbean islands. In practice, though, the tax was rarely collected. Colonial merchants either bribed customs officials or simply smuggled the molasses past them. The British government essentially looked the other way, and everyone profited.

This informal arrangement worked because Britain’s primary interest in the colonies was commercial, not fiscal. The Navigation Acts required colonists to ship certain goods only to Britain and to buy manufactured goods from British merchants, which enriched British traders and manufacturers without requiring aggressive tax collection in America. As long as this system funneled wealth toward London, Parliament didn’t care much about collecting relatively small customs duties across the Atlantic.

Everything Changed in 1763

The Seven Years’ War (which Americans call the French and Indian War) changed this comfortable arrangement entirely. Britain won decisively, driving France out of North America and gaining vast new territories. But victory came with a staggering price tag. Britain’s national debt had nearly doubled to £130 million, and annual interest payments alone consumed half the government’s budget. Meanwhile, Britain now needed to maintain 10,000 troops in North America to defend its expanded empire and manage relations with Native American tribes.

Prime Minister George Grenville faced a political problem. British taxpayers, already heavily burdened, were in no mood for additional taxes. The logic seemed obvious: since the colonies had benefited from the war’s outcome and still required military protection, they should help pay for their own defense. Americans paid far lower taxes than their counterparts in Britain—by some estimates, British residents paid 26 times more per capita in taxes than colonists did.

What the Act Actually Did

The Sugar Act (officially the American Revenue Act of 1764) approached colonial taxation differently than anything before it. First, it cut the duty on foreign molasses from six pence to three pence per gallon—a 50% reduction. Grenville calculated, reasonably, that merchants might actually pay a three-pence duty rather than risk getting caught smuggling, whereas the six-pence duty had been so high it encouraged universal evasion.

But the Act did far more than adjust molasses duties. It added or increased duties on foreign textiles, coffee, indigo, and wine imported into the colonies. It tightened regulations around the colonial lumber trade and banned the import of foreign rum entirely. Most significantly, the Act included elaborate provisions designed to strictly enforce these duties for the first time.

The enforcement mechanisms represented the real revolution in British policy. Ship captains now had to post bonds before loading cargo and had to maintain detailed written cargo lists. Naval patrols increased dramatically. Smugglers faced having their ships and cargo seized.

Significantly, the burden of proof was shifted to the accused, who were required to prove their innocence, a reversal of traditional British justice. Most controversially, accused smugglers would be tried in vice-admiralty courts, which had no juries and whose judges received a cut of any fines levied.

The Paradox of the Lower Tax

So why did colonists react so angrily to a tax cut? The answer reveals the fundamental shift in the British-American relationship that the Sugar Act represented.

First, the issue wasn’t the tax rate. It was the certainty of collection. A six-pence tax that no one paid was infinitely preferable to a three-pence tax rigorously enforced. New England’s rum distilling industry, which employed thousands of distillery workers and sailors, depended on cheap molasses from the French West Indies. Even at three pence per gallon, the tax significantly increased operating costs. Many merchants calculated they couldn’t remain profitable if they had to pay it.

Second, and more importantly, colonists recognized that the Act’s purpose marked a change in the imperial relationship itself. Previous trade regulations, even if they involved taxes, were ostensibly about regulating commerce within the empire. The Sugar Act openly stated its purpose was raising revenue—the preamble declared it was “just and necessary that a revenue be raised” in America. This might seem like a technical distinction, but to colonists it mattered enormously. British constitutional theory held that subjects could only be taxed by their own elected representatives. Colonists elected representatives to their own assemblies but sent no representatives to Parliament. Trade regulations fell under Parliament’s legitimate authority to govern imperial commerce, but taxation for revenue was something else entirely.

Third, the enforcement mechanisms offended colonial sensibilities about justice and traditional British rights. The vice-admiralty courts denied jury trials, which colonists viewed as a fundamental right of British subjects. Having to prove your innocence rather than being presumed innocent violated another core principle. Customs officials and judges profiting from convictions created obvious incentives for abuse.

Implementation and Colonial Response

The Act took effect in September 1764, and Grenville paired it with an aggressive enforcement campaign. The Royal Navy assigned 27 ships to patrol American waters. Britain appointed new customs officials and instructed them to actually do their jobs rather than accept bribes. The vice-admiralty court in Halifax, Nova Scotia, became particularly notorious. Colonists had to travel hundreds of miles to defend themselves in a court with no jury and before a judge whose income came from convictions.

Colonists responded immediately. Boston merchants drafted a protest arguing that the act would devastate their trade. They explained that New England’s economy depended on a complex triangular trade: they sold lumber and food to the Caribbean in exchange for molasses, which they distilled into rum, which they sold to Africa for slaves, who were sold to Caribbean plantations for molasses, and the cycle repeated. Taxing molasses would break this chain and impoverish the region.

But the economic arguments quickly evolved into constitutional ones. Lawyer James Otis argued that “taxation without representation is tyranny”—a phrase that would echo through the coming decade. Colonial assemblies began passing resolutions asserting their exclusive right to tax their own constituents. They didn’t deny Parliament’s authority to regulate trade, but they drew a clear line: revenue taxation required representation.

The protests went beyond rhetoric. Colonial merchants organized boycotts of British manufactured goods. Women’s groups pledged to wear homespun cloth rather than buy British textiles. These boycotts caused enough economic pain in Britain that London merchants began lobbying Parliament for relief.

The Road to Revolution

The Sugar Act’s significance extends far beyond its immediate economic impact. It established precedents and patterns that would define the next decade of imperial crisis.

Most fundamentally, it shattered the comfortable arrangement of salutary neglect. Once Britain demonstrated it intended to actively govern and tax the colonies, the relationship could never return to its previous informality. The colonists’ constitutional objections—no taxation without representation, right to jury trials, presumption of innocence—would be repeated with increasing urgency as Parliament passed the Stamp Act (1765), Townshend Acts (1767), and Tea Act (1773).

The Sugar Act also revealed the practical difficulties of governing an empire across 3,000 miles of ocean. The vice-admiralty courts became symbols of distant, unaccountable power. When colonists couldn’t get satisfaction through established legal channels, they increasingly turned to extralegal methods, including committees of correspondence, non-importation agreements, and eventually armed resistance.

Perhaps most importantly, the Sugar Act forced colonists to articulate a political theory that ultimately proved incompatible with continued membership in the British Empire. Once they agreed to the principle that they could only be taxed by their own elected representatives, and that Parliament’s authority over them was limited to trade regulation, the logic led inexorably toward independence. Britain couldn’t accept colonial assemblies as co-equal governing bodies since Parliament claimed supreme authority over all British subjects. The colonists couldn’t accept taxation without representation since they claimed the rights of freeborn Englishmen. These positions couldn’t be reconciled.

The Sugar Act of 1764 represents the point where the British Empire’s century-long success in North America began to unravel. By trying to make the colonies pay a modest share of imperial costs through what seemed like reasonable means, Britain inadvertently set in motion forces that would break the empire apart just twelve years later.

Sources

Mount Vernon Digital Encyclopedia – Sugar Act https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/sugar-act/ Provides overview of the Act’s provisions, economic context, and relationship to British debt from the Seven Years’ War. Includes information on tax burden comparisons between Britain and the colonies.

Britannica – Sugar Act https://www.britannica.com/event/Sugar-Act Covers the specific provisions of the Act, enforcement mechanisms, vice-admiralty courts, and the shift from the Molasses Act of 1733. Useful for technical details of the legislation.

History.com – Sugar Act https://www.history.com/topics/american-revolution/sugar-act Discusses colonial constitutional objections, the “taxation without representation” argument, and the enforcement provisions including burden of proof reversal and jury trial denial.

American Battlefield Trust – Sugar Act https://www.battlefields.org/learn/articles/sugar-act Details colonial response including boycotts, James Otis’s arguments, and the triangular trade system that the Act disrupted.

Additional Recommended Sources

Library of Congress – The Sugar Act https://www.loc.gov/collections/continental-congress-and-constitutional-convention-broadsides/articles-and-essays/continental-congress-broadsides/broadsides-related-to-the-sugar-act/ Primary source collection including contemporary colonial broadsides and protests against the Act.

National Archives – The Sugar Act (Primary Source Text) https://founders.archives.gov/about/Sugar-Act The actual text of the American Revenue Act of 1764, useful for verifying specific provisions and language.

Yale Law School – Avalon Project: Resolutions of the Continental Congress (October 19, 1765) https://avalon.law.yale.edu/18th_century/resolu65.asp Colonial responses to the Sugar and Stamp Acts, showing how the arguments evolved.

Massachusetts Historical Society – James Otis’s Rights of the British Colonies Asserted and Proved (1764) https://www.masshist.org/digitalhistory/revolution/taxation-without-representation Primary source for the “taxation without representation” argument that emerged from Sugar Act opposition.

Colonial Williamsburg Foundation – Sugar Act of 1764 https://www.colonialwilliamsburg.org/learn/deep-dives/sugar-act-1764/ Discusses economic impact on colonial merchants and the rum distilling industry.

Slavery and the Constitutional Convention: The Compromise That Shaped a Nation

When fifty-five delegates gathered in Philadelphia during the sweltering summer of 1787, they faced a challenge that would haunt American politics for the next eight decades. The question wasn’t whether slavery was morally right—many delegates privately acknowledged its evil—but whether a unified nation could exist with slavery as a part of it. That summer, the institution of slavery nearly killed the Constitution before it was born.

The Battle Lines

The convention revealed a stark divide. On one side stood delegates who spoke forcefully against slavery, though they represented a minority voice. Gouverneur Morris of Pennsylvania delivered some of the most scathing condemnations, calling slavery a “nefarious institution” and “the curse of heaven on the states where it prevailed.” According to James Madison’s notes, Morris argued passionately that counting enslaved people for representation would mean that someone “who goes to the Coast of Africa, and in defiance of the most sacred laws of humanity tears away his fellow creatures from their dearest connections & damns them to the most cruel bondages, shall have more votes in a Government instituted for protection of the rights of mankind.”

Luther Martin of Maryland, himself a slaveholder, joined Morris in opposition. He declared the slave trade “inconsistent with the principles of the revolution and dishonorable to the American character.” Even George Mason of Virginia, who owned over 200 enslaved people, denounced slavery at the convention, warning that “every master of slaves is born a petty tyrant” and that it would bring “the judgment of heaven on a country.”

The Southern Coalition

Facing these critics stood delegates from the Deep South—primarily South Carolina and Georgia—who made it abundantly clear that protecting slavery was non-negotiable. The South Carolina delegation was particularly unified and aggressive in defending the institution. All four of their delegates—John Rutledge, Charles Pinckney, Charles Cotesworth Pinckney, and Pierce Butler—owned slaves, and they spoke with one voice.

Charles Cotesworth Pinckney stated bluntly: “South Carolina and Georgia cannot do without slaves.” John Rutledge framed it even more starkly: “The true question at present is, whether the Southern States shall or shall not be parties to the Union.” The message was unmistakable—attempt to restrict slavery, and there would be no Constitution and perhaps no United States.

The Southern states didn’t just defend slavery; they threatened to walk out repeatedly. When debates over the slave trade heated up on August 22, delegates from North Carolina, South Carolina, and Georgia stated they would “never be such fools as to give up” their right to import enslaved Africans.  These weren’t idle threats—they were credible enough to force compromise.

The Three-Fifths Compromise

The central flashpoint came over representation in Congress. The new Constitution would base representation on population, but should enslaved people count? Southern states wanted every enslaved person counted fully, which would dramatically increase their congressional power. Northern states argued that enslaved people—who had no rights and couldn’t vote—shouldn’t count at all.

The three-fifths ratio had actually been debated before. Back in 1783, Congress had considered using it to calculate state tax obligations under the Articles of Confederation, though that proposal failed. James Wilson of Pennsylvania resurrected the idea at the Constitutional Convention, suggesting that representation be based on the free population plus three-fifths of “all other persons”—the euphemism they used to avoid writing the word “slave” in the Constitution.

The compromise passed eight states to two. New Jersey and Delaware are generally identified as the states voting against it; New Hampshire is not recorded as taking part in the vote. Rhode Island did not send a delegation to the convention, and by the time of the vote New York no longer had a functioning delegation.

Though the South ultimately accepted the compromise, it wasn’t what they wanted. Southern delegates had pushed to count enslaved people equally with free persons for purposes of representation, even while denying them every other right. The three-fifths ratio was a reduction from their demands—a limitation on slave-state power, though it still gave them a substantial advantage. With about 93% of the nation’s enslaved population concentrated in just five southern states, this compromise increased the South’s congressional delegation by 42%.

James Madison later recognized the compromise’s significance. He wrote after the convention: “It seems now to be pretty well understood that the real difference of interests lies not between the large and small but between the northern and southern states. The institution of slavery and its consequences form the line of discrimination.”

Could the Constitution Have Happened Without It?

Here’s where I need to speculate, but I’m fairly confident in this assessment: no, the Constitution would not have been ratified without the three-fifths compromise and related concessions on slavery.

The evidence is overwhelming. South Carolina and Georgia delegates stated explicitly and repeatedly that they would not join any union that restricted slavery. Alexander Hamilton himself later acknowledged that “no union could possibly have been formed” without the three-fifths compromise. Even delegates who despised slavery, like Roger Sherman of Connecticut, argued it was “better to let the Southern States import slaves than to part with them.”

The convention negotiated three major slavery compromises, all linked. Beyond the three-fifths clause, they agreed Congress couldn’t ban the international slave trade until 1808, and they included the Fugitive Slave Clause requiring the return of escaped enslaved people even from free states. These deals were struck together on August 29, 1787, in what Madison’s notes reveal was a package negotiation between northern and southern delegates.

Without these compromises, the convention would likely have collapsed. The alternative wouldn’t have been a better Constitution—it would have been no Constitution at all, potentially leaving the thirteen states as separate nations or weak confederations. Whether that would have been preferable is a profound counterfactual question that historians still debate.

The Impact on Early American Politics

The three-fifths compromise didn’t just affect one document—it shaped American politics for decades. Its effects were immediate and substantial.

The most famous early example came in the presidential election of 1800. Thomas Jefferson defeated John Adams in what’s often called the “Revolution of 1800”—the first peaceful transfer of power between opposing political parties. But Jefferson’s victory owed directly to the three-fifths compromise. Virginia’s enslaved population gave the state extra electoral votes that proved decisive. Historian Garry Wills has speculated that without these additional slave-state votes, Jefferson would have lost. Pennsylvania had a free population 10% larger than Virginia’s, yet received 20% fewer electoral votes because Virginia’s numbers were inflated by the compromise.

The impact extended far beyond that single election. Research shows the three-fifths clause changed the outcome of over 55% of legislative votes in the Sixth Congress (1799-1801). The additional southern representatives—about 18 more than their free population warranted—gave the South what became known as the “Slave Power” in Congress.

This power influenced major legislation throughout the antebellum period. The Indian Removal Act of 1830, which forcibly relocated Native Americans to open land for plantation agriculture, passed because of margins provided by these extra southern representatives. The Missouri Compromise, the Kansas-Nebraska Act, and numerous other slavery-related measures bore the fingerprints of this constitutional imbalance.

The compromise also affected Supreme Court appointments and federal patronage. Southern-dominated Congresses ensured pro-slavery justices and policies that protected the institution. The sectional tensions it created led directly to later compromises—the Missouri Compromise of 1820, the Compromise of 1850—each one a temporary bandage on a wound that wouldn’t heal.

By the 1850s, the artificial political power granted to slave states had become intolerable to many northerners. When Abraham Lincoln won the presidency in 1860 without carrying a single southern state, southern political leaders recognized they had lost control of the federal government. Senator Louis Wigfall of Texas complained that non-slaveholding states now controlled Congress and the Electoral College. Eleven southern states seceded, in large part because they believed the three-fifths compromise no longer protected their interests.

The Bitter Legacy

The framers consciously avoided using the words “slave” or “slavery” in the Constitution, recognizing it would “sully the document.” But the euphemisms fooled no one. They had built slavery into the structure of American government, trading moral principles for political union.

The Civil War finally resolved what the Constitutional Convention had delayed. The Thirteenth Amendment abolished slavery in 1865, but not until 1868 did the Fourteenth Amendment finally strike the three-fifths clause from the Constitution, requiring that representation be based on counting the “whole number of persons” in each state.

Was it worth it? That’s ultimately a question of values. The Constitution created a stronger national government that eventually abolished slavery, but it took 78 years and a war that killed over 600,000 Americans. As Thurgood Marshall noted on the Constitution’s bicentennial, the framers “consented to a document which laid a foundation for the tragic events which were to follow.”

The convention delegates knew what they were doing. They chose union over justice, pragmatism over principle. Whether that choice was necessary, wise, or moral remains one of the most contested questions in American history.

____________________________________________________

Sources

  1. https://www.battlefields.org/learn/articles/slavery-and-constitution
  2. https://en.wikipedia.org/wiki/Luther_Martin
  3. https://schistorynewsletter.substack.com/p/7-october-2024
  4. https://www.americanacorner.com/blog/constitutional-convention-slavery
  5. https://www.nps.gov/articles/000/constitutionalconvention-august22.htm
  6. https://en.wikipedia.org/wiki/Three-fifths_Compromise
  7. https://www.brennancenter.org/our-work/analysis-opinion/electoral-colleges-racist-origins
  8. https://www.gilderlehrman.org/history-resources/teaching-resource/historical-context-constitution-and-slavery
  9. https://www.nps.gov/articles/000/constitutionalconvention-august29.htm
  10. https://www.lwv.org/blog/three-fifths-compromise-and-electoral-college
  11. https://www.aaihs.org/a-compact-for-the-good-of-america-slavery-and-the-three-fifths-compromise-part-ii/

Life Below Deck: Enlisted Sailors in America’s Continental Navy

When the Continental Congress established America’s first navy in October 1775, they faced a daunting challenge: how do you build a fleet from scratch when you’re fighting the world’s most powerful naval force? The Continental Navy peaked at around 3,000 men serving on approximately 30 ships, a tiny force compared to Britain’s massive Royal Navy. But who were these sailors who were willing to risk their lives for a fledgling republic?

Where They Came From

The colonial maritime community had extensive seafaring experience, as much of British trade was carried in American vessels, and North Americans made up a significant portion of the Royal Navy’s seamen. Continental Navy sailors came primarily from port cities along the Atlantic coast, particularly New England communities where maritime trades were a way of life. Many had worked as merchant sailors, fishermen, or privateers before joining.

The naval service was notably diverse for its time, including native-born Americans, British deserters, free and enslaved Black sailors, and European immigrants. Unlike the Continental Army, which had periods of banning Black soldiers or sometimes placing them in segregated regiments, the Continental Navy was mostly integrated. At sea, there was less distinction between free and enslaved sailors, and those held in bondage had opportunities to work toward freedom. This maritime tradition of relative equality distinguished naval service from other Revolutionary War experiences.

Getting Into the Service

Recruiting sailors proved to be one of the Continental Navy’s biggest headaches. Navy boards supervised appointing petty officers and enlisting seamen, though these duties were chiefly performed by ship commanders or recruiting agents. The first Marine recruiting station was located at Tun Tavern, a bar in Philadelphia.

Enlistment was generally voluntary, though the line between volunteering and impressment—forced service—was sometimes blurred. Recruiting parties would scour port towns seeking able-bodied men, advertising not only pay but also the possibility of capturing British prizes for sale, with proceeds shared among the crew—a powerful incentive.

The problem was competition. Privateering—private ships licensed by Congress to seize enemy vessels—was far more attractive to sailors because cruises were shorter and pay could be better. With over 2,000 privateers operating during the war, the Continental Navy struggled constantly to maintain adequate crew sizes. Continental captains often found themselves unable to man their ships due to privateers’ superior inducements.

Landsmen, Seamen, and Petty Officers

At the bottom rung of a Navy crew stood the landsman—a recruit with little or no sea experience. Many were farm boys or tradesmen who had never set foot on a ship. Their days were filled with the hardest labor: hauling ropes, scrubbing decks, and learning basic seamanship.

Above them were ordinary seamen, who had some experience afloat, and the more skilled able seamen who knew their way around sails, rigging, and naval gunnery. These sailors formed the backbone of the Continental Navy. Sailors skilled in managing the ship’s rigging were said to “know the ropes.” Without their knowledge of wind, tide, and timber, ships would have been little more than floating platforms.

The most experienced enlisted men were promoted to petty officers. These weren’t commissioned officers but rather specialists and leaders—boatswain’s mates directing rigging crews, gunner’s mates overseeing cannon fire, and carpenters’ mates keeping the wooden hulls afloat. They were the Navy’s “non-commissioned officers,” long before the U.S. Navy had a formal NCO corps.

Most Continental Navy ships also carried detachments of Continental Marines. These enlisted men were soldiers at sea, tasked with keeping order on deck, manning small arms in combat, and leading boarding parties.

What They Wore

Unlike officers, who had prescribed uniforms, enlisted sailors received no standard clothing from the Continental Navy. Due to meager funds and lack of manufacturing capacity, sailors generally provided their own clothing, usually consisting of pantaloons often tied at the knee or knee breeches, a jumper or shirt, a neckerchief, a short-waisted jacket, and a low-crowned hat. Most sailors went barefoot, and a kerchief was worn either as a sweat band or as a simple collar closure. The short trousers served a practical purpose—they didn’t interfere with climbing the ship’s rigging. This lack of uniforms reflected the Continental Navy’s financial struggles, where everything from ships to ammunition took priority over standardized clothing.

Daily Life at Sea

Shipboard duties for enlisted sailors were grueling and dangerous. Landsmen cleaned the deck, helped raise or lower the anchor, worked in the galley, and assisted other crew members. More experienced sailors handled the complex work of managing sails, operating guns during combat, standing watch, and maintaining the vessel. Specialized roles were filled by experienced hands, and most sailors worked long shifts in harsh conditions, often enduring crowded, wet, and unsanitary quarters below deck.

Living conditions were cramped. Sailors lived in close quarters with limited privacy, shared hammocks on the lower decks, and endured monotonous food rations. Meals were simple, based on salted meat, ship’s biscuit, and whatever could be supplemented from local ports or captured prizes. Leisure was rare, and recreation was often limited to singing, storytelling, or gambling. The work was physically demanding and accidents were common—falling from rigging, being crushed by shifting cargo, or drowning were constant risks.

Discipline and Relations with Officers

Discipline in the Continental Navy was deeply influenced by the British Royal Navy and the “ancient common law of the sea.” The Continental Congress issued articles governing naval discipline, empowering officers to maintain strict order and punish infractions including drunkenness, blasphemy, theft, or disobedience. Punishments included wearing a wooden collar, spending time in irons, receiving pay deductions, confinement on bread and water, or, for serious offenses, flogging.

Flogging was often done with a multi-thonged whip known as the cat o’ nine tails. The most common flogging consisted of between 12 and 24 lashes, though mutineers might receive sentences in the hundreds of lashes—often becoming a death sentence.

Even though officers held absolute authority aboard their vessels, the Continental Navy sometimes suffered from severe discipline problems. Some commanders found it impossible to maintain control over squadrons made up of crews recruited from one area and commanded by officers from another. The relationship between officers and enlisted men reflected the social hierarchies of the time, with a clear divide between the educated officer class and working-class sailors. However, the shared dangers of combat and the sea could create bonds that transcended these divisions.

A Brief but Important Legacy

Enlisted sailors of the Continental Navy came from diverse and often hardscrabble backgrounds, shaped by the hard labor and hazards of maritime life. These men, whose names are mostly lost to history, formed the foundation of America’s first navy and contributed profoundly—through sacrifice and service—to the establishment of American independence.

Of approximately 65 vessels that served in the Continental Navy, only 11 survived the war, and by 1785 Congress had disbanded the Navy and sold the remaining ships. Despite its short existence and limited impact on the war’s outcome, the sailors of the Continental Navy created a foundation for American naval tradition and provided trained seamen who would serve in future conflicts.


Personal note: The Grumpy Doc proudly served as an enlisted sailor in the U.S. Navy from 1967 to 1974.

Study The Past If You Would Define The Future—Confucius

I particularly like this quotation. It is similar to the more modern version: Those who don’t learn from the past are doomed to repeat it. However, I much prefer the former because it seems to be more in the form of advice or instruction. The latter seems to be more of a dire warning. Though I suspect, given the current state of the world, a dire warning is in order.

But regardless of whether it comes in the form of advice or warning, people today do not seem to heed the importance of studying the past. The knowledge of history in our country is woeful. The lack of emphasis on teaching history in general, and American history in particular, is shameful. While it is tempting to blame this on a lack of interest among the younger generation, I find people my own age also have little appreciation of the events that shaped our nation, the world, and their lives. Without this understanding, how can we evaluate what is currently happening and understand what we must do to come together as a nation and as a world?

I have always found history to be a fascinating subject. Biographies and nonfiction historical books remain among my favorite reading. In college I always added one or two history courses every semester to raise my grade point average. Even in college I found it strange that many of my friends hated history courses and took only the minimum. At the time, I didn’t realize just how serious this lack of historical perspective was to become.

Several years ago I became aware of just how little historical knowledge most people had. At the time Jay Leno was still doing his late-night show and he had a segment called Jaywalking. During the segment he would ask people in the street questions that were somewhat esoteric and to which he could expect to get unusual and generally humorous answers. On one show, on the 4th of July, he asked people “From what country did the United States declare independence on the 4th of July?” and of course no one knew the answer.

My first thought was that he must have gone through dozens of people to find the four or five who did not know the answer to his question. The next day at work, the 5th of July, I decided to ask several people, all of whom were college graduates, the same question. I got not one single correct answer, although one person at least realized, “I think I should know this.” When I told my wife, a retired teacher, she wasn’t surprised. For a long time, she had been concerned about the lack of emphasis on social studies and the arts in school curriculums. I was becoming seriously concerned about the direction of education in our country.

A lot of people are probably thinking, “So what, who really cares what a bunch of dead people did 200 years ago?” But if we don’t know what they did and why they did it, how can we understand its relevance today? We have no way to judge what actions may support the best interests of society and what might ultimately be detrimental.

Failure to learn from and understand the past results in a me-centric view of everything. If you fail to understand how things have developed, then you certainly cannot understand what the best course is to go forward. Attempting to judge all people and events of the past through your own personal prejudices leads only to continued and worsening conflict.

If you study the past, you will see that there has never been general agreement on anything. There were many disagreements, debates, and even a civil war over differences of opinion. It helps us to understand that there are no perfect people who always do everything the right way and at the right time. It helps us to appreciate the good that people do while understanding the human weaknesses that led to the things we consider faults today. In other words, we cannot expect anyone to be a 100% perfect person. They may have accomplished many good and meaningful things, and those accomplishments should not be discarded because the person was also a human being with human flaws.

Understanding the past does not mean approving of everything that occurred, but neither does it mean condemning everything that does not fit twenty-first-century mores. Only by recognizing this, and by seeing what led to the disasters of the past, can we hope to avoid repeating the worst aspects of our history. History teaches lessons in compromise, involvement, and understanding. Failure to recognize that leads to strident argument and an unwillingness to cooperate with those who may differ in even the slightest way. Rather than creating the hoped-for perfect society, it simply leads to a new set of problems and a new group of grievances.

In sum, failure to study history is a failure to prepare for the future. We owe it to ourselves and future generations to understand where we came from and how we can best prepare our country and the world for them. They deserve nothing less than a full understanding of the past and a rational way forward. 

This was my first post after I started my blog in 2021.  I believe it is even more relevant now.

The Freemasons and the Founding Fathers: Secret Society or Just a Really Good Book Club?

You’ve probably heard the whispers—the Freemasons secretly controlled the American Revolution, George Washington wore a special apron, and there’s a hidden pyramid on the dollar bill. It’s the kind of thing that sounds like it came straight from a Nicolas Cage movie. But like most historical legends, the real story is more interesting (and less conspiratorial) than the mythology.

So, what’s the actual deal with Freemasons and America’s founding? Let’s dig in.

What Even Is Freemasonry?

First things first: Freemasonry started out as actual stonemasons’ guilds back in medieval Europe—think guys who built cathedrals sharing trade secrets. But by the early 1700s, it had transformed into something completely different: a philosophical club where educated men gathered to discuss big ideas about morality, reason, and how to be better humans.

The secrecy? That was part of the appeal. Lodges had rituals and passwords, sure, but the core values weren’t exactly hidden. Freemasons were all about Enlightenment thinking—liberty, equality, the pursuit of knowledge. Basically, the kind of stuff that gets you excited if you’re the type who actually enjoys reading philosophy books.

In colonial America, joining a Masonic lodge was a bit like joining an elite networking group today, except instead of swapping business cards, you discussed natural rights and wore fancy aprons. Lawyers, merchants, printers—the educated professional class—flocked to lodges for both the intellectual stimulation and the social connections.

The Founding Fathers: Who Was Actually In?

Let’s separate fact from fiction when it comes to which founders were card-carrying Masons.

Definitely Masons:

George Washington became a Master Mason at 21 in 1753. He wasn’t the most active member—he didn’t attend meetings constantly—but he took it seriously enough to wear his Masonic apron when he laid the cornerstone of the U.S. Capitol in 1793. That’s a pretty public endorsement.

Benjamin Franklin was perhaps the most dedicated Mason among the founders. Initiated in 1731, he eventually became Grand Master of Pennsylvania’s Grand Lodge and helped establish lodges in France during his diplomatic stint. Franklin was basically the poster child for Enlightenment Masonry.

Paul Revere—yes, that Paul Revere—was Grand Master of Massachusetts. His midnight ride gets all the attention, but his Masonic connections were just as important to his Revolutionary activities.

John Hancock also served as Grand Master of Massachusetts. His oversized signature on the Declaration was matched by his outsized commitment to Masonic ideals.

John Marshall, the Chief Justice who shaped American constitutional law, was a dedicated Mason. So was James Monroe, the fifth president.

Here’s a fun stat: of the 56 signers of the Declaration of Independence, at least nine (about 16%) were Masons. Among the 39 who signed the Constitution, roughly thirteen (33%) belonged to the fraternity.

The Maybes:

Thomas Jefferson? Probably not a Mason, despite endless conspiracy theories. There’s no solid evidence of membership, though his Enlightenment philosophy certainly sounded Masonic. His buddy the Marquis de Lafayette was definitely in, which hasn’t helped dispel the rumors.

Alexander Hamilton? The evidence is murky. Some historians think his writings hint at Masonic sympathies, but there’s no membership record.

Definitely Not:

John Adams wasn’t a Mason and was actually skeptical of secret societies. He still believed in many of the same principles, though—virtue, republican government, that sort of thing.

Did the Masons Really Influence the Revolution?

Here’s where it gets interesting. No, the Freemasons didn’t sit around a lodge plotting revolution like some shadowy cabal. But did their ideas and networks matter? Absolutely.

Think about what Masonic lodges provided: a space where educated colonists could meet, discuss radical ideas about natural rights and self-governance, and build trust across colonial boundaries—all without British officials breathing down their necks. These lodges brought together men from different colonies, different religious backgrounds (Anglicans, Quakers, Deists), and different social classes.

The radical part? Inside a lodge, everyone met “on the level.” It didn’t matter if you were born rich or poor—merit and virtue determined your standing. That’s pretty revolutionary thinking in the 1700s when most of the world still believed some people were just born better than others. Sound familiar? “All men are created equal” has a similar ring to it.

Freemasonry also championed religious tolerance. You had to believe in some kind of Supreme Being, but that was it—no specific creed required. This ecumenical approach directly influenced the founders’ commitment to religious freedom and separation of church and state.

The Masonic motto about moving “from darkness to light” through knowledge wasn’t just ritualistic mumbo-jumbo. It reflected genuine Enlightenment belief in reason and progress—the same intellectual current that powered revolutionary thinking.

What About All That Symbolism?

Okay, let’s address the pyramid and the all-seeing eye on the dollar bill. Are they Masonic? Maybe, maybe not. The Great Seal of the United States definitely uses imagery that Masons also used—but so did lots of 18th-century groups drawing on Enlightenment and classical symbolism. The connection is debated among historians.

What’s undeniable is that Masonic culture emphasized architecture and building as metaphors for constructing a just society. When Washington laid that Capitol cornerstone in his Masonic apron, he was making a statement about building something enduring and meaningful.

The “Conspiracy” Question

Let’s be clear: there was no Masonic conspiracy to create America. The fraternity wasn’t even unified—lodges operated independently, and members included both patriots and loyalists. Officially, Masonic organizations tried to stay neutral during the Revolution, though obviously that didn’t work out perfectly when the war split families and communities.

What is true is that many of the Revolution’s most articulate, influential leaders happened to be Masons. And the fraternity’s values—liberty, equality, reason, fraternity—aligned perfectly with revolutionary ideology. Correlation, not conspiracy.

After the Revolution, Freemasonry exploded in popularity. It became associated with the Enlightenment values that had supposedly won the day. Future presidents including Andrew Jackson, James Polk, and Theodore Roosevelt were all Masons. At its 19th-century peak, an estimated one in five American men belonged to a lodge.

What’s the Bottom Line?

The Freemason influence on America’s founding is real, but it’s cultural rather than conspiratorial. The lodges provided a space where Enlightenment ideas could circulate, where colonial leaders could build networks of trust, and where egalitarian principles could be practiced in miniature.

Washington, Franklin, Hancock, and the others weren’t sitting in smoke-filled rooms with secret handshakes planning to overthrow the British crown. They were part of a broader philosophical movement that valued personal improvement, moral virtue, and human rights. The Masonic lodge was one venue—among many—where those ideas took root.

Freemasonry was one tributary feeding into the river of revolutionary thought, along with classical republicanism, British common law, various religious traditions, and plain old grievances about taxes and representation.

The real story is somehow simpler and more fascinating than the conspiracy theories: a bunch of educated colonists joined a fraternity that encouraged them to think big thoughts about human nature and just governance. Those thoughts, debated in lodges and taverns and town halls, eventually sparked a revolution.

Not because of secret symbols or mysterious rituals, but because ideas about liberty and equality—once you start taking them seriously—are genuinely revolutionary.

True confession—The Grumpy Doc is not now, nor has he ever been, a Mason.

