
On a steamy June day in 1777, the Continental Congress took a brief break from the monumental task of running a revolution to deal with something that seems surprisingly simple in retrospect: what should the American flag look like? The resolution they passed on June 14th was refreshingly concise, stating that “the flag of the United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation.”
That poetic phrase about a “new constellation” turned out to be both inspiring and maddeningly vague. Congress didn’t specify how the stars should be arranged, how many points they should have, or even whether the flag should start with a red or white stripe at the top. This ambiguity led to one of the interesting aspects of early American flag history—for decades, no two flags looked exactly alike.
The 1777 resolution came out of Congress’s Marine Committee business, and at least some historians caution that it may have been understood initially as a naval ensign, not a fully standardized “national flag for all uses.”
A Constellation of Designs
The lack of official guidance meant that flag makers exercised considerable artistic freedom. Smithsonian researcher Grace Rogers Cooper found at least 17 different examples of 13-star flags dating from 1779 to around 1796, and flag expert Jeff Bridgman has documented 32 different star arrangements from the era. Some makers arranged the stars in neat rows, others formed them into a single large star, and still others created elaborate patterns that spelled out “U.S.” or formed other symbolic shapes. An official star pattern would not be specified until 1912, and versions of the 13-star flag remained in ceremonial use until the mid-1800s.
The most famous arrangement, of course, is the Betsy Ross design with its circle of 13 stars. What many people don’t realize is that experts date the earliest known example of this circular pattern to 1792—in a painting by John Trumbull, not on an actual flag from 1776.
Did the Continental Army Actually Use This Flag?
Here’s where things get interesting and a bit murky. The short answer is: not much, and not right away. The Continental Army had been fighting for over two years before Congress even adopted the Stars and Stripes, and by that point, individual regiments had already developed their own distinctive colors and banners. These regimental flags served practical military purposes—they helped units identify each other in the chaos of battle and gave soldiers something to rally around. Additionally, the Continental Army frequently used the Grand Union Flag (13 stripes with a British Union in the canton), which predates the 13-star design.

What’s more revealing is a series of letters from 1779—two full years after the Flag Resolution—between George Washington and Richard Peters, Secretary of the Board of War. In these letters, Peters is essentially asking Washington what flag he wants the army to use. This correspondence raises an obvious question: if Congress had settled the flag issue in 1777, why was Washington still trying to figure it out in 1779? The evidence suggests that variations of the 13-star flag were primarily used by the Navy in those early years, while the Army continued to use various regimental standards.
Navy Captain John Manley expressed this confusion perfectly when he wrote in 1779 that the United States “had no national colors” and that each ship simply flew whatever flag the captain preferred. Even as late as 1779, the War Board hadn’t settled on a standard design for the Army. When they finally wrote to Washington for his input, they proposed a flag that included a serpent and numbers representing different states—a design that never caught on.
National “stars and stripes” banners did exist during the late war years and appear in some period art and descriptions, but clear, securely dated 13-star Army battle flags are rare and often disputed. Thirteen-star flags are better documented in early federal service, such as maritime and lighthouse use in the 1790s, than they are in Continental Army field service before 1783.
The Betsy Ross Question
Now we come to one of America’s most enduring flag legends. The story is familiar to most Americans: in 1776, George Washington, Robert Morris, and George Ross visited Philadelphia upholsterer Betsy Ross and asked her to sew the first American flag. She suggested changing the six-pointed stars to five-pointed ones, demonstrated her one-snip technique for making a perfect five-pointed star, and then produced the first Stars and Stripes.
It’s a great story. There’s just one problem: historians have found virtually no documentary evidence to support it. The tale didn’t surface publicly until 1870—nearly a century after the supposed event—when Betsy Ross’s grandson, William Canby, presented it in a speech to the Historical Society of Pennsylvania. Canby relied entirely on family oral history, including affidavits from Ross’s daughter, granddaughter, and other relatives who claimed they had heard Betsy tell the story herself. But Canby himself admitted that his search through official records revealed nothing to corroborate the account.
Historians don’t dispute that Betsy Ross was a real person who did real work. Documentary evidence shows that on May 29, 1777, the Pennsylvania State Navy Board paid her a substantial sum for “making ships colours.” She ran a successful upholstery business and continued making flags for the government for more than 50 years. But as historian Marla Miller puts it, “The flag, like the Revolution it represents, was the work of many hands.” Modern scholars generally view the question not as whether Ross designed the flag—she almost certainly didn’t—but whether she may have been among the many people who produced early flags.
Who Really Designed It?
If not Betsy Ross, then who? The strongest candidate is Francis Hopkinson, the New Jersey delegate to the Continental Congress who also helped design the Great Seal of the United States and early American currency. In 1780, Hopkinson sent Congress a bill requesting payment for his design work, specifically mentioning “the flag of the United States of America.” He likely designed a flag with the stars arranged in rows rather than circles, and his bills for payment submitted to Congress mentioned six-pointed stars rather than the five-pointed ones that became standard.
Unfortunately for Hopkinson, Congress refused to pay him, arguing that he wasn’t the only person on the Navy Committee and therefore shouldn’t receive singular credit or compensation.
The irony is rich: Hopkinson was asking for a quarter cask of wine or £2,700 for designing what would become one of the world’s most recognizable symbols. Congress essentially told him, “Thanks, but we’re not paying.” There’s a lesson about government contracts in there somewhere.
What Survived
Of the hundreds of flags made and carried during the Revolutionary War, only about 30 are known to survive today. These rare artifacts offer fascinating glimpses into how Americans visualized their new nation. The Museum of the American Revolution brought together 17 of these original flags in a 2025 exhibition—the largest gathering of such flags since 1783.
The most significant surviving 13-star flag is probably Washington’s Headquarters Standard, a small blue silk flag measuring about two feet by three feet. It features 13 white, six-pointed stars on a blue field and descended through George Washington’s family with the tradition that it marked the General’s presence on the battlefield throughout the war. Experts consider it the earliest surviving 13-star American flag. Due to light damage, it can only be displayed on special occasions.

Other surviving flags tell different stories. The Brandywine Flag, used at the September 1777 battle of the same name, is one of the earliest stars and stripes: its field is red, with a red and white American flag image in the canton.

The Dansey Flag, captured from the Delaware militia by a British soldier, was taken to England as a war trophy and remained in the soldier’s family until 1927. The flag features a green field with 13 alternating red and white stripes in the upper left corner, signifying the 13 colonies.

These and other flags weren’t just military equipment—they were powerful symbols that people fought under and, sometimes, died defending.
The Bigger Picture
What makes the story of the 13-star flag so compelling isn’t really about who sewed it or exactly when it first flew. It’s about what the flag represented in an era when the very concept of the United States was still being invented. The June 1777 resolution called for stars forming “a new constellation”—a beautiful metaphor for a new nation finding its place among the powers of the world.
The fact that no two early flags looked exactly alike might seem like a problem from our standardized modern perspective, but it tells us something important about the Revolution itself. Just as the colonies were learning to act as united states while maintaining their individual identities, flag makers across the new nation were interpreting a simple congressional resolution in their own ways, creating variations on a shared theme.
As historian Laurel Thatcher Ulrich points out, there was no “first flag” worth arguing over. The American flag evolved organically, shaped by the practical needs of the Navy, the Army, militias, and civilian flag makers who each contributed to its development. Whether Betsy Ross made one of those early flags or not, her story endures because it captures something Americans want to believe about our origins: that ordinary citizens, working in small shops and homes, helped create the symbols of the new republic.
Sources:
History.com: https://www.history.com/this-day-in-history/june-14/congress-adopts-the-stars-and-stripes
Flags of the World: https://www.crwflags.com/fotw/flags/us-1777.html
Wikipedia Flag of the United States: https://en.wikipedia.org/wiki/Flag_of_the_United_States
Museum of the American Revolution: https://www.amrevmuseum.org/
American Battlefield Trust: https://www.battlefields.org/learn/articles/short-history-united-states-flag
US History (Betsy Ross): https://www.ushistory.org/betsy/
Library of Congress “Today in History”: https://www.loc.gov/item/today-in-history/june-14/
Flag images from Wikimedia Commons









Truth at a Crossroads: How Trust, Identity, and Information Shape What We Believe
By John Turley
On February 2, 2026
In Commentary
When Oxford Dictionaries declared “post-truth” its word of the year in 2016, it crystallized something many people had been feeling: that we’d entered a strange new era where objective facts seemed less influential in shaping public opinion than appeals to emotion and personal belief. The term exploded in usage that year, becoming shorthand for a troubling shift in how we process information. But have we really entered uncharted territory, or is this just the latest chapter in a very old story?
The short answer is: it’s complicated. The phenomenon itself isn’t new, but the scale and speed at which misinformation spreads certainly is. We are in a new world where the boundary between truth and untruth is blurred, institutions that once arbitrated facts are losing authority, and politics are running on “truthiness” and spectacle more than evidence.
The Psychology of Believing What We Want to Believe
To understand why people increasingly seem to choose sources over facts, we need to dive into how our minds actually work. People now seem to routinely sort themselves into information camps, each with its own “truth,” trusted voices, and shared worldview. But why is this happening, and why does it seem to be getting worse?
Psychologists have spent decades studying something called confirmation bias—essentially, the tendency to seek out information that supports our existing beliefs while avoiding or dismissing information that contradicts them. This isn’t just about being stubborn. Research shows we actively sample more information from sources that align with what we already believe, and the higher our confidence in our initial beliefs, the more biased our information gathering becomes.
But there’s something even more powerful at play called motivated reasoning. While confirmation bias is about seeking information that confirms our beliefs, motivated reasoning is about protecting ideological beliefs by selectively crediting or discrediting facts to fit our identity-defining group’s position. In other words, we don’t just want to be right—we want to belong.
This matters because humans are fundamentally tribal creatures. When we form attachments to groups like political parties or ideological movements, we develop strong motivations to advance the group’s relative status and experience emotions like pride, shame, and anger on behalf of the group. Information processing becomes less about truth-seeking and more about identity protection.
Why Source Trumps Fact
So why do people trust a source they identify with over objective facts that contradict their worldview? Research points to several interconnected reasons.
First, there’s the practical matter of cognitive shortcuts. We’re bombarded with information daily, and people judge the reliability of evidence by using mental shortcuts called heuristics, such as how readily a particular idea comes to mind. If someone we trust says something, that’s an easier mental pathway than laboriously fact-checking every claim. This reliance becomes problematic when “trusted” means ideologically comfortable rather than factually reliable.
Analysts of the post-truth phenomenon also highlight declining trust in traditional “truth tellers” such as mainstream media, scientific institutions, and government agencies. As these institutions lose authority, counter-elites or influencers can present alternative narratives that followers treat as at least as plausible as established facts.
Second, and more importantly, is the issue of identity. When individuals engage in identity-protective thinking, their processing of information more likely guides them to positions that are congruent with their membership in ideologically or culturally defined groups than to ones that reflect the best available scientific evidence. Being wrong about a fact might sting for a moment, but being cast out of your social group could have real consequences for your emotional support, social standing, and sense of self.
Third, there’s a feedback loop at work. In social media, confirmation bias is amplified by filter bubbles and algorithmic editing, which display to individuals only information they’re likely to agree with while excluding opposing views. The more we’re exposed only to sources that confirm our beliefs, the more alien and untrustworthy contradictory information appears.
Interestingly, being smarter doesn’t necessarily protect you from these biases. Some research suggests that people who are adept at using effortful, analytical modes of information processing may actually be even better at fitting their beliefs to their group identities, using their intelligence to construct more sophisticated justifications for what they already want to believe.
The Historical Echo Chamber
Despite the way it feels, this isn’t the first time truth has had competition. History is full of eras when myth, rumor, propaganda, and identity overshadowed facts.
During the Reformation of the 1500s, misinformation was spread on both sides of the Catholic-Protestant divide. Pamphlets—many of them highly distorted or outright fabricated—spread rapidly thanks to the printing press. Propaganda became a political weapon. Ordinary people suddenly had access to arguments they weren’t equipped to verify. People were ostracized and some even executed based on little more than rumors or lies. We might have hoped for better from religious leaders.
The French Revolution (1780s–1790s) was awash in claims and counterclaims, many of which—if not most—had little basis in fact. Competing newspapers told wildly different stories about the same events. Rumors fueled paranoia, purges, and violence. Truth became secondary to whichever faction controlled the narrative.
Following the Civil War and Reconstruction, the “Lost Cause” narrative became a powerful example of source-driven myth making. Despite historical evidence, generations accepted a version of events shaped by postwar Southern elites, not by facts. Echoes of it still reverberate today, driving much of the opposition to the civil rights movement.
Fast forward to the 1890s, and we see something remarkably familiar. Yellow journalism, characterized by sensationalism and manipulated facts, emerged from the circulation war between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal. These papers used exaggerated and misleading headlines, unverified claims, faked interviews, and pseudoscience to boost sales.
As early as 1898, a publication for the newspaper industry wrote that “the public is becoming heartily sick of fake news and fake extras”—sound familiar?
In the 20th century, propaganda states, typified by both fascist and communist regimes, perfected source-based truth. The leader or the party defined reality, and disagreement was literally dangerous. In these systems, truth wasn’t debated—it was assigned.
What Makes Now Different?
While the psychological mechanisms and even the tactics aren’t new, several factors make our current moment distinct. The speed and scale of information spread is unprecedented. A false claim can circle the globe in hours. Studies show that people are bombarded by fake information online, blurring the distinction between fact and fiction as blogs, social media, and citizen journalism are granted credibility similar to or greater than that of other information sources.
We’re also experiencing a fragmentation of trusted authorities. Where once a handful of major newspapers and broadcast networks served as gatekeepers, now the fragmentation of centralized mass media gatekeepers has fundamentally altered information seeking, including ways of knowing, shared authorities, and trust in institutions.
So Are We in a Post-Truth Era?
Yes and no. The term “post-truth” captures something real about our current moment—the scale, speed, and sophistication of misinformation are unprecedented. But calling it “post-truth” suggests we’ve crossed some bright line into entirely new territory. I’d argue we’re not quite there—but we are navigating a world where truth is sometimes lost in the collision of ancient human tendencies and modern technology.
The data clearly show that confirmation bias, motivated reasoning, and identity-protective cognition are real and powerful forces. Historical evidence demonstrates that propaganda, misinformation, and the choice of tribal loyalty over objective fact have been with us for millennia. What’s changed is our information ecosystem, driven by technology that allows false information to spread faster than ever and by the fragmentation of the shared sources of authority that once helped create common ground.
Perhaps a better framing would be that we’re in an era of “turbo-charged tribal epistemology”—where our very human tendency to trust our tribe’s narrative over contradicting evidence has been supercharged by algorithms that feed us what we want to hear and isolate us from alternative perspectives. (I wish I could take credit for the term turbo-charged tribal epistemology. I really like it, but I read it somewhere, I just can’t remember where.)
The question isn’t really whether we’re in a post-truth society. The question is whether we can develop the individual and collective skills to navigate an information environment that exploits every cognitive bias we have. The environment has changed, but the task remains the same: finding ways to establish shared facts despite our deep-seated tendency to believe what we want to believe.
Sources: