
You’ve probably spent years learning how to pay attention—reading closely, analyzing deeply, and thinking critically. But here’s something nobody taught you in school: in today’s digital world, knowing what not to pay attention to might be just as important as knowing what deserves your focus.
That’s the essence of critical ignoring, a concept developed by researchers Anastasia Kozyreva, Sam Wineburg, Stephan Lewandowsky, and Ralph Hertwig. It’s basically the skill of deliberately and strategically choosing what information to ignore so you can invest your limited attention where it truly matters. I first became aware of this concept just a few weeks ago while reading an article by Christopher Mims in the Wall Street Journal.
Why This Matters Now
Think about your typical day online. You’re bombarded with news alerts, social media posts, clickbait headlines, and outrage-inducing content designed specifically to hijack your attention. Traditional advice tells you to carefully evaluate each source, read critically, and fact-check thoroughly. But here’s the problem: if you’re investing serious mental energy evaluating sources that should have been ignored in the first place, your attention has already been stolen.
The researchers make a crucial observation about how the digital world has changed the game. In the past, information was scarce and we had to seek it out. Now we’re drowning in it, and much of it is deliberately designed to be attention-grabbing through tactics like sparking curiosity, outrage, or anger. Our attention has become the scarce resource that advertisers and content providers are constantly trying to seize and exploit.
Critical ignoring is not sticking your head in the sand or refusing to hear anything that challenges you. Apathy is “I don’t care about any of this.” Critical ignoring is “I care enough to be selective, so that I can focus on what truly matters.” Denial is “I refuse to believe or even look at uncomfortable evidence.” Critical ignoring is “I’m not going to invest my time in sources that are clearly unreliable, or in discussions that are going nowhere, so I can better examine serious evidence elsewhere.”
The key distinction is that critical ignoring always serves better judgment, not comfort at any cost.
How To Actually Do It
The researchers outline three practical strategies you can use right away:
Self-Nudging: This is about redesigning your digital environment to remove temptations before they become problems. Think of it as changing your information ecosystem. Instead of relying on willpower alone, you might unsubscribe from inflammatory newsletters, turn off news notifications that stress you out, or use browser extensions to block certain websites during work hours. The idea is to design your environment so you can implement the resolutions you’ve made.
Lateral Reading: This one’s particularly clever. Instead of reading a website from top to bottom the way you’ve always done, professional fact-checkers open another browser tab and quickly research who’s behind the source. That way, you spend sixty seconds searching for information about the source rather than twenty minutes carefully reading content from an outlet that turns out to be backed by a lobbying group or a known misinformation peddler. The researchers note this is often faster and more effective than trying to critically evaluate the content itself.
Don’t Feed the Trolls: This strategy advises you not to reward malicious actors with your attention. When you encounter inflammatory comments, deliberately misleading posts, or content clearly designed to provoke anger, the best response is often no response at all. Engaging with trolls or bad-faith content just amplifies it and wastes your mental energy.
I’ll Add Another
Ignore the Influencers: Refuse to click on miracle‑cure headlines or anecdote‑driven threads when you can go directly to professional medical sources, systematic reviews, or guidelines from reliable sources. Ignore influencers’ health claims unless they clearly cite solid evidence and expertise.
The Bigger Picture
What makes critical ignoring different from just being selective is that it’s strategic and informed. To know what to ignore, you need to understand the landscape first. It’s not about burying your head in the sand—it’s about being intentional with your attention budget.
The traditional approach of “pay careful attention to everything” made sense in a world of vetted textbooks and curated libraries. But on the unvetted internet, that approach often ends up being a colossal waste of time and energy. The admonition to “pay careful attention” is exactly what attention thieves exploit.
Making It Work For You
Start by taking inventory of your information landscape—all the apps, websites, notifications, and sources competing for your attention. Which ones consistently deliver value? Which ones leave you feeling manipulated, angry, or stressed? Practice self-nudging by removing or limiting access to the latter category.
When you encounter a new source making bold claims, resist the urge to dive deep into their content immediately. Instead, spend a minute or two doing lateral reading. Search for “who runs [site name]” or “[organization name] funding.” You’ll be amazed how quickly you can identify whether something deserves your time.
And when you see obvious rage-bait or trolling, practice the “scroll on by” technique. Your attention is valuable—don’t give it away for free to people trying to manipulate you.
Critical ignoring isn’t about being less informed. It’s about being better informed by focusing your limited cognitive resources on reliable sources and meaningful content rather than letting the algorithm’s latest outrage-of-the-day consume your mental bandwidth.
Sources:
Kozyreva, A., Wineburg, S., Lewandowsky, S., & Hertwig, R. (2023). Critical Ignoring as a Core Competence for Digital Citizens. Current Directions in Psychological Science, 32(1), 81-88. https://journals.sagepub.com/doi/10.1177/09637214221121570
∙ Full text also available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7615324/
∙ Interview with lead researcher: https://www.mpg.de/19554217/new-digital-competencies-critical-ignoring
Mims, Christopher. “Your Key Survival Skill for 2026: Critical Ignoring.” The Wall Street Journal, January 3, 2026.
American Psychological Association. https://www.apa.org/news/podcasts/speaking-of-psychology/attention-spans
Lane, S., & Atchley, P. (2020). “Human Capacity in the Attention Economy.” American Psychological Association.

Truth at a Crossroads: How Trust, Identity, and Information Shape What We Believe
By John Turley
On February 2, 2026
In Commentary
When Oxford Dictionaries declared “post-truth” its word of the year in 2016, it crystallized something many people had been feeling: that we’d entered a strange new era where objective facts seemed less influential in shaping public opinion than appeals to emotion and personal belief. The term exploded in usage that year, becoming shorthand for a troubling shift in how we process information. But have we really entered uncharted territory, or is this just the latest chapter in a very old story?
The short answer is: it’s complicated. The phenomenon itself isn’t new, but the scale and speed at which misinformation spreads certainly is. We are in a new world where the boundary between truth and untruth is blurred, institutions that once arbitrated facts are losing authority, and politics are running on “truthiness” and spectacle more than evidence.
The Psychology of Believing What We Want to Believe
To understand why people increasingly seem to choose sources over facts, we need to dive into how our minds actually work. People now seem to routinely sort themselves into information camps, each with its own “truth,” trusted voices, and shared worldview. But why is this and why does it seem to be getting worse?
Psychologists have spent decades studying something called confirmation bias—essentially, the tendency to seek out information that supports our existing beliefs while avoiding or dismissing information that contradicts them. This isn’t just about being stubborn. Research shows we actively sample more information from sources that align with what we already believe, and the higher our confidence in our initial beliefs, the more biased our information gathering becomes.
But there’s something even more powerful at play called motivated reasoning. While confirmation bias is about seeking information that confirms our beliefs, motivated reasoning is about protecting ideological beliefs by selectively crediting or discrediting facts to fit our identity-defining group’s position. In other words, we don’t just want to be right—we want to belong.
This matters because humans are fundamentally tribal creatures. When we form attachments to groups like political parties or ideological movements, we develop strong motivations to advance the group’s relative status and experience emotions like pride, shame, and anger on behalf of the group. Information processing becomes less about truth-seeking and more about identity protection.
Why Source Trumps Fact
So why do people trust a source they identify with over objective facts that contradict their worldview? Research points to several interconnected reasons.
First, there’s the practical matter of cognitive shortcuts. We’re bombarded with information daily, and people judge the reliability of evidence by using mental shortcuts called heuristics, such as how readily a particular idea comes to mind. If someone we trust says something, that’s an easier mental pathway than laboriously fact-checking every claim. This reliance becomes problematic when “trusted” means ideologically comfortable rather than factually reliable.
Related to this, analysts of the post‑truth phenomenon highlight declining trust in traditional “truth tellers” such as mainstream media, scientific institutions, and government agencies. As these institutions lose authority, counter‑elites and influencers can present alternative narratives that followers treat as at least as plausible as established facts.
Second, and more importantly, there’s the issue of identity. When individuals engage in identity-protective thinking, their processing of information more likely guides them to positions that are congruent with their membership in ideologically or culturally defined groups than to ones that reflect the best available scientific evidence. Being wrong about a fact might sting for a moment, but being cast out of your social group could have real consequences for your emotional support, social standing, and sense of self.
Third, there’s a feedback loop at work. In social media, confirmation bias is amplified by filter bubbles and algorithmic editing, which display to individuals only information they’re likely to agree with while excluding opposing views. The more we’re exposed only to sources that confirm our beliefs, the more alien and untrustworthy contradictory information appears.
Interestingly, being smarter doesn’t necessarily protect you from these biases. Some research suggests that people who are adept at using effortful, analytical modes of information processing may actually be even better at fitting their beliefs to their group identities, using their intelligence to construct more sophisticated justifications for what they already want to believe.
The Historical Echo Chamber
Despite the way it feels, this isn’t the first time truth has had competition. History is full of eras when myth, rumor, propaganda, and identity overshadowed facts.
During the Reformation of the 1500s, misinformation was spread on both sides of the Catholic-Protestant divide. Pamphlets—many of them highly distorted or outright fabricated—spread rapidly thanks to the printing press. Propaganda became a political weapon. Ordinary people suddenly had access to arguments they weren’t equipped to verify. People were ostracized, and some even executed, based on little more than rumors or lies. We might have hoped for better from religious leaders.
The French Revolution (1780s–1790s) was awash in claims and counterclaims, many of them—if not most—with little basis in fact. Competing newspapers told wildly different stories about the same events. Rumors fueled paranoia, purges, and violence. Truth became secondary to whichever faction controlled the narrative.
Following the Civil War and Reconstruction, the “Lost Cause” narrative became a powerful example of source-driven myth-making. Despite historical evidence, generations accepted a version of events shaped by postwar Southern elites, not by facts. Echoes of it still reverberate today, driving much of the opposition to the civil rights movement.
Fast forward to the 1890s, and we see something remarkably familiar. Yellow journalism, characterized by sensationalism and manipulated facts, emerged from the circulation war between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal. These papers used exaggerated headlines, unverified claims, faked interviews, misleading headlines, and pseudoscience to boost sales.
As early as 1898, a publication for the newspaper industry wrote that “the public is becoming heartily sick of fake news and fake extras”—sound familiar?
The 20th-century propaganda states, typified by both fascist and communist regimes, perfected source-based truth. The leader or the party defined reality, and disagreement was literally dangerous. In these systems, truth wasn’t debated—it was assigned.
What Makes Now Different?
While the psychological mechanisms and even the tactics aren’t new, several factors make our current moment distinct. The speed and scale of information spread is unprecedented. A false claim can circle the globe in hours. Studies show that people are bombarded by fake information online, blurring the distinction between fact and fiction as blogs, social media, and citizen journalism are granted similar or greater credibility than traditional information sources.
We’re also experiencing a fragmentation of trusted authorities. Where once a handful of major newspapers and broadcast networks served as gatekeepers, now the fragmentation of centralized mass media gatekeepers has fundamentally altered information seeking, including ways of knowing, shared authorities, and trust in institutions.
So Are We in a Post-Truth Era?
Yes and no. The term “post-truth” captures something real about our current moment—the scale, speed, and sophistication of misinformation is unprecedented. But calling it “post-truth” suggests we’ve crossed some bright line into entirely new territory. I’d argue we’re not quite there—but we are navigating a world where truth is sometimes lost in the collision of ancient human tendencies and modern technology.
The data clearly show that confirmation bias, motivated reasoning, and identity-protective cognition are real and powerful forces. Historical evidence demonstrates that propaganda, misinformation, and the choice of tribal loyalty over objective fact have been with us for millennia. What’s changed is our information ecosystem, driven by technology that allows false information to spread faster than ever and by the fragmentation of shared sources of authority that once helped create common ground.
Perhaps a better framing would be that we’re in an era of “turbo-charged tribal epistemology”—where our very human tendency to trust our tribe’s narrative over contradicting evidence has been supercharged by algorithms that feed us what we want to hear and isolate us from alternative perspectives. (I wish I could take credit for the term turbo-charged tribal epistemology. I really like it, but I read it somewhere, I just can’t remember where.)
The question isn’t really whether we’re in a post-truth society. The question is whether we can develop the individual and collective skills to navigate an information environment that exploits every cognitive bias we have. The environment has changed, but the task remains the same: finding ways to establish shared facts despite our deep-seated tendency to believe what we want to believe.
Sources: