Infinite Media Psychosis and the Issue with Community Notes
Trying to solve for truths in a world with infinite realities is a near-impossible task
Who gets to decide what’s true and what’s not? It’s a debate that goes back thousands of years, and one that undergoes significant upheaval with every new technological evolution. At the core of the question is which institutions hold power, and are in a position to define shared knowledge. One of my favorite examples is Galileo’s trial. The trial is often framed through the lens of a brave, objectively correct scientist punished for his findings, which built on Copernicus’ theory that the Earth revolves around the sun rather than the previously accepted geocentric model. But the trial wasn’t actually about scientific fact. It was about troublesome truths and, therefore, about powerful mistruths on the verge of being dispelled.
Galileo’s scientific discovery, verified by other scientists at the time (often through Jesuit institutions), and built on the work of past astronomers (again, notably Copernicus), was not itself harmful to the Church. But the argument against the Earth being at the center of the universe went against actual scripture. (“Sun, stand still over Gibeon, and you, moon, over the Valley of Aijalon. So the sun stood still, and the moon stopped…” — Joshua 10:12-13).
You can see how this would be confusing to Catholics at a time of immense backlash against the Church. Galileo’s discoveries and published findings came less than a hundred years after Martin Luther nailed his Ninety-five Theses to the door of the Castle Church in Wittenberg, kicking off the Reformation and leading the Church to convene the Council of Trent (1545–1563), the centerpiece of its Counter-Reformation, which concluded just one year before Galileo was born. And Galileo’s trial happened in the middle of the Thirty Years’ War, a conflict that saw millions of people die over a divide in religious beliefs across Europe and, more to the point, over truths held for centuries.
Troublesome truths that go against personal beliefs, and diminishing trust in institutions, are two of the most important factors powering our current misinformation pandemic. And nowhere is this more true than with community notes, the current model of fact-checking employed by tech companies that don’t want content moderators deciding what is true and what is not, for fear of insulting groups of people who believe something to be true. The most obvious examples include vaccinations, the 2020 U.S. election, and flat-earth conspiracies. We’ve all witnessed these debates and claims pop up on our X, Facebook, Instagram, Threads, or TikTok feeds.
But it’s also because executives like Mark Zuckerberg, Elon Musk, and Adam Mosseri have tried to separate what is true from what is fact, or what is believed from what is real. These may seem like small distinctions, but they explain why some executives see community notes as the way to fight misinformation on platforms populated by hundreds of millions, if not billions, of people each month. They also explain why community notes won’t actually work against disinformation: fighting disinformation is inherently an attempt to fight underlying perceived truths, not just an exercise in pointing to realities that no longer carry much weight. When everything is real and nothing is real, because there is data, narrative, and self-proclaimed expertise to support any reality, then what, exactly, is being defended as correct?
We’re going to get a little metaphysical in this essay, but fighting misinformation on the internet, especially in an emerging genAI moment where the level of content will go from infinity to infinity and beyond, requires some reworked thinking on the underlying goal, the underlying problem, and the underlying fault in leaving truth up to communities.
What Is Truth
Truth is a funny word in 2025. More than 400 years after the scientific revolution, more than 500 years after the printing press, and 300 years after the Age of Enlightenment, what is true — and who is in a position of power to wield that truth — is one of the most contested topics. This is despite the fact that we’ve codified the experimentation process and produced significant data on nearly every topic under the sun. Substantiated facts like “the Earth is round” and “vaccines help save lives” are now flattened into contested theories that large segments of people spend their days trying to prove are lies. Even with evidence proving these realities, people question why they should be accepted truths.
See, truth is a metaphysical concept. And while reality falls under that bucket too, I’d define reality as something evidenced by fact, compared to truth, which is the end point of self-evidenced beliefs. The Earth revolves around the sun; but God is behind it all. The further you can drive a wedge between the two, the less dependent truth is on reality, and the more open people become to divorcing themselves from fact because of their perceived sense of truth. Introducing more data to back up any position someone holds also creates an inescapable hellhole of circular evidence. Someone doesn’t believe in vaccines because of personal qualms; the more they seek out information to prove those beliefs true, the more likely they are to find evidence from self-proclaimed experts and people fighting wrongful authority — fighting the man! — to double down on their newly discovered reality. After a while, they’re also more likely to see other commonly believed facts as the last remnants of those in power, clinging to an authority that millions of people are simultaneously losing trust in.
I keep coming back to Jean Baudrillard’s Simulacra and Simulation, which is not what The Matrix or Elon Musk made the philosophical text out to be. One of Baudrillard’s main arguments is that ever more digital simulation in our postmodern world causes any distinction between the real and the illusory to implode; the difference between factual truth and personal truths becomes so meaningless that it doesn’t matter in the slightest. Real doesn’t matter; the hyperreal, an implosion of both reality and illusion, is all that exists.
“The impossibility of rediscovering an absolute level of the real is of the same order as the impossibility of staging illusion,” he writes in his book. “Illusion is no longer possible, because the real is no longer possible.”
Baudrillard was conducting his analysis and writing Simulacra and Simulation in the era of mass media dominance, and on the precipice of our current reality, which I refer to as infinite media psychosis. There are plenty of deep-dive readings of Baudrillard’s work on the subject, and I’m not here to give you an introduction to abstract postmodern philosophy. I couldn’t even if I wanted to, really. I am the furthest thing from a philosopher, so know that I may get things wrong, but the gist will be here. Still, Baudrillard’s approach is the perfect place to start when it comes to discussing truth versus fact in a social media and generative AI era.
What you really have to know about Simulacra and Simulation is the foundational theory: if enough context is removed from the original fact or truth — from reality — then you eventually get a copy of a copy. You create a simulacrum, an entirely new reality for people to base their beliefs on, because it’s so far divorced from the original truth. Take my favorite example, a pumpkin, as provided by YouTuber Magdalen Rose. A physical pumpkin is the original, basic reality. We all know what a pumpkin is: a round, orange fruit.
But what if you manipulate that reality? Say, pumpkin pie. Pumpkin is present, but it’s not the version of pumpkin we think of when someone says the word. It’s not as orange, it’s not round, and it’s not hard or hollow. Go one step further by masking the absence of the original basic reality and you get a pumpkin spice latte. There is functionally no connection to pumpkin other than the name, and yet that’s what so many of us now picture when the term comes up. For some who have never scooped out a pumpkin or held one at a farm, it may be their only relationship to pumpkin.
Finally, there’s the fourth step in establishing a simulacrum: removing any and all basic reality associated with the word or object. Something like pumpkin-flavored creamer. By divorcing our perceived reality of something so completely, annihilating the original meaning, we create what Baudrillard refers to as the hyperreal. Postmodernists particularly cling to this term when talking about our 21st-century stimulation overload. Any perceived distinction between actual reality and simulated reality blurs completely, causing swaths of populations to see the simulation, the simulacrum, as more real than the original product or word. We encounter this every day: in how we judge social media as real, how memes become citations in our everyday language, how our introduction to new facts comes from a place of no origin and total manipulation and, on that note, how facts are presented.
Hyperreality in entertainment is, arguably, mostly harmless. Family Guy referencing Heath Ledger’s Joker in The Dark Knight becomes the base of someone’s understanding of that scene if they see the parody before the film. Whatever. We have so much information, so much entertainment, so much content and shit, that we have to reference it to create conversations that are unifying and understandable. It’s why memes work as language. Self-referentialism in a postmodern, hyperreal digital age is the only thing that makes sense. It’s our new reality, completely divorced from the actuality of its origins.
When you combine all of these evolutionary problems brought on by co-creating new realities every day, you get to the major underlying issue of information: a complete loss of meaning and authenticity, choose-your-own-adventure truths, and define-your-own-world realities. These are driven by platforms that build their empires on passive consumption and manipulated information, increasing engagement by validating these new realities and further fragmenting entire populations. We passively take in free content every day that is designed to fit within our new worlds, creating entirely separate realities for a growing number of communities and shrinking the pool of shared experiences, beliefs, and truths that we can all agree upon and participate in.
Eventually, you have to consciously try to differentiate between actual reality and simulated realities. Not only is that foolish and impossible, as Baudrillard would argue, but there’s also far less incentive to do so on platforms that reward engagement with completely divorced realities by reasserting personal beliefs and ideologies. Enter community notes.
Truth No Longer Matters
Community notes — the very kind employed by Meta, X, and other companies that oversee the vast majority of day-to-day communication and information flows — not only ask groups of strangers to come together and determine what is real, but also ask them to determine which posts rise to the level of needing reality defined.
Let’s use X as an example, since Elon Musk’s team implemented the system that others followed. Anyone can become a contributor if they meet the eligibility criteria: they’ve maintained an X account for longer than six months, have no flags on their account, and have a phone number attached to the account (an effort to reduce spam). Full names are not attached to community notes once published, in an effort to combat harassment. Pretty standard.
Where it gets particularly interesting, not in a negative or positive way but truly just interesting, is how X then ranks writing and rating impact. The more activity, and specifically the more positive activity, as gauged by how often someone rates a “good” note or receives good ratings on notes they’ve submitted, the more points they accumulate. More points, more notes published. But the opposite is also true. If someone’s notes receive enough downvotes (for lack of a better term), or if their activity is seen as a poor contribution, they lose points. Lose enough points, and you can’t contribute as much.
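To make the mechanic concrete, here is a minimal sketch in Python. Every name, point value, and threshold below is a hypothetical illustration of a reputation system like the one described above, not X’s actual parameters or code.

```python
# Hypothetical sketch of a point-based contributor reputation system:
# helpful ratings add points, unhelpful ratings subtract more than they add,
# and a contributor who falls below the floor loses the ability to publish.

class Contributor:
    PUBLISH_THRESHOLD = 5  # hypothetical floor for publishing new notes

    def __init__(self):
        self.points = 5  # hypothetical starting balance

    def record_rating(self, helpful: bool) -> None:
        """Adjust reputation when one of this contributor's notes is rated."""
        self.points += 1 if helpful else -2  # penalties outweigh rewards

    def can_publish(self) -> bool:
        return self.points >= self.PUBLISH_THRESHOLD


c = Contributor()
for verdict in [True, True, False, False, False]:
    c.record_rating(verdict)
print(c.points, c.can_publish())  # → 1 False
```

The asymmetry (losing two points per bad rating versus gaining one per good rating) captures why a run of poorly received notes locks someone out faster than good notes let them back in.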
Of course, one could argue this is the same gatekeeping process that exists in academia and journalism today. The more someone publishes in peer-reviewed journals, or the more scoops they land at institutional publications, the more weight we give their expertise. How, then, is that different from what community notes are trying to accomplish, and how does one define a shared reality versus a perceived reality?
Fact is the simplest answer. Journalists and experts build on decades of their own expertise, or on expert sources, to arrive at a fact that is proven by third-party sources and holds up across multiple experiments or investigations. It’s that pie chart way back at the top of this essay. Community notes are not necessarily interested in dispelling mistruths by removing falsehoods, but in creating an environment in which people can see the original argument alongside the agreed-upon “correct” fact and engage with it as they see fit. We’ll get to whether that’s working in just a second, but the big difference, as I see it, is asserting fact that creates shared reality versus suggesting a counter-opinion that further divides split realities. Academia and journalism build on outdated facts to demonstrate new ones; community notes suggest an alternative but don’t dismiss the overall claim by removing it.
In order to combat ideological and political bias, X’s FAQ page also states that it picks from random applicants to ensure the contributor pool isn’t entirely one-sided. And it does so using a bridging process designed to be as effective as possible at seeking out the widest array of opinions and voices. Good…so long as there are equal numbers of people from different backgrounds who want to contribute. We know that X tends to skew older, heavily male, and right-wing. Theoretically, this would imply more note contributors from those ideological bases, especially when the chief executive consistently and publicly talks about trying to remove liberal bias from his platform and his AI chatbot, Grok. How, then, does that affect the community notes experiment and, for the purposes of this essay, how does it determine what facts or data are considered truth and what is considered real?
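The actual bridging system is more sophisticated than anything shown here (X’s published approach scores notes with matrix factorization over the full rating history), but the core idea, that a note only ships when raters who usually disagree both find it helpful, can be sketched in a few lines. Everything below, from the crude two-sided split to the 0.6 agreement threshold, is a hypothetical simplification for illustration.

```python
# Toy illustration of "bridging": a note is displayed only if raters from
# *both* sides of a viewpoint divide independently rate it helpful.
# This is NOT X's actual algorithm, just the intuition behind it.

def note_ships(ratings, min_agreement=0.6):
    """ratings: list of (side, helpful) pairs, side in {"left", "right"}."""
    for side in ("left", "right"):
        votes = [helpful for s, helpful in ratings if s == side]
        # No raters from one side, or too little agreement: note stays hidden.
        if not votes or sum(votes) / len(votes) < min_agreement:
            return False
    return True


# Broad consensus across the divide: the note ships.
print(note_ships([("left", True), ("left", True),
                  ("right", True), ("right", True)]))   # → True
# One side withholds agreement: the note never appears.
print(note_ships([("left", True), ("left", True),
                  ("right", False), ("right", False)]))  # → False
```

The second call fails precisely because one side withholds agreement, which is the dynamic that keeps politicized notes from ever being displayed.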
Now, it’s not all bad news by any means. When community notes work, they work. An investigation from USC that looked at more than 40,000 notes found that fact-checking notes “significantly reduces the engagement with and diffusion of false content.” This translated to an average of 46 percent fewer reposts, 44 percent fewer likes, and 13.5 percent fewer views. Great!
Here’s the bad news: the vast majority of notes never get displayed at all because of “political disagreement.” This is due to a qualifier that “requires contributors of opposing political perspectives to agree that a note is needed before it gets shown,” according to a study from the Spanish fact-checking agency Maldita. There’s a very obvious takeaway from these findings. Obviously agreed-upon facts — sky is blue, gravity exists — are easier for people across the political and ideological spectrum to agree upon. But any fact or reality that is politicized, as more and more of our information becomes, is nearly impossible to agree upon, because the very definition of truth has changed.
After spending a lot of time digging into how, exactly, community notes work on different platforms, my main takeaway was this: anything innocuous is likely to get a community note — sharks don’t sleep, Australia is both a continent and a country — but anything that could produce an actual simulacrum through a total split from reality is far less likely to rise to the point of having a note assigned. Split realities prevail, built on masking the absence of the original reality with a completely new set of data points, articles, and experts.
Social media is the perfect breeding ground for this cracked reality to break down further. After all, what do we know about why social media platforms are so addictive and intoxicating? I’d break it down into four attributes:
Confirmation Bias — I’d argue that it starts here. We all have our own personal beliefs and, when we see those beliefs asserted by people in positions of power (on social media that means number of followers, which equates to authority, expertise, and knowledge), it further confirms our own biases.
Repeated Sentiment — When we see the same stance repeated again and again, we tend to believe that it’s the most accurate, popular, and truthful because we’re seeing it so often. It sticks in our head more.
Echo Chambers — Algorithms that want us to engage push us further into these self-reinforcing circles, meaning we don’t hear any opposing sentiment unless it’s through dunking on those a group of like-minded people disagrees with in real time. Not only does this further isolate groups, it erodes empathy.
Time Poverty — And, arguably the most important part of this, we’re time poor, and social media platforms make you feel like you’re getting your information and social connection in one place. They bring the world to you rather than making you seek out information in the world, but they do so with the first three components guiding how that information flows, who it flows from, and how it targets us.
Keeping that in mind, here’s another illustrative data point about how well community notes actually work on two other platforms: Facebook and Instagram. Of the 15,000 notes submitted in the first six months after Meta implemented community notes instead of relying on actual moderators, “only 6% have been published and displayed to users.” As one report on the rollout put it, it “seems fairly obvious why these are not getting Community Noted in the app, despite there being factual sources to refute such claims, because on some topics, political opponents are never going to agree, which also means that X [and Meta] is helping to amplify these false claims in the app.”
All of which is a direct result of two things: the attributes above creating a perfect host for truth to break down, and the deployment of the third and fourth steps of creating a simulacrum: masking the absence of basic reality (by removing information while repeatedly hammering home the same point) and removing any and all basic reality associated with the word or object. There is no truth or fact for someone to prove when the discussions being had are based in completely different realities.
Missing from this discussion is one of the most important websites entirely reliant on community contributions: Wikipedia. And its mode of operation is the difference between encouraging entirely new realities and distributing information while trying to maintain a shared reality. Why do community notes, and community contributions in general, seem to work so well on Wikipedia and not on X or Meta? A significant difference is the approach. Wikipedia’s own guidelines on contributing, which anyone can do, require that all information supplied by a contributor be “verifiable” and link back to “reputable sources.” This is already different from both X and Meta, which encourage linking to high-quality sources but don’t demand it. It’s so central to Wikipedia’s philosophy that the guidelines reiterate the need for reputable sources and verifiable information in the very next paragraph.
What Wikipedia teaches us is that community notes can, and often do, work. But they’re also a byproduct of their environments. On Wikipedia, there is less incentive for tribalism. On X and Facebook, it’s all tribalism, and a community note can go from proving that a tweet contains false information to becoming another catalyst in arguing that one side is trying to attack another. Because if truths aren’t agreed upon in a hyperreal environment, the foundations of those very realities collapse, and people simply return to their echo chambers, where community notes don’t exist at all.
A New Truth World
I sincerely believe that most people are trying to find their way back to unification, to some kind of shared clarity. But I also believe that’s not possible in this postmodern, hyperreal era. So without one grand unified, shared belief of truth — this is, after all, partially what makes organized religion so intriguing and inviting, and it’s precisely what the Church was trying to maintain in its fight with Galileo — we will cling to new realities and new truths shared with smaller, more intimate groups. One philosophy I’ve written about again and again is that our online worlds will inevitably get smaller as the internet gets more cluttered and unmanageable. Realities will further split into millions of tinier ones, each with its own truths. Reality is no longer universal; it’s perfectly situated to every person.
And this may sound depressing. It may sound scary. I don’t intend it to be; it just is. This is our lasting truth: there is no longer one truth. So now the question is how the millions of us who care to protect the most vital, harm-preventing information can work within these parameters. I don’t believe we’re in a post-truth world, but I do believe we’re in a new-truth world. So what do we do next?
Baudrillard saw a version of this in the 1980s. We’ve lived a version of this in the era of social media. Platforms that integrate their own AI chatbots trained on these different communities — tweets, group chats, whatever it may be — will only create a more official-looking version of those individual truths. When it comes to community notes and, really, when it comes to talking about moderating falsehoods, to protecting actual realities, we need to take a giant step back.
What is false, and how do you fight it, when there is no one shared reality anymore? When everything is true because nothing is true, does it matter what a community note attached by a group of strangers says? I’d argue it doesn’t, because that is no longer the reality someone needs to abide by. So we need guardrails for reality. Not having information is sometimes the only way forward in protecting our most basic understood realities.