Tech's Inevitable Ouroboros Problem
If you're running a for-profit company, the only thing that matters is increasing profit — even if you don't want it to be
I approach every trending new social media platform with the same mindset: if this thing actually takes off for longer than six months, it’s gonna be ruined in a few years.
This is social media’s ouroboros problem. An ouroboros is an image all of you are familiar with even if you don’t know it by name: a snake eating its own tail, something that has appeared on coins, on hastily designed t-shirts sold on Instagram, and perhaps even on a flag adorning a man’s apartment wall (run). Its history is fascinating, emerging from ancient Egyptian drawings, adopted by ancient Greece’s mystic communities, and then embraced by alchemists in the late Middle Ages. The ouroboros is also seen as positive…generally. It’s the universal sign of life’s cycle — birth, death, rebirth. The very idea that one could devour oneself in a self-sacrificial attempt to be reborn as something grander, incorporating the memories of all that came before, is inherently beautiful. It’s why so many first-year philosophy students get it tattooed on them.
Why, then, am I using it in such a negative light? Part of the reason the ouroboros works so well as a philosophical analogy is that so much of the theory relies on approaching the rebirth portion correctly: learning from the destructive process to ensure the new wave of creation is fundamentally more positive. Most new social media operators start with the right approach to recreation. They’re biting the heads off all the platforms that came before. But something inevitably goes wrong, despite the best efforts, the correct planning, and the lessons of past cases. I like to think of these inevitabilities as the two spikes in a video game level that get you no matter how many times you try to avoid them, even when you know they’re just around the corner. They are:
Moderation problems that come with large-scale user bases, and
For-profit mentalities that impact decisions as business lines grow
Precedent doesn’t lie. Precedent tells us that these two afflictions happen to every social media platform that exists as more than a network on the periphery for longer than six months and sees explosive growth. That’s where Bluesky is right now, but this isn’t so much an essay about Bluesky (you don’t need any more of those) as it is about the ouroboros facing every wannabe platform that existed before Bluesky and every one that will appear long after. Consider these the Ouroboros Rules of Walled Internets.
Precedent Speaks
Moderating at scale has plagued the internet ever since it went from a relatively insular product used by governments and companies to a living organism built on, and contributed to by, user-generated content. MySpace, Facebook, and YouTube all experienced this first-hand very early in their growth. MySpace had a room full of moderators before that was a well-established (and often outsourced) job, trying to answer questions that would lay the earliest foundation for how to approach the bad behavior that naturally began to appear.
I said it in a previous essay, and I’ll type it out again here because it is the crux of almost everything I write: attention is a disease. That disease feeds on what draws the most attention.
As more people moved to social media platforms and beyond their own bubbles (the people they knew in real life), there were more opportunities to use attention-grabbing posts to build followings. People also learned, very quickly, that negative or shocking posts secured more attention than anything relatively benign or positive. This isn’t new; it’s evolutionary. Psychologists call it “negativity bias,” but it boils down to a very simple instinct: we remember and dwell on negative images, stories, and comments as a form of self-protection. Negative stories and images are also more jarring, and they capture our attention.
People use this phenomenon to explain why news organizations are drawn to plane crashes or sad stories, but I’d argue that news isn’t inherently more negative or positive because of a bias; news is just a reporting of what has occurred. Someone going for a walk and seeing a cute dog is great! It’s not news. Someone going for a walk and seeing a cute dog across the street just before it’s hit by a drunk driver? That’s news. It also happens to grab our attention.
Without getting into a whole Section 230 debate (firstly because you can read about that elsewhere, and perhaps most importantly because I still don’t fully understand it despite years of my former Verge colleagues doing a remarkably wonderful job of reporting on it), news also comes with editorial judgment. It comes with editorial fact-checking. It comes with a layer of additional support structures to ensure that it’s not just jarring images and stories designed to shock, horrify, or disgust strangers on the internet. There is also an expectation that when someone is watching the news, they may see video or learn information that is distressing. It’s news. The same expectation doesn’t exist when someone opens their Facebook account. So how do you moderate sites that are growing faster than anything in history, that are designed to encourage people to post whatever grabs attention, while also stemming the psychological reality that negative posts attract more engagement than positive ones?
Attention is a disease. That disease feeds on what draws the most attention
Moderation is clearly difficult enough! Now add in more than two billion hours of content being uploaded each month to a site like YouTube, or more than 3.6 billion people using a collection of apps that encourage posting, like those owned by Facebook. It’s nearly impossible to keep up with, let alone moderate well, without the help of machine learning tools. But there are still issues there, as anyone who has visited a website in their life (read: basically everyone) can attest. As a piece by former Atlantic editor Alexis Madrigal points out, “The current stable of machine-learning technologies is not good at looking at the context of a given post or user or community group.” Even acknowledging that the technology has evolved tenfold since 2018, and that the introduction of stronger artificial intelligence capabilities will likely aid moderation in some capacity, it’s not a total game changer. Look at Google’s AI Overviews, which pull from Reddit forums for information but lose the critical context of the conversation, thereby giving wrong information. But I digress — this is also not an essay about AI.
MySpace’s version of moderation was similar to Facebook’s, YouTube’s, and X’s. These weren’t perfect systems — moderators weren’t cared for, and videos and posts still slipped through the cracks — but the overarching approach was to create a blanket set of rules that made clear what was acceptable and what wasn’t, and to use machine learning tools to keep up with the unprecedented scale. Back in 2018, Facebook’s then head of global content policy, Monika Bickert, told The Atlantic that Facebook’s policy team tries “to make sure that our standards are sufficiently granular so that they don’t leave a lot of room for interpretation.” New issues come up all the time, and Facebook’s team attempted to navigate them through biweekly meetings, but the problem of context was one that Facebook’s teams couldn’t figure out then and continue to struggle with today.
Precedent for moderating these large walled-garden spaces, which live and die based on whether people actually post — and, therefore, on people feeling like there is a reward for participating — reminds us that at some point nearly all moderation becomes a problem. That’s true even for the spaces with more lax but creative approaches to moderation, like Tumblr and Reddit. Sarah T. Roberts, Director at the Center for Critical Internet Inquiry, told the Harvard Business Review in 2022 that moderation is almost terrible by design, because moderation isn’t built for large-scale operations. By dividing moderation into roles designated for distinct communities, as Reddit does, a platform theoretically makes moderation a role of vital importance to each community member.
“Often in those cases people doing moderation had visible leadership roles and were curating a particular kind of cultural space with an authority granted to them by the users. There are still sites with similar models,” Roberts said. “Consider Reddit. On each subreddit, or topic forum, you have community leader–moderators and clear rules about what is acceptable and how violators will be penalized. Each subreddit sets itself up differently, and users can decide whether to participate. Then, at a higher level, you have professional moderators who deal with things such as requests for information from law enforcement agencies or governments. This kind of model is in sharp contrast to the one in which content moderation is mostly invisible and unacknowledged.”
What worked for Reddit’s user base didn’t exactly work for Reddit’s public image — something that became even more of a concern when Reddit went public. Reddit leadership warned in its public filing two years ago that its more creative, arguably lax approach to content moderation could cause significant disruptions to the business. Analysts replied in turn with their own warning: if the business were to face financial harm because of less moderation oversight, Reddit may have to change its policies.
Tumblr tried another creative approach. Instead of banning certain content outright, Tumblr would serve a second screen confirming the user was of an appropriate age and actually wanted to see the content, as well as producing a PSA of sorts for content that could be self-destructive or harmful, such as searches for “thinspo” eating disorder images or self-mutilation photos. This, creatively, puts the onus of moderating a user’s own experience on the user. Illegal content was removed, as was hateful content, but those grey in-betweens that moderation teams really focus on when creating policy?
Tumblr was more hands-off. Everything changed in 2018, when Tumblr decided to remove pornography from the platform and clamped down on searches. The ban, instituted to appease Apple’s App Store policies (more than 60% of Tumblr’s traffic came from the app), drove massive outcry. None of that outcry seemed to particularly matter to Tumblr, nor did the reported 30% drop in traffic that followed. Tumblr was back on the App Store, and with a new owner around the corner, Tumblr could potentially be built back up as a business.
Ouroboros Effect
Like everything else, profit structures change approaches to original ideas. Profit follows strong revenue. Revenue follows growth. It’s at that stage between growth and revenue that protective features like moderation are brought into question, whether from within the internal team or from external investors looking into the well of a potential new Facebook or YouTube. If I had to draw out that ouroboros-effect sequence of events, it would look something like this: new product → explosive growth → outside investment → revenue → profit pressure → protective features like moderation compromised → users drift away → a new product rises, and the cycle begins again.
The philosophy of Posting Nexus is that attention is a disease, and that incentivization structures (or disincentivization structures) are born out of the intersection between platforms that cater to attention and identity and the business structures that allow those outlets to become parts of our daily lives. Tumblr and Reddit are businesses. Just like Meta and YouTube, they have investors interested in seeing continuous growth. There is no end state; there is just continuous growth as a sign of health, or stagnation as a sign of failure.
I wrote last week that Bluesky, similar to Tumblr and Reddit at one point before it, is between the growth and profit parts of the ouroboros chain. There are obvious signs we can point to: a 48% increase in new members over the last four weeks, more mentions of the site in articles, comparisons to other large social platforms. Then there are less obvious signs that are arguably just as important. A couple of weeks before the election, amidst Elon’s continued tyrannical overhaul of key Twitter features — including “removing” the block function — Bluesky raised $15 million from a few investors. The news was announced on October 24th, one week after searches for Bluesky shot up by more than 50% on Google and more than one million new members joined the site.
“We’re excited to announce that we’ve raised a $15 million Series A financing led by Blockchain Capital with participation from Alumni Ventures, True Ventures, SevenX, Amir Shevat of Darkmode, co-creator of Kubernetes Joe Beda, and others,” the team wrote on its blog. “With every month that passes, the need for an open social network becomes more clear. We’re very excited about where we’re headed — we’re building not just another social app, but an entire network that gives users freedom and choice.”
Bluesky raising money to invest in its growing business makes sense. And the Bluesky team goes out of its way in its announcement to show where previous investment has gone. The message is clear: this isn’t about making money, it’s about being able to scale with a scaling business. Logical! One of the areas where Bluesky has used previous investment is trust and safety, specifically around moderation. A post from September lists areas such as banned accounts’ evasion, spam, changes to list features to limit abuse, and designing video products with safety in mind. One area Bluesky is investing considerable time in is rude content, according to the post, which notes that “rude content especially can drive people away from forming connections, posting, or engaging for fear of attacks and dogpiles.” That’s the better version of X that I — and so many others — genuinely and desperately want.
The Bluesky Part of it All
So why am I coming down on it so hard? There’s a point from Joe Cardillo, director of disinformation defense at ProgressNow, that I quite like: “Modern venture capitalists invest for one of two reasons: either they want a big return ($15M > $150M considered acceptable, but $1.5B+ is the goal), or they want a place where they can influence/control larger narratives for their personal, political, or financial gain (e.g. Marc Andreessen and a16z investing in Clubhouse and then being featured as "top recommended accounts").”
Think about X. This is my own personal two cents, but I don’t believe Jack Dorsey was looking to offload his pride and joy to a pal, Elon Musk, because he thought Elon would turn it into his own private clubhouse, even if that was an “unintended consequence” some professionals saw coming. Hell, even his own exit from the company was in line with the ouroboros effect. Dorsey, according to journalists like Kurt Wagner, who covered his role as Twitter C.E.O. for years, didn’t want to run a “business” so much as build a product he loved. By the time Elliott Management came in as an activist shareholder and effectively forced Dorsey into rethinking his role at the company, he had entered the last stage of the cycle. As Wagner told Business Insider’s Peter Kafka in February, “I think Elliott forced his hand on that front and made the job really, really un-fun for him. Suddenly, this thing that he wanted to run and make [sic] good had these strings attached to it.” Since Twitter was public, and since profit incentives outweigh product needs, there was only one way out.
“There were a ton of people who are making money, who had stakes invested in this thing. There was no clear way to bring it out from being public to private,” Wagner said. “Which is part of why he was so excited when Elon showed up. Suddenly this thing he was talking about, or dreaming about, for a long time, was suddenly possible.”
Dorsey is the person who set up Bluesky. All of the regrets Dorsey had with Twitter, including eventually selling to Musk (Dorsey went from saying in 2022 that Musk was the “singular voice” to be trusted with Twitter to declaring his moves “fairly reckless” a year later), could theoretically be fixed via Bluesky: a decentralized, open-source opportunity to build a community-moderated experience for people looking to find a new text-based social media home. And then Dorsey left the Bluesky board, with reports suggesting he didn’t like the direction the platform was going in.
Now, I won’t claim to know what those reasons were, and I won’t claim to know the direction Bluesky wants to go in. Nor will I claim that Dorsey is the seer of all futures and is in the right. But Dorsey’s actions seem to coincide with feelings I posted in last week’s piece:
“I want Bluesky to succeed. I think the best outcome for Bluesky is that it kind of becomes Tumblr. Not an at-scale social platform. Not a key piece of the global town square, or whatever. Not as the next big thing everyone is using. If we’ve learned anything from all the issues with YouTube and Instagram and especially X, it’s that having too many people on one platform designed to react instead of engaging leads to total disillusionment and societal fracturing. We should seek out attention for emotional connection, not attention furthering an addictive disease…
I watched the C.E.O. appear on CNBC and tout the difference of Bluesky from other apps. Bluesky certainly does feel different today. But venture capitalists want growth. Even the choice of news outlet is interesting. CNBC speaks to Wall Street, not Silicon Valley or the average user. What makes Bluesky work today doesn’t make it work tomorrow if the goal is growth and capital — and as a for-profit business in America, your only goal really is growth and capital, even if everyone involved wants to build something truly special. We may all be trying to reinvent ourselves on a new app, but the bigger question is whether Bluesky truly wants to reinvent itself…or if we’re just waiting for the inevitable X-ification to play out. Is the attention it’s receiving a net positive or is it the beginning of a disease?”
I don’t want to put words into Dorsey’s mouth, so I’m going to say what I assume is part of what he may have thought, based on his conversation earlier this year with Pirate Wires. Dorsey talks about all the uncertainty around Elon looking to buy Twitter when (former C.E.O.) Parag Agrawal had basically just started. Although Twitter didn’t own Bluesky, it held an advisory seat on Bluesky’s board. Dorsey wanted Bluesky C.E.O. Jay Graber to stay within Twitter, but she “decided she wanted to set up a completely different entity, a B Corp,” Dorsey told Pirate Wires in May.
“That accelerated even more when Elon made the acquisition offer, and it very quickly turned into more of a survival thing, where she felt she needed to build a company, and build a model around it, get VCs into it, get a board, issue stock, and all these things,” Dorsey added. “That was the first time I felt like, whoa, this isn’t going in a direction I'm really happy with, or that wasn’t the intention. This was supposed to be an open source protocol that Twitter could eventually utilize.”
There are two ways to read the quote. The first is that Bluesky was always designed specifically for Twitter’s use. The second is that the corporatization of the company wasn’t aligned with Dorsey’s decentralized, crypto-first approach to a new social media platform. The truth is probably somewhere in the middle — this is clearly just Dorsey’s point of view, and it’s not like Dorsey is exactly Mr. Social Media Saint — but I also think that at that point Dorsey was stuck between the new product and growth parts of the ouroboros effect, and he knew from precedent what happens next. When VCs and investors are involved, as Cardillo pointed out, profit has to come first, even if the best intentions are product-focused.
Inevitability defines the ouroboros effect. The snake always eats its tail. I would like to believe that Bluesky won’t come to that, and I think Graber’s point about Bluesky being “billionaire proof” because people can take their followers and “bring them” to other platforms is a good start. But I also don’t think most people want to start over and learn new site etiquette all the time. Nor do people want to have to keep track of all their different follower groups.
They want one site where they can hang out — and the problem is that as the number of people hanging out on one platform grows, moderation becomes near impossible, even as finances may start to tick up and up. Or perhaps because finances start to tick up and up. I think someone like Dorsey knows this, and I think we all do too. Here’s hoping I’m wrong — and to be very clear, I often am — but once the tail starts to wind its way back to the snake’s head, the only way to stop it completely is to cut the snake’s head off.