
Out of Site, Out of Mind

How Shadowbanning Threatens Online Communities

Maybe it’s happened to you: you’re trying to find a user or post you love to send to a friend. You hit the search bar and, to your dismay, nothing comes up. You scroll a bit and can still see their posts in your feed – weird. The account hasn’t been deleted, but it’s not all there.

While selective user silencing is not new, users on both Twitter and Instagram might nowadays find themselves shadowbanned, or partly silenced. Shadowbanning is the practice of restricting a user’s content online without the user knowing it’s happening. Instagram has previously acknowledged the issue (though not explicitly) by noting that some users are unable to surface posts using hashtag search. According to Instagram’s community guidelines on what types of censorship the platform undertakes, “[they] remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwarranted messages.” Elsewhere, the guidelines refer to “fostering meaningful and genuine interactions” by not artificially collecting likes or followers. To this end, “overstepping” community guidelines may lead to “deleted content, disabled accounts, or other restrictions.” While restricting “harmful content” (trolls, bots, and disinformation) seems necessary for social media platforms, shadowbanning raises the questions of how and why “harmful content” is flagged and restricted, and who exactly is getting banned. Are some people more affected than others? For online communities, is shadowbanning a real problem?

In an interview with the Daily, researcher, cultural critic, and meme-creator Kristen Cochrane (@ripannanicolesmith) commented that shadowbanning can pose a real threat to online communities, and can act as a form of unpredictable and unreliable agenda setting. Community bonding through self-mockery or jokes about one’s own identity also runs a high risk of being censored; posts intended as humour or satire are often misread. “There’s a nuance that’s missing, things like tone, facial expression, cultural context, that [a moderator] might not read.” The material implication of shadowbanning is that online social communities can be restricted, and their reach consequently limited.

In an Instagram direct message to the Daily, popular meme account @beesdyingalarmingrate spoke to experiencing censorship, though they’ve never been shadowbanned: “I’ve definitely had posts taken down for ridiculous reasons like using the word ‘dyke,’ or saying cis men are whack […] Most of the content on my page isn’t original and I don’t make money off the page in any way so it doesn’t really affect me in any serious way but it’s a bummer. What’s more upsetting to me is when accounts get deleted wrongfully, which, maybe I’m paranoid, but seems like it happens more with the LGBTQ and POC meme accounts that I follow than with more hegemonic accounts.” Cochrane also explained that, in her view, “agenda setting” unfolds this way: while people might otherwise have the opportunity to engage with more diverse content, restricting content that falls outside of centrist norms prohibits interactions with content that might move them to think differently. There is an argument to be made that these types of regulations might also push artists and creators to subconsciously produce content that falls within existing norms, undermining creativity and creating monocultures.

“I haven’t been shadowbanned,” Kristen explained, “but a lot of people on the left are shadowbanned, especially millennials who use absurdist humour. I’ve also noticed accounts dealing with mental health and intersectional feminism writing about being shadowbanned, and these are often marginalized individuals. I’ve noticed it happening to a lot of working class and precarious folks as well as POC with more radical, but not offensive content. One account I really like, @patiasfantasyworld, from New York, is often hidden, and she has been spreading mostly via word of mouth. Her memes are not radically left stuff, it’s more surreal millennial humour.”

Patia Borja, creator of @patiasfantasyworld, told the Daily that she has been shadowbanned, a lot, and that it can make work-related things hard. “Sometimes my friends will recommend me for projects and tell me they’ve given so & so my username to contact, and I have to tell them I might not come up in search.” When asked if she thought shadowbanning is a real threat, she said she views it more as an inconvenience. “I want to vocalize my thoughts about what is going on in the world on a public forum and the process of who gets to be shadowbanned deters that.”

She added that “every person I know who is shadowbanned doesn’t even post crazy shit or anything. […] Why should my content [such as a picture of poop, or a meme about men being trash] be hidden when it isn’t harming others? Since Facebook bought Instagram, I feel the app has gone downhill. Everything gets reported except for racist content. […] How many school shooters or bullies have pages indicating their terrible activities yet they’ve never been taken down?”

Are Posts Differentiated?

It’s difficult to gauge whether different content is treated differently, and if it is, the magnitude of that difference. In April 2019, TechCrunch reported that Instagram now “demotes” vaguely inappropriate content, such as “sexually suggestive” posts, or memes that are not outright hate speech but could be “in poor taste.” Slides leaked from the press event show examples of what Instagram deems “non-recommendable content”: they flag a picture of a woman in her underwear as “sexually suggestive,” and recommend against posts that contain “misinformation.” On this rough guideline alone, it’s unclear what even falls into the category – which by definition could cover anything from pictures taken in gym or beachwear to pornographic images. Are women more likely to have content flagged? Are only posts consistent with conventional beauty norms left up? Do shirtless gym pictures or posts taken at the beach constitute “sexually suggestive” content?

Salty Mag, a “newsletter (for & by) badass women, trans, and non-binary peeps,” spoke up earlier this year about Instagram banning their content on the grounds of “promoting escort services.” In each case, the people featured in the content were trans, racialized, and/or intersex. In each case, the individuals were fully clothed.

Salty describes continual difficulties with censorship on the platform since the mag’s inception, including the removal of topless photos of non-binary and trans individuals. Instagram currently prohibits showing female nipples, and removes any such photos – a policy built on a binary optic. While the ads Salty speaks of are not for escort services, the mag strongly supports sex workers and points out that they are often the people most affected by these “new” censorship regulations. Though the posts were eventually reinstated (after much pushback from Salty), the stories – and screenshots of Instagram’s content regulation – are but a few examples of bodies being policed via these guidelines, especially bodies that are racialized, fat, queer, disabled, and/or engaged in sex work.

It’s hard to know exactly how the “black box” (the algorithm) makes these value judgements. Facebook has already come under fire for contracting out, call-centre style, the emotionally turbulent work of moderating violent, traumatic, and inappropriate content. In an in-depth exploration of content moderation, The Verge described how moderators reach consensus on which special topics to censor, producing a “rapidly changing rulebook” that guides flagging and taking down content. This work is also tightly bound up in stringent “quality assurance” measures with “narrow margins of error” in an intense work environment, forcing employees to make rapid judgements on content. The emotional and psychological toll of this type of work is undeniable, and it raises a lot of questions: whether an underlying moderator ideology exists, and what ideologies “quality assurance” converges on (or is being told to converge on). Whether artificial intelligence models could instead regulate content fairly and accurately is an entirely different conversation. In a Facebook-published manifesto on content governance, founder Mark Zuckerberg seems to acknowledge the problem while defending the policies themselves: “The vast majority of mistakes we make are due to errors enforcing the nuances of our policies rather than disagreements about what those policies should actually be. Today, depending on the type of content, our review teams make the wrong call in more than one out of every ten cases.”

How Dangerous Can Shadowbanning Really Be?

While merely an inconvenience for some, shadowbanning can cause real problems for people whose livelihoods depend on their social media presence.

In 2018, the U.S. Senate and House passed FOSTA-SESTA – a package of bills to “Allow States and Victims to Fight Online Sex Trafficking” (FOSTA) and the “Stop Enabling Sex Traffickers Act” (SESTA). The legislation came largely out of an investigation into Backpage, an online classified ad service that has been accused of facilitating and profiting from child sex trafficking. The bills were (perhaps unsurprisingly) largely opposed by proponents of “free speech,” but also by online sex workers. While FOSTA-SESTA might hinder sex traffickers, it also limits sex workers’ ability to offer and discuss sexual services online. Journalist Violet Blue explored the relationship between “cracking down on child sex trafficking” and stifling adult sex work altogether in her Engadget piece earlier this year. In it, she describes how FOSTA equates adult sex work with online sex trafficking, and notes that most major internet platforms backed FOSTA. In the aftermath, sex workers have reported being forced to pay exorbitant sums of money to get their accounts back up and running, which some have called outright extortion. Following the bills’ passage, violent physical and financial harms to sex workers have only increased. What’s more, the stated aim of stopping sex trafficking has also been compromised – law enforcement professionals are less able to track advertisements or digital footprints for prosecutors. The backlash caused by FOSTA-SESTA is a salient example of how political regulation of the internet is both influenced by and supportive of existing power structures. As a direct result, those most excluded from traditional markets bear the brunt of these regulations. Exclusion from digital markets compromises livelihoods, safety, and community.

While outright banning and policing of certain bodies is a huge problem in and of itself, shadowbanning poses a unique threat to social media users and content creators because it is hard to identify and track if and when it’s happening. For rejected ads and promoted content, there is a direct interaction with Instagram that can be referenced, enabling users to protest unfair or discriminatory bans. Shadowbans are particularly insidious because we can’t lay direct claim to any type of discrimination; the main symptom of a shadowban is decreased engagement, which could be attributed solely to a “lack of interest” or content that simply “isn’t engaging enough.” As Kristen mentioned, this is agenda setting: controlling the types of content we are allowed to access.

In our fast-paced attention economy, changes to our technological landscape – and the political implications that come with them – can be missed in a blink. A primary value of social media is the visibility that social networks are intended to provide for consumers. As digital citizens, it’s not entirely clear how we can interrogate these regulatory systems or work against undue censorship. That said, there is room to push back. If your content gets banned, report it (when previously flagged and banned cases, such as Salty’s, are reinstated, it sets a precedent). And for those of us not stripped of a voice, it is imperative to move the conversation forward by sharing content through word of mouth and crediting creators as much as possible. As scholar Safiya Noble posits in the opening of her book Algorithms of Oppression: “we must ask ourselves who the intended audience is for a variety of things we find, and question the legitimacy of being in a ‘filter bubble,’ when we do not want racism and sexism, yet they still find their way to us.”

Am I Shadowbanned?
Here’s how to check:
1. Post something with an uncommon hashtag.
2. Ask five people who don’t follow you to search the hashtag.
3. If none of them see your post, you’re probably shadowbanned.
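For the technically inclined, part of this check can be automated. Below is a minimal sketch in Python, assuming (as was true at the time of writing, though Instagram changes this often) that public hashtag pages load without a login and that the shortcodes of recent posts appear in the page source. The hashtag and shortcode shown are placeholders; treat a negative result as a hint, not proof, of a shadowban.

```python
# Hedged sketch: checks whether a post surfaces on Instagram's public
# hashtag page, approximating step 2 of the manual check above.
# Assumption: the page is reachable without logging in and embeds recent
# posts' shortcodes in its HTML. If Instagram gates the page, this will
# report the post as missing even when it isn't.

import urllib.request


def post_visible_in_hashtag(tag: str, shortcode: str) -> bool:
    """Return True if the post's shortcode appears on the public hashtag page."""
    url = f"https://www.instagram.com/explore/tags/{tag}/"
    request = urllib.request.Request(
        url,
        # A browser-like User-Agent; some sites reject the default Python one.
        headers={"User-Agent": "Mozilla/5.0"},
    )
    with urllib.request.urlopen(request) as response:
        html = response.read().decode("utf-8", errors="replace")
    # Recent posts are embedded in the page source, identified by shortcode.
    return shortcode in html


if __name__ == "__main__":
    # The shortcode is the ID in your post's URL: instagram.com/p/<shortcode>/
    # "someuncommontag" and "Bxxxxxxxxxx" are placeholders for your own values.
    if post_visible_in_hashtag("someuncommontag", "Bxxxxxxxxxx"):
        print("Post found via hashtag page - probably not shadowbanned.")
    else:
        print("Post not found - possibly shadowbanned (or the page is gated).")
```

Since the script runs without logging in, it sees roughly what a stranger searching the hashtag would see – which is exactly the perspective the manual check asks five non-followers to provide.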