Social media algorithms eliminate or limit the reach of inappropriate or hateful posts. These arcane algorithms have resisted efforts to govern them, owing to a combination of corporate secrecy and technical complexity, leading to what is known as the "black box problem".
At times, though, social media influencers have reported sudden drops in the reach of and reactions to wholesome content, leading the posters to suspect that their posts have been algorithmically suppressed. Influencers colloquially describe such an unexplained drop in reach as "shadowbanning", a practice that social media platforms have largely denied exists.
Users find it difficult to refute these denials because the algorithms are both opaque and complex. As a consequence, they doubt themselves and second-guess what they know about how the platforms and their algorithms work. In an article published in Information, Communication & Society, Kelley Cotter, assistant professor at the Penn State College of Information Sciences and Technology, has termed this phenomenon "black box gaslighting".
Cotter explains, "Black box gaslighting suggests that the lack of transparency around algorithms and our inability to always explain their behavior creates a space to undermine people's perceptions of reality: to make users think that what they believe about algorithms is wrong or to question their own ability to perceive them clearly. Black box gaslighting is a threat to our collective ability to hold social media platforms accountable."
Cotter warns that the lack of transparency of social media algorithms and the errors within them can contribute to larger societal problems, especially for vulnerable members of the population who see their content inexplicably restricted or minimally distributed by the algorithms. Cotter explains, "If we have critical claims about shadowbanning and censoring of different marginalised communities, but social media platforms can deny it and effectively convince the public otherwise, then it's really hard to ensure that algorithmic systems operate in the public's best interest and can be held accountable for the ills that they might perpetrate. And, in fact, influencers and other users are often the first to see and experience problems because the problems are so dependent on the who, what, where of algorithmic operations."
In the study, Cotter examined influencers' claims of shadowbanning and the platform's responses to better understand how influencers experience the practice. In the few cases where statements about shadowbanning were released to the media, Cotter found that the platform used the term in a different sense than users did. Cotter suggests that the platform attempted to debunk the influencers' claims by offering alternative explanations: glitches, users' failure to create engaging content, or shifts in consumer behaviour beyond the platform's control that made a post's reach a matter of chance.
Cotter hopes that by drawing attention to the issue, users can apply this shared knowledge to hold the platforms accountable: "In this case, with black box gaslighting, it has to do with the kind of power asymmetry between platforms and influencers who are labouring on those platforms."