Serikzhan Bilash founded the Atajurt Kazakh Human Rights organization in 2017 to be a voice for the voiceless.
Now, the very platform that once gave him and his organization a voice and an audience has attempted to silence them.
On June 15, Bilash’s access to his YouTube account was temporarily suspended, initially without explanation. The account holds more than 11,000 videos, mostly testimonials from family members of people currently detained in the more than 260 known political reeducation camps, which hold between 1.8 million and 3 million people, predominantly Uyghurs, a Muslim minority in China, along with ethnic Kazakhs and Kyrgyz.
YouTube has since clarified that it revoked access to the account because the channel’s use of personally identifiable information violates its community standards. While YouTube’s prohibition on personally identifiable information in videos may be justified as a safeguard against bullying in one circumstance, it can obstruct record-keeping and awareness of human rights violations in another.
Bilash includes personally identifiable information in the videos to verify the identities of the individuals providing testimony and their detained family members.
Those who gave testimony willingly provided their personally identifiable information for the sake of their families. Bilash believes that including this information strengthens the veracity of their claims and makes it harder for Chinese authorities to dismiss the videos as propaganda or misinformation.
YouTube also serves as an important archive of the videos (according to Bilash, it’s the only complete archive), since they have been targets of the Kazakh government and other authorities due to the sensitive nature of their advocacy.
Bilash’s first instinct upon hearing of his YouTube account’s suspension was fear that the majority of his work had been lost due to a misapplication of policy.
While YouTube has since reinstated Bilash’s access to his account and has made efforts to scrub personally identifiable information from the videos already posted, Bilash fears this won’t be the last time he is targeted.
And he’s likely not wrong.
The mass gaming of community guidelines to target political enemies on social media is here to stay. Indeed, it’s already commonplace.
Bad actors working together to wage war against a third party using a tech company’s own policies adds a new element to the cat-and-mouse game of content moderation.
Unlike more straightforward cases, such as Zoom executive Xinjiang Jin’s direct work with Chinese Communist Party officials to pull down and stymie Zoom calls on Tiananmen Square and other topics “unacceptable” to the party, new efforts trade on obfuscation.
Previous attempts looked like an army of Twitter accounts directly controlled by the Saudi regime attempting to drown out information on Jamal Khashoggi’s death by amplifying pro-Crown Prince Mohammed bin Salman content.
But more recent cases, such as Bilash’s—even if his targeting was not directly sponsored by an authoritarian government—highlight the blending of strategic intent by state actors with activists, engaged citizens, and chaos agents.
For example, during renewed violence in Gaza in May, a pro-Israel Facebook group with 77 million followers found itself the target of more than 800,000 “hate comments and posts [that] came within an hour,” including numerous invocations of Hitler and Hitler quotes.
Facebook shut down the page, Jerusalem Prayer Team, claiming the page violated its rules against “inauthentic behavior,” due to the 2 million “examples of hate speech” that accrued in the comments.
The “success” of this pressure campaign—reportedly conducted by radical Islamist groups and anti-Israel activists—will embolden similar internet mobs to attempt to deplatform pages, organizations, and individuals with whom they disagree.
While some protestations may be organic, platforms will increasingly contend with bad actors that manipulate their vague and inconsistently enforced set of rules to restrict free speech.
Companies must be agile enough to contend with this type of crowdsourced manipulation without sacrificing consistent and transparent enforcement of community guidelines.
Platforms should recognize and acknowledge that bad actors actively abuse their rules to silence perceived enemies. They must craft rule sets that counter such bad actors as aggressively as they scrutinize mainstream conservative speech.
Due to their size, scale, and reach, the Big Five companies will never be able to completely root out all mischief conducted on their platforms and products. As such, these companies—the new gatekeepers of information—must also articulate clear and uniformly applied methods of recourse for all users and organizations.
Further, the Atajurt Kazakh Human Rights and Jerusalem Prayer Team stories demonstrate the importance of efforts to empower users through privacy-preserving technical solutions and alternative platforms.
Bilash transferred his videos to YouTube competitor Odysee, a website built on a blockchain protocol that is fast becoming a new home for viewpoints that fall outside the prevailing narratives of today’s culture.
Diversification is the immediate path forward. Short of convincing the major platforms that content moderation is a human rights issue, the best option to stave off permanent deletion is to use an alternative service provider that endorses and defends the freedom of expression.
Olivia Enos is an Asian Studies Center senior policy analyst within The Davis Institute for National Security and Foreign Policy at The Heritage Foundation. Kara Frederick is a research fellow at The Heritage Foundation’s Center for Technology Policy.
Editor’s Note: This piece originally appeared on The Daily Signal.