Facebook censors IYSSE online meeting about Trump coup threat

On Monday, Facebook blocked an online meeting hosted jointly by the International Youth and Students for Social Equality (IYSSE) at the University of Michigan and Wayne State University from being posted as an event page.

The virtual meeting, planned for Thursday, Nov. 19 from 8 p.m. to 9:30 p.m., is titled “Trump’s Electoral Coup and the Threat of Dictatorship.” When members of the IYSSE attempted to schedule the meeting as a Facebook event, they were blocked and a notice came up with the following message: “The event has been removed because it goes against our Community Standards. Go to your Support Inbox to learn more.”

Facebook provided no further explanation of how the meeting violated its community standards, and, contrary to the message, there was no additional information in the account’s Support Inbox.

That the blocking of the event was an act of political censorship by Facebook—either by means of artificial intelligence or a human monitor—is proven by the fact that when the organizers changed the event to the generic title “IYSSE Meeting,” the warning message disappeared and the event was scheduled without any problems.

Facebook’s censorship of the IYSSE event is an example of how the unprecedented suppression of political speech rolled out by the social media platforms before Nov. 3 has been intensified in the two weeks since the 2020 US elections.

Shutting down accounts—such as Twitter’s suspension of the International Youth and Students for Social Equality (US) account last Wednesday—removing and “fact checking” posts and “slowing the spread of misinformation” are being ramped up post-election by the tech monopolies on the basis of thoroughly undemocratic and authoritarian policies.

Twitter made it clear last Thursday that its pre-election information censorship regime of “labels, warnings and pre-bunks” will continue indefinitely and that some “significant product changes” that prevent Tweets from being “amplified” on the platform have been made permanent.

A Twitter blog post by Vijaya Gadde, the company’s Legal, Policy and Trust & Safety Lead, and Kayvon Beykpour, its product lead, said they “want to be very clear that we do not see our job as done—our work here continues, and our teams are learning and improving how we address these challenges.”

Gadde and Beykpour reported that between Oct. 27 and Nov. 11, approximately 300,000 tweets had been labeled “for content that was disputed and potentially misleading.” They added that 456 tweets were covered by a warning message and “had engagement features limited (Tweets could be Quote Tweeted but not Retweeted, replied to or liked).”

The blog post said the Twitter “pre-bunk” prompts—posting messages at the top of a user’s timeline to pre-emptively debunk “misinformation”—during the specified time frame “were seen 389 million times, appeared in people’s home timelines and in Search, and reminded people that election results were likely to be delayed, and that voting by mail is safe and legitimate.”

Gadde and Beykpour acknowledged that the measures now being made permanent to slow the spread of “misleading information” on Twitter are reducing overall sharing. They wrote in the blog post that one of the product changes being kept in place, prompting users to comment before they retweet another user’s tweet, led to a 20 percent drop in all post sharing on the platform.

In addition to the event blocking described above, Facebook has continued to place fact-check labels on every post or shared link that mentions the US elections, ballots or election results. The labels link users to the Facebook Voting Information Center, which features content from the Bipartisan Policy Center and the National Conference on Citizenship, two organizations dedicated to protecting the two-party system.

Due to the massive scale of Facebook—2.7 billion users worldwide posting 4.75 billion items on the platform each day—the company relies heavily upon machine-learning and natural-language processing technology to review posts and remove them, label them as “disinformation” or throttle their spread on the platform. In addition, Facebook employs an army of 35,000 content review specialists who work with third-party fact-checking organizations.
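To make concrete the kind of automated review described above, the following is a minimal sketch, in Python, of how a scoring-and-action pipeline of this general type can work. The keyword heuristic, thresholds and actions are assumptions for illustration only; Facebook’s actual models and policies are proprietary and not public.

```python
# Minimal, hypothetical sketch of an automated content-review pipeline.
# The keyword heuristic, thresholds and actions are illustrative assumptions
# and do not describe Facebook's actual (proprietary) systems.

FLAGGED_TERMS = ("ballots", "election results", "voter fraud")

def misinformation_score(text: str) -> float:
    """Placeholder classifier: a real system would use a trained
    machine-learning / NLP model rather than keyword matching."""
    lowered = text.lower()
    hits = sum(term in lowered for term in FLAGGED_TERMS)
    return hits / len(FLAGGED_TERMS)

def review(text: str) -> str:
    """Map a score to a moderation action (assumed thresholds)."""
    score = misinformation_score(text)
    if score >= 0.7:
        return "remove"
    if score >= 0.3:
        return "label_and_throttle"
    return "allow"

print(review("Claims about ballots and the election results"))
# -> "label_and_throttle" (2 of 3 flagged terms matched)
```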

When asked by the Columbia Journalism Review about the process used by the company to identify “misinformation,” a Facebook representative said, “If one of our independent fact-checking partners determines a piece of content contains misinformation, we use technology to identify near-identical versions across Facebook and Instagram.”
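Matching “near-identical versions” of a post is, in general terms, a near-duplicate text detection problem. One common technique, not necessarily the one Facebook uses, compares word-shingle sets with Jaccard similarity, as in this hedged sketch:

```python
# Hedged sketch of near-duplicate text detection using word shingles and
# Jaccard similarity. Facebook's matching technology is not public; this
# only illustrates the general technique of finding near-identical copies.
import re

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles of a punctuation-stripped text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def near_identical(text_a: str, text_b: str, threshold: float = 0.8) -> bool:
    """Treat two posts as near-identical above an assumed similarity threshold."""
    return jaccard(shingles(text_a), shingles(text_b)) >= threshold

original = "Mail-in ballots are being counted after election day"
reshared = "Mail-in ballots are being counted after election day!"
print(near_identical(original, reshared))  # True: only punctuation differs
```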

In explaining its post-election censorship, Facebook issued a statement that said it was taking additional steps “to keep this content from reaching more people.” Among the steps taken are “demotions for content on Facebook and Instagram” that “our systems predict” may be misinformation, “including debunked claims about voting.”
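“Demotion” in this context generally means lowering a post’s position in the ranked feed rather than removing it outright. The sketch below illustrates the idea with an assumed scoring formula and demotion factor; Facebook’s actual ranking system is not public.

```python
# Hypothetical illustration of ranking "demotion": posts predicted to be
# misinformation keep circulating but are pushed down the feed by scaling
# their ranking score. Scores, probabilities and the 0.1 factor are assumed.

DEMOTION_FACTOR = 0.1      # assumed multiplier applied to demoted posts
MISINFO_THRESHOLD = 0.8    # assumed prediction cutoff for demotion

def ranked_feed(posts):
    """Sort posts by engagement score, demoting predicted misinformation."""
    def effective_score(post):
        score = post["engagement_score"]
        if post["predicted_misinfo"] >= MISINFO_THRESHOLD:
            score *= DEMOTION_FACTOR
        return score
    return sorted(posts, key=effective_score, reverse=True)

feed = [
    {"id": "a", "engagement_score": 90.0, "predicted_misinfo": 0.95},
    {"id": "b", "engagement_score": 40.0, "predicted_misinfo": 0.05},
]
print([p["id"] for p in ranked_feed(feed)])  # ['b', 'a']: post "a" is demoted
```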
