
Mark Zuckerberg launched Meta’s Twitter copycat app Threads as a “friendly” haven for online public discourse, framing it in sharp contrast to the more adversarial Twitter owned by billionaire Elon Musk.
“We’re definitely focusing on kindness and making this place friendly,” Meta CEO Zuckerberg said on Wednesday shortly after the service launched.
Maintaining that idealistic vision of Threads — which attracted over 70 million users in its first two days — is another story.
To be sure, Meta Platforms is no stranger to managing the Internet’s hordes of rage-inducing, obscene postings. The company said it would subject users of the new Threads app to the same rules it maintains at its photo- and video-sharing social media service, Instagram.
The Facebook and Instagram owner has also been taking an increasingly algorithmic approach to serving up content, which gives it greater control over what kind of fare does well as it tries to steer more towards entertainment and away from news.
However, by connecting Threads with other social media services like Mastodon, and with microblogging’s appeal to news junkies, politicians and other fans of rhetorical combat, Meta is also courting fresh challenges and seeking to chart a new path through them.
For starters, the company will not be extending its existing fact-checking program to Threads, spokeswoman Christine Pai said in an emailed statement Thursday. This eliminates a distinguishing feature of how Meta has managed misinformation in its other apps.
Pai added that Facebook or Instagram posts rated as false by fact-checking partners – which include a Reuters unit – will carry those labels over if posted on Threads as well.
Asked by Reuters to explain why it was taking a different approach to disinformation on Threads, Meta declined to respond.
In a New York Times podcast on Thursday, Adam Mosseri, head of Instagram, acknowledged that Threads was more “supportive of public discourse” than Meta’s other services and therefore more inclined to attract a news-focused crowd, but said the company intended to focus on lighter subjects like sports, music, fashion and design.
However, Meta’s attempt to distance itself from controversy was immediately put to the test.
Within hours of launch, Threads accounts seen by Reuters were posting about the Illuminati and “billionaire Satanists”, while other users compared each other to Nazis and fought over everything from gender identity to violence in the West Bank.
Conservative personalities, including the son of former US President Donald Trump, have complained about censorship after labels appeared warning potential followers that they had posted false information. Another Meta spokesperson said these labels were a mistake.
INTO THE FEDIVERSE
Further content moderation challenges will arise once Meta links Threads to the so-called fediverse, where users of servers operated by other, non-Meta entities will be able to communicate with Threads users. Meta’s Pai said Instagram’s rules would likewise apply to those users.
“If an account or server, or if we find many accounts from a particular server, violates our rules, they will be blocked from accessing Threads, meaning that server’s content will no longer appear on Threads and vice versa,” she said.
Still, online media researchers said the devil was in the details of how Meta handles those interactions.
Alex Stamos, director of the Stanford Internet Observatory and former head of security at Meta, posted on Threads that the company would face bigger challenges in performing key types of content moderation without access to back-end data about users who post banned content.
“With federation, the metadata that the big platforms use to link accounts to a single actor or detect abusive behavior at scale is not available,” said Stamos. “This will make it much harder to stop spammers, troll farms and economically motivated abusers.”
In his posts, he said he expected Threads to limit the visibility of fediverse servers with large numbers of abusive accounts and to apply harsher penalties to those posting illegal material like child pornography.
Yet the interactions themselves raise challenges.
“There are some really strange complications that arise when you start thinking about illegal things,” said Solomon Messing of the Center for Social Media and Politics at New York University. He cited examples such as child exploitation, non-consensual sexual imagery and gun sales.
“If you come across this type of material while indexing content (from other servers), do you have a responsibility beyond just blocking it from Threads?”
© Thomson Reuters 2023