
“Extremely concerning.”
That is how tech mogul Elon Musk reacted to a bombshell report published by the Wall Street Journal (WSJ), which claims that Instagram’s algorithms actively promote pedophile networks, facilitate the sale of explicit videos of minors, circulate ‘preteensex’ content menus, and even enable in-person meetups with underage boys and girls.
Shockingly, these illicit activities are reportedly coordinated through coded emojis, such as a map and a cheese pizza.
Musk, a vocal critic of child pornography and pedophilia, played a significant role in amplifying the exposé.
His tweet sharing a screenshot of the WSJ’s news exclusive titled ‘Instagram Connects Vast Pedophile Network’ garnered over 120,000 likes and 26,000 retweets as of this posting.
Replying to another tweet, Musk expressed his horror, writing, “Wow, this is terrible,” and directing his 142 million followers to the shocking details of how Instagram allegedly helps pedophiles find child pornography and arrange meetings with children.
It is worth noting that shortly after acquiring Twitter, Musk and his team took action by banning tens of thousands of accounts that promoted child sexual exploitation and non-consensual nudity.
This recent revelation has sparked outrage on social media platforms, with the hashtag #PedoGram trending on Twitter. Thousands of users are condemning Mark Zuckerberg, Meta’s CEO and founder, for allegedly inviting pedophiles and child predators onto the Instagram platform.
“The child rapists had no where to go after Epstein’s island, so Zuckerberg invited them to instagram… #Pedogram,” tweeted user Gunther Eagleman (@GuntherEagleman).
The WSJ report, based on investigations by the Journal and researchers at Stanford University, claims that Instagram not only hosts pedophilic content but that its algorithms actively promote it. Researchers found that Instagram serves as a breeding ground for child pornography, connecting pedophiles with content sellers through sophisticated recommendation systems that excel at linking individuals who share niche interests.
The study also found that Instagram allowed users to search for hashtags such as ‘#pedowhore’ and ‘#preteensex,’ which connected them to accounts selling child pornography. Disturbingly, many of these accounts claimed to be operated by children themselves, adopting handles like “little slut for you.”
Rather than posting explicit material openly, these accounts provided “menus” of content from which buyers could select what they wanted. Shockingly, some accounts even offered paid meetups with the children involved.
To uncover the extent of Instagram’s role in promoting child sexual content, researchers set up test accounts to gauge the speed at which the platform’s “suggested for you” feature would recommend accounts selling child sexual material. Within a short timeframe, Instagram’s algorithm inundated the test accounts with content that sexualized children, often linking to off-platform content trading sites.
The researchers discovered 405 sellers of “self-generated” child sexual material through hashtags alone. Some of these accounts claimed to be operated by children as young as 12, using hashtags associated with illegal material. Astonishingly, Instagram sometimes displayed a pop-up warning stating, “These results may contain images of child sexual abuse,” but still offered users the option to proceed and view the content anyway.
Pedophiles on Instagram reportedly utilized a system of emojis to communicate in coded language about the illicit content they were facilitating. For instance, an emoji of a map symbolized “MAP” or “Minor-attracted person,” while a cheese pizza emoji represented “CP” or “Child Porn.” Accounts involved in these activities often identified themselves as “seller” or “s3ller” and subtly indicated the ages of the children being exploited, such as using phrases like “on Chapter 14” instead of explicitly stating their age.
Although Instagram claims to crack down on such activity, the report highlights several shortcomings. Even after numerous posts were reported, not all of them were taken down; in some cases, Instagram instead suggested that users hide the accounts to avoid seeing them. And despite banning certain hashtags associated with child pornography, Instagram’s AI-driven hashtag suggestions offered workarounds, recommending that users try variations of their searches and append keywords like “boys” or “CP” to get the desired results.
Comparatively, the Stanford research team conducted a similar test on Twitter and found fewer accounts offering to sell child sexual abuse material. They noted that Twitter’s algorithm was less likely to recommend such accounts and that offending accounts were removed more swiftly than on Instagram.
Cases like this underscore the pressing need for social media companies, particularly Meta, to strengthen oversight of their recommendation algorithms and content moderation systems to prevent the proliferation of such abhorrent content.