[Illustration: A cracked shield with social media logos falling through the crack]

While millions of teens scroll through Instagram daily, relying on the platform’s safety features to protect them, a recent report from Fair Play for Kids revealed that 64% of those protections are ineffective or non-existent. Of the 47 features tested in the app, only eight functioned effectively, while 30 were ineffective or had been discontinued and nine worked only with limitations. As a result, harmful content and behavior go unchecked, leaving teens exposed to accounts related to suicide, eating disorders and illegal substances.

Safety policies are designed and enforced differently across platforms, but reports on major social media apps reveal a common pattern of harm caused by these safety failures.

TikTok has been repeatedly reported for recommending self-harm and other harmful mental health content, for ineffective age gating and for allowing dangerous viral challenges to bypass its safety filters. According to research from Amnesty International, within three to 20 minutes of scrolling, more than half the videos in the “For You” feed were related to mental health struggles, with multiple recommended videos romanticizing or encouraging suicide.

When it comes to online safety, some teens feel that the responsibility should be shared between parents and the platforms. 

“I feel like they kind of have half the responsibility because most of it should actually be on the parents, but the platform should crack down and regulate what they show,” junior King Lossie said. 

Even with Restricted Mode enabled, Global Witness researchers found explicit sexual content within minutes of creating fake TikTok accounts registered as 13-year-olds. The investigation also showed how easily the platform’s protections could be bypassed: TikTok never asked for additional information to confirm the age of any of the test accounts.

The Federal Trade Commission filed a lawsuit in 2019 against ByteDance, TikTok’s parent company, claiming it failed to comply with the Children’s Online Privacy Protection Act. Known as COPPA, the act protects the online personal information of children under 13 and requires parental consent before that information can be collected. After the legal settlement with the FTC, TikTok created a Family Pairing option that lets parents and children link their TikTok accounts so parents can manage screen time, restrict direct messaging and limit certain content. However, the feature has a major flaw: it only works if it is turned on, so children can avoid it by never enabling it in the first place or by creating separate accounts without parental approval.

Beyond harmful recommendations, several of TikTok’s viral challenges have led to serious injuries and even deaths. The “Blackout Challenge,” in which users choke themselves until they pass out, was recommended on the “For You” page of Tawainna Anderson’s 10-year-old daughter, Nylah, who died after attempting it. According to the Social Media Victims Law Center, Anderson sued TikTok in 2022 over her daughter’s death. The case was initially dismissed, but an appeals court reversed that decision in 2024, marking the first time a “Blackout Challenge” case against TikTok was allowed to proceed.

Similar safety failures have been documented on Meta-owned platforms such as Instagram, which have left users vulnerable to predators and exploitation, including sextortion and grooming. Meta has also been criticized for lenient enforcement against sex trafficking, prostitution and sexual solicitation under its 17x strike policy.

According to an article from Time Magazine, Meta announced changes under which users under 18 would automatically be placed in “Teen Accounts” designed to screen harmful content and restrict messages from users they don’t follow. However, newer reports show the opposite: harmful content and unwanted messages remain widespread on Instagram, and 56% of teen users said they ignore it rather than report it because it has become normalized. The reports also found that, despite Meta’s ban on users under 13, elementary school-aged kids were not only using the platform but were actively incentivized to engage in risky sexualized behavior by Instagram’s recommendation-based algorithm, which amplifies sexualized content.

The 17x strike policy refers to an internal enforcement rule at Meta that allowed accounts reported for sex trafficking to remain active until their 17th violation. An account could rack up 16 violations for prostitution and sexual solicitation and would only be suspended on the 17th, a strike threshold considered unusually high in the industry. Learning about Meta’s internal enforcement policy also raised concern among students.

“It shouldn’t take 17 times for Instagram to ban that account,” freshman Gavin Halpert said. 

Alongside this policy, the platform allegedly made it difficult to report child sexual abuse material. According to the same Time Magazine article, Vaishnavi Jayakumar, Meta’s former head of safety and well-being, raised concerns about the issue multiple times, but her concerns were dismissed on the grounds that the problem would be too difficult to resolve. A Meta spokesperson has disputed the allegation, and it will be up to the courts to weigh the claims.

Continuing the pattern of inappropriate sexual content, X, formerly known as Twitter, has faced growing concerns about widespread misuse of a new Grok AI feature. In late December 2025, X introduced a feature in its AI tool that allowed users to edit others’ posted images using Grok. According to a New York Times article, interest in Grok’s image editing abilities exploded after Elon Musk shared an image of himself in a bikini generated by the chatbot. Afterward, users flooded the social media app with requests for the bot to remove clothing from images of women and children, which the bot then posted publicly.

The result was a massive surge in generated content: the bot created 4.4 million images in just nine days, and an analysis by the Center for Countering Digital Hate found that 65% of them were sexual imagery of men, women or children. Amid the backlash, X limited Grok’s image creation on Jan. 8 to users who pay for premium features, which reduced the number of images, but the restrictions do not extend to Grok’s standalone app or website, where users can continue generating such images in private.

“What concerns me the most about AI being used to manipulate images of real people is that, at some point, people will not know which one is true and which one isn’t,” junior Ayesha Tabb said. 

Across all of these platforms, the pattern remains consistent: safety features are often unreliable, easy to bypass and frequently introduced only after serious harm has already occurred. While many social media companies promote safety tools meant to protect young users, investigations show that enforcement repeatedly falls short. These failures leave teens navigating spaces that were not designed with their safety as the priority, underscoring the importance of teaching young people how to protect themselves online.

“My advice is, when anyone seems too pushy, just block them,” JAG teacher Raven Carter said. “Don’t even question it because your real friends, the real people who really genuinely like you, wouldn’t do that to you.”