Meta deletes 600K accounts linked to predatory behavior

Meta introduced new safety tools for teen accounts on Wednesday, along with data showing the impact of its latest safety features.

In a blog post, Meta said that it removed approximately 635,000 Instagram accounts earlier this year, part of a larger effort to make Instagram safer for teens.

The new features include the option for teens to view Safety Tips, to block and report an account with a single tap, and to see the date an account joined Instagram, all of which are "designed to give teens age-appropriate experiences and prevent unwanted contact."

“At Meta, we work to protect young people from both direct and indirect harm. Our efforts range from Teen Accounts, which are designed to give teens age-appropriate experiences and prevent unwanted contact, to our sophisticated technology that finds and removes exploitative content,” the platform said in a press release. “Today, we’re announcing a range of updates to bolster these efforts, and we’re sharing new data on the impact of our latest safety tools.”

[Image: New teen safety features on Instagram. Credit: Meta]

Teens on Instagram blocked accounts one million times in June and reported another one million after seeing a Safety Notice, Meta reported. Last year, the company implemented a nudity protection feature that blurs images suspected of containing nudity. Now, the company says the vast majority of users, 99 percent, keep the tool turned on, and in June, over 40 percent of those blurred images stayed blurred, "significantly reducing exposure to unwanted nudity," the blog post read. Meta also recently started warning users who attempt to forward a blurred image, asking them to "think twice before forwarding suspected nude images." In May, 45 percent of people who saw the warning didn't forward the blurred image.

The platform is also implementing protections for adult-managed Instagram accounts that feature or represent children. Among them are the new Teen Account protections and additional notifications about privacy settings. The company says it will also stop recommending these accounts to adult accounts that have shown suspicious behavior. Finally, Meta will bring its Hidden Words feature to these kid-focused accounts, which should help keep sexualized comments from appearing on their posts.

As part of these teen safety efforts, Meta has removed nearly 135,000 violating Instagram accounts that were sexualizing these child-focused accounts, along with 500,000 more accounts "that were linked to the original accounts," according to the blog post.

This move is part of Meta's continued efforts to make Facebook and Instagram safer for kids and teens, but it also comes after the company successfully lobbied to stall the Kids Online Safety Act in 2024. The bill was reintroduced this year despite what Politico described as a "concerted Meta lobbying campaign" to keep it out of Congress. Meta opposes the bill on the grounds that it violates the First Amendment, though critics argue the company's opposition is financially motivated.

The announcement also follows Meta's removal of 10 million fake profiles impersonating creators, part of a broader push to clean up users' Facebook Feeds.
