Instagram said Thursday it will start alerting parents if their kids repeatedly search for terms clearly associated with suicide or self-harm. The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.
Instagram says it already blocks such content from showing up in teen accounts’ search results and directs people to helplines instead.
The announcement comes as Meta is in the midst of two trials over harms to children. A trial underway in Los Angeles questions whether Meta’s platforms deliberately addict and harm minors. Another, in New Mexico, seeks to determine whether Meta failed to protect kids from sexual exploitation on its platforms. Thousands of families — along with school districts and government entities — have sued Meta and other social media companies claiming they deliberately design their platforms to be addictive and fail to protect kids from content that can lead to depression, eating disorders and suicide.
“Hey just a heads up our algorithm is making your spawn suicidal”
Yeah, I hate that our only two realistic options are letting companies self-regulate or mandating age verification, but age verification feels like the lesser of two evils. We’d need to see much more immediate and catastrophic effects before the government would force social media companies to give up their algorithms and actually moderate without relying on imprecise LLMs and automods.
Forcing people to give their picture to Peter Thiel is not regulation. Age verification laws are giving companies more licence to abuse users, not less.
Respectfully, I’ve heard all the arguments and do not care to litigate it again.
What you and I think is immaterial. These are the only realistic outcomes in the current ecosystem. You can go have pointless arguments about it with someone else.
If you don’t want to discuss something just don’t respond lol
Parental controls are a thing that exists. It used to be that parents were responsible for monitoring what their children do, not the government or private corporations.
You can go have pointless arguments about it with someone else.
“Cigarettes now with filters!”
Parents everywhere: oh thank goodness
Bullshit. Tell me one reason why a government can’t decide that the algorithm is harmful and let the company choose between removing the algorithm or being banned in that country.
And how will Instagram know who my parents are?
The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.
Like this?
There’s a music band named Suicidal Tendencies. Can’t even look that shit up online without getting a notice and probably flagged on a list.
Side note, bad name for a band…
“Alexa, play Suicidal Dream by Silverchair.”
“Contacting the Suicide Prevention Hotline…”
I’m just gonna drop this link here…
Do any of us think Meta will moderate content in any meaningful way? Even for this supposed parental supervision program?
What if the parents are the cause of the suicide thoughts?
I cannot even imagine giving a social media platform enough information to even do this. Maybe just don’t?
I wonder how they will monetise this feature…