The wrongful death lawsuit against several social media companies for allegedly contributing to the radicalization of a gunman who killed 10 people at a grocery store in Buffalo, New York, will be allowed to proceed.
Are the platforms guilty, or are the users who supplied the radicalizing content guilty? Last I checked, most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves.
most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves
It's their job to block that content before it reaches an audience, but since that's how they make their money, they don't or won't do that. The monetization of evil is the problem, and those platforms are the biggest perpetrators.
It's their job to block that content before it reaches an audience
The problem is (or isn’t, depending on your perspective) that it is NOT their job. Facebook, YouTube, and Reddit are private companies that have the right to develop and enforce their own community guidelines or terms of service, which dictate what type of content can be posted on their platforms. This includes blocking or removing content they deem harmful, objectionable, or radicalizing. While these platforms are protected under Section 230 of the Communications Decency Act (CDA), which provides immunity from liability for user-generated content, this protection does not extend to knowingly facilitating or encouraging illegal activities.
There isn’t specific U.S. legislation requiring social media platforms like Facebook, YouTube, and Reddit to block radicalizing content. However, many countries, including the United Kingdom and Australia, have enacted laws that hold platforms accountable if they fail to remove extremist content. In the United States, there have been proposals to amend or repeal Section 230 of the CDA to make tech companies more responsible for moderating the content on their sites.
The argument could be made (and probably will be) that they promote those activities by letting their algorithms amplify that content. It's a dangerous precedent to set, but not unlikely given the recent rulings.
Yeah, I have made that argument before. By pushing content via recommended lists and autoplay, YouTube becomes a publisher and needs to be held accountable.
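(As an aside: the "algorithms promote that content" mechanism being argued about here is, at its core, engagement-ranked recommendation. Here is a minimal, hypothetical Python sketch of that idea; the scoring weights, field names, and "up next" queue are illustrative assumptions, not any platform's actual code.)

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    avg_watch_minutes: float  # average minutes watched per view
    share_rate: float         # shares per view

def engagement_score(v: Video) -> float:
    # Purely engagement-driven ranking: whatever keeps people watching
    # and sharing floats to the top, regardless of what it actually says.
    return v.avg_watch_minutes + 50.0 * v.share_rate

def up_next(candidates: list[Video], k: int = 3) -> list[Video]:
    # The hypothetical "up next" / autoplay queue is just the top-k by score.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

catalog = [
    Video("Cooking tutorial", 5.0, 0.01),
    Video("Outrage monologue", 9.0, 0.06),
    Video("Cat compilation", 4.0, 0.02),
]
print([v.title for v in up_next(catalog, k=2)])
# ['Outrage monologue', 'Cooking tutorial']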
Not how it works. Also, your use of “becomes a publisher” suggests to me that you are misinformed - as so many people are - into thinking that there is some sort of publisher-vs-platform distinction in Section 230. There is not.
Oh no, I am aware of that distinction. I just think it needs to go away and be replaced.
Currently, Section 230 treats websites as not responsible for user-generated content. For example, if I made a video defaming someone, I get sued, but YouTube is in the clear. But if The New York Times publishes an article defaming someone, they get sued, not just the writer.
Why? Because the NYT published that article, but YouTube just hosts it. This publisher/platform distinction is not stated in Section 230, but it is part of U.S. law.
This is frankly bizarre. I don’t understand how you can even write that and reasonably think that the platform hosting the hypothetical defamation should have any liability there. Like this is actually a braindead take.