I write ̶b̶u̶g̶s̶ features, show off my adorable standard issue cat, and give a shit about people and stuff. I’m also @CoderKat.

  • 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • The whole CSAM issue is why I’d never personally run an instance, nor any other kind of server that lets users upload content. I have no desire to deal with moderating that material, nor with the legal risk of it even existing on a server I control.

    While I’d like to hope that law enforcement would be reasonable and understand “oh, you’re just some small time host, just delete that stuff and you’re good”, my opinion on law enforcement is in the gutter. I wouldn’t trust law enforcement not to throw the book at me if someone did upload illegal content (or if I didn’t handle it correctly). Safest to let someone else deal with that risk.

    And even if you can win some case in court, just having to go to court can be ludicrously expensive and risk high impact negative press.


  • Scammers have long used bots for text-based scams (though dumb ones). Phone calls are a lot harder, though. And there are also “pig butchering” scams, which are the long cons, most commonly fake relationships. I suspect AI-run long cons would have a hard time keeping someone convinced for months the way human scammers manage to.

    I suspect scammers will have a harder time utilizing AI, though. For one thing, scammers are often not that technologically advanced. They can put together some basic scripts, but running their own AI is difficult. They could use an established AI service, but scamming would almost surely be against its ToS, so the service will likely try to filter scam attempts out.

    That said, it might just be a matter of time. Today, developing your own AI has a real barrier to entry, but it’s likely to get a lot easier. And with enough advancement, AI could get good enough that fooling someone for months becomes possible, especially once it can generate convincing video (long-con scammers usually do video chat with their victims).

    And honestly, most scams have a hundred red flags anyway. As long as the AI doesn’t outright say something like “as a large language model…”, you could probably convince a non-zero number of victims (and maybe even if the AI fucks up like that; I mean, somehow people get convinced the IRS takes app store gift cards, so clearly you don’t have to be that convincing).




  • TikTok is the absolute worst at irrational censorship. It’s a shame, because the site is immensely popular and therefore full of very interesting content. Yet this is far from the first unreasonable removal they’ve done. It’s well known that TikTok users came up with substitute words to avoid terms likely to get their content removed (e.g., “unalived” instead of “killed”).


  • Strongly agreed. I think a lot of commenters in this thread are getting derailed by their feelings towards Meta. This is truly a dumb, dumb law and it’s extremely embarrassing that it even passed.

    It’s not just Meta. No company wants to comply with this poorly thought out law, written by people who apparently have no idea how the internet works.

    I think most of the people in the comments cheering this on haven’t read the bill. It requires companies to pay news sites just to link to them, which is utterly insane. Linking to news sites is a win-win: Facebook or Google gets to show relevant content and the news site gets users. This bill is going to hurt Canadian news sites, because sites like Google and Facebook will simply avoid linking to them.


  • While you’re right that that’s a downside of downvotes, I think that it’s far better than the alternative.

    Downvotes give us a way to discourage really bad behavior and let others see that it’s discouraged. For example, suppose someone posts something bigoted. It sucks to see those kinda comments (especially when they affect you personally). When they’re heavily downvoted, it feels better, since it tells you the views expressed in the comment aren’t acceptable. It’s extremely discouraging when I see bigoted posts with a positive score, and without downvoting, they all have positive scores; the bad ones are just “less positive”.

    It’d be nice if reporting were able to remove such comments before anyone sees them, but that will never be the case. Too many communities don’t remove comments fast enough, and many more simply won’t remove comments unless they’re really bad, if at all. Some moderators are bigots themselves, and others simply can’t recognize the dog whistles a comment may contain. Or they’re not personally affected by the malicious comment, so they’re more easily convinced that a politely worded comment is acceptable even when it’s blatantly bigoted.

    To be clear, it does suck that users will use it as a disagree button on comments that are otherwise good, but the trade-off is far, far worth it. The presence of downvotes was a major reason why I used Reddit (and now this) while disliking the likes of Twitter.