• stevedidwhat_infosec@infosec.pub
    4 months ago

    I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But as soon as people argue that it’s only a tool that can harm, that’s where I draw the line. That’s, in my opinion, when govts/ruling class/capitalists/etc start to put in BS “safeguards” to prevent the public from making use of the new power/tech.

    I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it’s something I’m trying to work on, so I appreciate your cool-headed response here. I took the “you clearly don’t know what luddites are” as an insult to what I do or don’t know. I was specifically trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn’t mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in ruling-minority/capitalist practices.

    Again, it was an off-the-cuff response; I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.

    • HelloThere@sh.itjust.works
      4 months ago

      AI is a tool that should be kept open to everyone

      I agree with this principle; however, the reality is that, given the massive computational power needed to run many (but not all) models, control of AI is in the hands of the mega-corps.

      Just look at what the FAANGs are doing right now, and compare to what the mill owners were doing in the 1800s.

      The best use of LLMs, right now, is for boilerplating initial drafts of documents. Those drafts then need to be reviewed and tweaked by skilled workers ahead of publication. This can be a significant efficiency saving, but it does not remove the need for the skilled worker if you want to maintain quality.

      But what we are already seeing is CEOs, etc., deciding to take “a decision based on risk” to gut entire departments and replace them with a chatbot, which then hallucinates the details of a particular company policy, leading to a lower-quality service but significantly increased profits, because you’re no longer paying for ensured quality.

      The issue is not the method of production, it is who controls it.