• ms.lane@lemmy.world · +10/-3 · 2 days ago

    It hasn’t taken any jobs, but this will keep being repeated so it can be used as a bludgeon against pay rises and keeping up with inflation.

    ‘you’re lucky to have a job’

    • ipkpjersi@lemmy.ml · +7 · 2 days ago

      I disagree. I have literally heard of people being laid off because managers think AI can and will replace actual workers, and I have literally seen it happen. It’s already happening.

      • quetzaldilla@lemmy.world · +5 · 2 days ago

        Corporations are firing and laying off workers, but that work is not being done by AI-- it’s simply falling on those who are still employed, or not getting done at all.

        I resigned from an international public accounting firm after AI was forced onto very sensitive and delicate projects in order to lower costs. Every professional alarm bell went off, and I left because I could be held liable for their terrible managerial decisions.

        They told me they were sad to see me go, but that AI is the future and they hoped I would change my mind-- this was all back in April.

        Not only did AI fail to do even a fraction of the work we were told it would do, it caused over $2MM in client damages, which the firm then used to justify firing the remaining members of the project team for failing to properly supervise the AI-- even though every manager there struggles to open a PDF.

        AI is not the future because it is literally only capable of looking backwards.

        AI is a performative regurgitation of information that real people put the time and energy into gathering, distilling, refining, and presenting to others to evaluate and contribute to.

        Even worse, AI demonstrably makes its users dependent and intellectually lazy. If you think about it, the more prevalent AI usage becomes, the fewer capable people will be left to maintain it. And to all the fools crying out that AI will take care of itself, or that robots will, I say:

        All LLMs are hallucinating and going psychotic, and that is not something that can be fixed due to the very nature of how LLMs work.

        AI is not intelligent. And while it could be, it would take far too much energy and resources to make cost-effective machines with as many neural connections as are present in the brain of an average MAGA voter-- and that is already a super low bar for most of us to clear.

  • melfie@lemy.lol · +2 · 2 days ago

    AI isn’t taking the jobs; dipshit rich assholes are cutting the jobs. Taking a job implies doing the job, and from that perspective, the remaining people who weren’t laid off are taking the jobs, not AI.

    • innermachine@lemmy.world · +9 · 3 days ago

      The fact that “AI” training off other LLM slop produces worse and worse results is proof that there is no “intelligence” going on, just clever parroting.

      • luciferofastora@feddit.org · +2 · 2 days ago

        LLMs are the embodiment of “close enough”. They’re suitable if you want something resembling a certain mode of speech, formal tone or whatever without having to write it yourself.

        When you use them to train other LLMs, you’re basically training the new models to get “close enough” to “close enough”, with each generation getting a little further from “actually good” until, at some point, it’s just no longer close enough.
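
        A toy sketch of that compounding drift, in case anyone wants to watch it happen: this has nothing to do with real LLM training; each “model” here is just a Gaussian re-fitted to a finite sample drawn from the previous one, and all the numbers are made up for illustration.

        ```python
        # Every single step looks "close enough" to the one before it, but the
        # fitted parameters wander further and further from the original ones.
        import random
        import statistics

        random.seed(0)

        mu, sigma = 0.0, 1.0      # the original "human-made" data distribution
        n = 50                    # training-set size per generation (arbitrary)

        print(f"gen  0: mean={mu:+.3f}  stdev={sigma:.3f}   <-- ground truth")
        for generation in range(1, 16):
            # "train" the next model on samples produced by the previous one
            samples = [random.gauss(mu, sigma) for _ in range(n)]
            mu = statistics.fmean(samples)       # fitted mean
            sigma = statistics.stdev(samples)    # fitted spread
            print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
        ```

        Run it with a few different seeds: the direction of the drift changes, but nothing ever pulls the estimates back toward the original distribution.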

  • Buffalox@lemmy.world · +44/-1 · 4 days ago

    I think the fad will die down a bit when companies figure out that AI is more likely than humans to make very expensive mistakes that the company has to compensate for, and that saying it was the AI is not a valid cop-out.
    I foresee companies going bankrupt on that account.

    It doesn’t help to save $100k by cutting an employee if the AI causes damages of 10 or 100 times that amount.

    • shalafi@lemmy.world · +11 · 4 days ago

      When the bubble bursts, whoever is left standing is going to have to jack prices through the roof to put so much as a dent in their outlay. Their outlay so far. Can’t see many companies hanging in there at that point.

    • a4ng3l@lemmy.world · +6 · 4 days ago

      I put my money on the AI Act here in Europe and the willingness of local authorities to make a few examples. That would help bring some accountability here and there and stir the pot a bit. Eventually, as AI becomes a commodity, it will be less in the spotlight. That will also help.

    • Jesus@lemmy.world · +4 · 4 days ago

      Agreed, but I do think that some jobs are just going to be gone.

      For example, low level CS agents. I worked for a company that replaced that first line of CS defense with a bot, and the end-of-call customer satisfaction scores went up.

      I can think of a few other things in my company that had a similar outcome. If the role is gone, and the customers and employees are being served even better than when they had that support role, that role ain’t coming back.

      • Buffalox@lemmy.world · +2 · 4 days ago

        I’m pretty sure customer service is even an area where I saw a computer make an expensive mistake, promising the customer something very costly, and a court decided the company had to honor the agreement the AI made. But I can’t find the story, because I’m flooded with product-placement articles about how wonderful AI is at saving costs in CS.
        But yes, CS is absolutely an area where AI is being massively pushed.

        • architect@thelemmy.club · +2 · 3 days ago

          The court honored it, for now. I expect that in the future it will be your problem.

          Oh, but the EU?

          Once they’re done with North America, the EU will be a non-issue for them.

      • architect@thelemmy.club · +1 · 3 days ago

        Oh, 100%. The question will be whether more opportunities come from it. Here’s my guess: if you can’t produce something interesting, you will be fighting for scraps. Even that might not be good enough.

  • phutatorius@lemmy.zip · +11 · 3 days ago

    Just look at who’s in charge of the Senate, and ask yourself if they are to be trusted to do anything but lie, steal and carry out witch hunts.

    As for LLMs, unless driving contact-centre customer satisfaction scores even further through the floor counts as an achievement, so far there has been a vast volume of hype and wasted energy and very little to show for it, except for some highly constrained point solutions that aren’t significant enough to make an economic impact. Even then, the ROI is questionable.

  • jaybone@lemmy.zip · +23 · 4 days ago

    LOLLLLLLLL that’s like a third of the US population. Probably half of the number currently employed. There’s no way in hell this useless garbage will take 1/3 to 1/2 of all jobs. Companies that do this will go out of business fast.

    • TankovayaDiviziya@lemmy.world · +2 · 3 days ago

      And that 1/3 would be a perfect horde for fascist brainwashing, consolidating the power of the techno-fascists. The fascists will tell the jobless that immigrants took their jobs, not robots.

  • _stranger_@lemmy.world · +8 · 3 days ago

    So they want to keep them terrified of losing their shitty, barely functioning status quo.

    The reality is that these are the numbers the Republicans want, because they’re the numbers their billionaire owners want. ChatGPT is just accidentally letting us know how they’ve poisoned the models.

  • Buffalox@lemmy.world · +19 · 4 days ago

    And over the next 50 years it will take 485 million jobs, and the unemployment rate will be 235%.

  • thisbenzingring@lemmy.sdf.org · +12 · 4 days ago

    Funny… I expected IT workers to be on that list, but we’re not. AI couldn’t do my job, but it could be my boss, and that frightens me.

  • tidderuuf@lemmy.world · +12 · 4 days ago

    Knowing the way our country is going, I would expect that in the end workers will have to pay an AI tax on their income, and most workers will start working 50 hours a week.

  • GingaNinga@lemmy.world · +4 · 4 days ago

    and then 115 million will be needed to unwind the half-assed implementation and inevitable damage.

  • tal@olio.cafe · +3 · edited · 4 days ago

    I wouldn’t put it entirely outside the realm of possibility, but I think that’s probably unlikely.

    The entire US only has about 161 million people working at the moment. For a 97 million shift to happen, you’d have to transition most human-done work in the US-- roughly 60% of the current workforce-- to machines, using one particular technology, in 10 years.

    Is that technically possible? I mean, theoretically.

    I’m pretty sure that to do something like that, you’d need AGI. Then you’d need to build systems that leveraged it. Then you’d need to get it deployed.

    What we have today is most certainly not AGI. And I suspect that we’re still some ways from developing AGI. So we aren’t even at Step 1 of that three-part process, and I would not at all be surprised if AGI turns out to be a gradual development rather than a “Eureka” moment.