• Soulphite@reddthat.com · 27 points · 22 days ago

    Talk about an extra slap in the fuckin face… getting blamed for something your replacement did. Cool.

  • frustrated_phagocytosis@fedia.io · 13 points · 22 days ago

    Would said employees have voluntarily used the agent if Amazon didn’t demand it? If no, this isn’t on them. They shouldn’t be responsible for forced use of unvetted tools.

  • kibiz0r@midwest.social · 10 points · 21 days ago

    This is a terrible idea for Amazon, the cloud services company.

    But for Amazon, the AI company? This is them illustrating the new grift that almost any company can do: use AI to keep a plausible mirage of your company going while reducing opex, and sacrifice humans when necessary to dodge accountability.

    But his job wasn’t even to supervise the chatbot adequately (single-handedly fact-checking 10 lists of 15 items is a long, labor-intensive process). Rather, it was to take the blame for the factual inaccuracies in those lists. He was, in the phrasing of Dan Davies, “an accountability sink” (or as Madeleine Clare Elish puts it, a “moral crumple zone”).

    https://locusmag.com/feature/commentary-cory-doctorow-reverse-centaurs/

    • LurkingLuddite@piefed.social · 7 points · 22 days ago

      It’s working great to convince moronic executives to leave Windows when it fucks up majorly due to AI coding, which is a win for everyone.

  • Petter1@discuss.tchncs.de · +3/-5 · 22 days ago

    Well, AI code should be reviewed prior to being merged into master, the same as any other code merged into master.

    We have git for a reason.

    So I would definitely say this was a human fault: either the reviewer’s, or that of whoever decided that no review process (or an AI-driven one) was needed.

    If I managed DevOps, I would demand that AI code be signed off by a human at commit time, with that person taking responsibility and with the expectation that they review the AI’s changes before pushing.
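A sign-off gate like that can already be approximated with plain git: `git commit -s` appends a `Signed-off-by:` trailer to the commit message, and a hook can refuse commits that lack one. A minimal sketch, assuming a `commit-msg` hook; the paths, names, and demo message are illustrative, not any real setup:

```shell
#!/bin/sh
# Sketch of a commit-msg hook (.git/hooks/commit-msg) that rejects
# commits lacking a human "Signed-off-by:" trailer -- the trailer
# that `git commit -s` adds.
check_signoff() {
  # $1: path to the file holding the commit message
  if grep -q '^Signed-off-by: ' "$1"; then
    echo "sign-off present"
    return 0
  fi
  echo "rejected: AI-assisted change needs a human sign-off (git commit -s)" >&2
  return 1
}

# Demo with a sample message file rather than a real hook invocation:
printf 'Fix parser\n\nSigned-off-by: Jane Dev <jane@example.com>\n' > /tmp/demo_msg
check_signoff /tmp/demo_msg
```

Installed as `.git/hooks/commit-msg` (or enforced server-side in a pre-receive hook), this would make the human sign-off mandatory rather than voluntary.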

    • pinball_wizard@lemmy.zip · 13 points · 22 days ago

      If I managed DevOps, I would demand that AI code be signed off by a human at commit time, with that person taking responsibility and with the expectation that they review the AI’s changes before pushing.

      And you would get burned. Today’s AI does one thing really, really well: create output that looks correct to humans.

      You are correct that mandatory review is our best hope.

      Unfortunately, the studies are showing we’re fucked anyway.

      Because whether the AI output is right or wrong, it is highly likely to at least look correct, since creating correct-looking output is exactly where what we call “AI” today shines.

    • Limerance@piefed.social · 6 points · 22 days ago

      Realistically what happens is the code review is done under time pressure and not very thoroughly.

    • heluecht@pirati.ca · 1 point · 21 days ago

      @Petter1 @remington at our company every PR needs to be reviewed by at least one lead developer. And the PRs of the lead developers have to be reviewed by architects. And we encourage the other developers to perform reviews as well. Our company encourages the usage of Copilot. But none of our reviewers would pass code that they don’t understand.
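A layered review requirement like this can also be enforced declaratively rather than by convention. A hypothetical CODEOWNERS file (a GitHub/GitLab mechanism) with made-up org and team names, sketching lead-developer and architect gates of the kind described above:

```
# Hypothetical CODEOWNERS file; org and team names are illustrative.
# Default: every PR needs a review from the lead-developer team.
*                   @example-org/lead-developers
# Architecture-level code additionally needs an architect's review.
/architecture/      @example-org/architects
```

Combined with branch protection that requires code-owner approval, this turns “no reviewer passes code they don’t understand” from a norm into a merge-blocking rule.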

        • heluecht@pirati.ca · 2 points · edited · 20 days ago

          @Petter1 I’m a lead developer. And often I hear from my architect when I missed stuff in some PR that I just checked.

          I’ve worked at a lot of different software companies over the last 35 years, and this company has by far the highest standards. It’s sometimes really annoying when you’ve coded maybe 8 hours for a use case, only to spend 10-12 additional hours on the test cases and maybe 1-2 more hours because the QA or the PO found something that needs to be changed. But in the end we can be proud of what we coded.