• tal@lemmy.today · 2 months ago

      My guess is that it’s gonna wind up being a split, and it’s not going to be unique to “AI” relative to any other kind of device.

      There’s going to be some kind of reasonable expectation for how a device using AI should act, and if the device acts within those expectations and causes harm, it’s on the person who decided to use it.

      But if the device doesn’t act within those expectations, then it’s not on them; it may be the device manufacturer.

      • JohnDClay@sh.itjust.works · 2 months ago

        Yeah, if the company making the AI makes false claims about it, then it’d be at least partially on them.

    • chakan2@lemmy.world · 2 months ago

      There are going to be a lot of instances going forward where you don’t know you were interacting with an AI.

      If there’s a quality check on the output, sure, they’re liable.

      If a Tesla runs you into an ambulance at 80 mph… the very expensive Tesla lawyers will win.

      It’s a solid quandary.

      • JohnDClay@sh.itjust.works · 2 months ago

        Why would the lawyer defendant not know they’re interacting with AI? Would the AI-generated content appear to be actual case law? How would that confusion happen?

  • NullPointer@programming.dev · 2 months ago

    If the source code for said accusing AI cannot be examined and audited by the defense, the state is denying the defendant their right to face their accuser. Mistrial.

  • NeoNachtwaechter@lemmy.world · 2 months ago

    When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

    The person who allowed the AI to make these decisions autonomously.

    We should do it the way Asimov showed us: create “robot laws” that are similar to slavery laws:

    In principle, the AI is a non-person, and therefore a person must take responsibility.

    • Nomecks@lemmy.ca · 2 months ago

      The whole point of Asimov’s Three Laws was to show that they could never work in reality, because it would be very easy to circumvent them.

  • fubarx@lemmy.ml · 2 months ago

    This topic came up when self-driving cars first appeared. If a car runs over someone, who is to blame?

    • Person in driver seat
    • Dealer
    • Car manufacturer
    • Supplier who provided the driving control system
    • The people who designed the algorithm and did the ML training
    • People who wrote and tested the code
    • Insurer

    Most of these would likely be indemnified by all kinds of legal and contractual agreements, but the fact would remain that someone died.

    • HauntedCupcake@lemmy.world · 2 months ago

      An insurer is an interesting one for sure. They’d have the stats of how many times that AI model makes mistakes and be able to charge accordingly. They’d also have the funds and evidence to go after big corps if their AI was faulty.

      They seem like a good starting point, until negligence elsewhere can be proven.
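
      Back-of-the-envelope, that “charge accordingly” could be as simple as expected loss plus a load. A minimal sketch, assuming made-up numbers for the model’s failure rate, usage volume, and average payout (none of this comes from real actuarial data):

      ```python
      # Hypothetical premium sketch: price a policy from an AI model's observed failure stats.
      # Every number below is invented for illustration.

      def annual_premium(failures_per_1k_decisions: float,
                         decisions_per_year: float,
                         avg_claim_cost: float,
                         expense_load: float = 0.25) -> float:
          """Expected annual loss for one insured deployment, plus an expense/profit load."""
          expected_claims = (failures_per_1k_decisions / 1000) * decisions_per_year
          expected_loss = expected_claims * avg_claim_cost
          return expected_loss * (1 + expense_load)

      # A model that causes a claim in 0.02 of every 1,000 decisions, making 50,000
      # decisions a year, with an average claim of $40,000:
      print(round(annual_premium(0.02, 50_000, 40_000)))  # 50000
      ```

      The interesting part is the feedback loop: as the insurer’s loss data on a given model accumulates, the failure-rate input gets updated and the premium moves with it.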