• mrmaplebar@fedia.io · 2 months ago

    This reads as a way to protect white-collar industries from the effects of AI without addressing the root problem: that AI does not actually think, and that it is little more than a meat grinder full of scraped data.

      • CeeBee_Eh@lemmy.world · 2 months ago

        Why is it CALLED intelligent?

        Because it is “intelligent” by definition. You’re conflating the word with “highly intelligent” or just “smart”.

        Dogs are “intelligent” but they can’t write code, though we sometimes refer to dogs as “smart”.

        A flatworm has intelligence but no one would call it smart.

      • atopi@piefed.blahaj.zone · 2 months ago

        It has had that name for a really long time.

        A couple of decades ago, a program that could learn was really impressive.

        • SeeMarkFly@lemmy.ml · 2 months ago (edited)

          I remember when LISP was available for my Atari 800.

          Yes, I had the FULL 64K of memory installed.

  • tinkermeister@lemmy.world · 2 months ago

    I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

    We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

    The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

    As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent three days in the hospital having his bowel obstruction relieved through a tube in his nose.

    There is value in people having that kind of information at their fingertips.

    Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc. rather than protecting citizens from ourselves.

  • deathbird@mander.xyz · 2 months ago

    If implemented, that would just ban chatbots that use large language models. It’s not a terrible idea.

    What would actually happen is that so-called AI chatbot systems would try to detect whether someone is in New York, try to exclude them from receiving medical or legal advice, fail, get sued, and pay a small fine, over and over again forever.

    • architect@thelemmy.club · 2 months ago

      This is a really bad idea.

      First, because healthcare is clearly being gatekept from people.

      Second, because even if you go to a healthcare professional nowadays, there is no guarantee that person is not a fucking idiot who doesn’t believe in vaccines. I can’t believe I actually have to ask people whether they believe in vaccines before they touch me, and then tell them not to come back into my room if they answer that they don’t believe in science. But that has happened, it has happened to the people I’ve taken care of, and because of this, healthcare can’t be trusted.

      The LLM is not any worse than that. In fact, I would say that it’s already too cautious. No way the model is ever going to tell me vaccines are bad. It’s not going to tell me to take a poison to clear Covid. It’s not going to tell me to drink bleach like the president did. It’s literally not any worse than the bullshit we are dealing with all day every fucking day.

      And I’m getting to the point that if you’re a full-grown human fucking being and you’re going to drink fucking bleach or swallow a fucking lightbulb because something told you to, then that’s nature saying something about you.

      • Doomsider@lemmy.world · 2 months ago

        Naw, completely disagree. If you had a calculator you knew was defective you would ban doctors and lawyers from using it.

        You also seem to think that an LLM is going to be inherently more accurate than an expert human. We can see with GrokAI how easy it is to manipulate an AI into saying racist white nationalist garbage. So we are not just trusting the technology but also a layer of unpredictable corporate meddling.

        Why does the LLM recommend this drug but not another one? It’s easy to see how a corporation could favor a certain medication due to behind-the-scenes deals, or even push a medication outright.

        You can’t trust a black box you are not allowed to look into. Trust in an LLM at this point is pure folly.

  • iegod@lemmy.zip · 2 months ago

    I don’t see how you police or enforce this. The technology is out of the bag; people will find ways to access it. Do we need age/location verification for this now too? What if I’m running a local agent? I don’t agree with this.

  • willington@lemmy.dbzer0.com · 1 month ago

    1. Make laws against chatbots.
    2. Demand proof you are not a chatbot.
    3. Surveillance capitalism.

    The real target here is population control.

    The lawmakers, who take billionaire money by the ton and who HAVE NEVER given a shit, suddenly, NOW, want to protect the vulnerable. Abso-fucking-lutely laughable on its face.

  • moroninahurry@piefed.social · 2 months ago

    Laws like this are great for these companies. This is how they will justify removing access to useful information and putting it behind paywalls. But oh, you need a prescription, so now the insurance companies are involved (spoiler: they already are), and you can’t even pay through the nose for medical information.

    Then when Google search has been completely replaced with AI, you won’t even be able to search for medical information.

    Healthcare companies aren’t about to provide anything for free.

    • Soup@lemmy.world · 2 months ago

      LLMs and chatbots should not be giving medical advice. You should be afraid of the private healthcare system itself, not of losing access to the jankiest band-aid fix for its failures.

      • moroninahurry@piefed.social · 2 months ago

        Neither should Wikipedia or Google. So I guess by your logic nobody should search or learn about medical conditions on a computer.

        • Soup@lemmy.world · 2 months ago

          You know damn well there’s an important difference related to the confidence with which a bot answers, which has been a key problem since this whole thing started.

  • TropicalDingdong@lemmy.world · 2 months ago

    I mean.

    Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

    [Edit: if you are downvoting this, downvote away, but you owe an argument below as to why. I promise this exact argument will come up in the courts over this issue]

    • WesternInfidels@feddit.online · 2 months ago

      Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

      Is the U.S. House of Representatives [or any equivalent publisher of the law] responsible for you reading the text of a law itself and then taking that as legal advice?

      • TropicalDingdong@lemmy.world · 2 months ago

        That’s a totally irrelevant comparison. There is no publisher of the law equivalent to the US House of Representatives. Nothing Wikipedia publishes has legal bearing; everything the House of Representatives publishes does.

        • WesternInfidels@feddit.online · 2 months ago

          Your objection does nothing to address the issue you raised. Where is the line drawn between “information” and “legal advice?”

          Wikipedia and the lawmakers themselves present us with static information that is not specific to us personally or to any particular situation we may find ourselves in, and which generally does not include specific recommendations. I think most people would agree that’s just information, not advice.

          If an LLM can be coaxed into saying things like “you should,” advocating specific courses of action for your circumstances, is that legal advice? I think many of us would agree that would be unlicensed legal advice.

  • henfredemars@infosec.pub · 2 months ago

    Mixed feelings about this. Let me play devil’s advocate and say that many Americans don’t have access to these resources at all. Is having potentially inaccurate resources better than nothing, or is it worse?

    • wewbull@feddit.uk · 2 months ago

      There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It’s not.

    • thisbenzingring@lemmy.today · 2 months ago

      The AI services will just add preambles and disclaimers and word things in ways that refer the user to human resources.

    • smh@slrpnk.net · 1 month ago

      We had a medical scare just yesterday. I was in the ER for 8 hours with my partner over a non-life-threatening but still emergency problem.

      An ultrasound, a CT scan, and much poking and prodding later, we still don’t know what is up. The AI was at least able to predict next steps (if A, then discharge and follow up with the PCP; if B, then surgery this week; if C, then emergency surgery), something the ER was too busy to do for several hours. It was reassuring. The AI also gave me working links to more thorough resources on the topic.

  • TrackinDaKraken@lemmy.world · 2 months ago

    Sounds like a start. More is needed though.

    The bill targets AI chatbots that impersonate licensed professionals — such as doctors and lawyers — and bars them from providing “substantive response, information, or advice” that would violate professional licensing laws or constitute the unauthorized practice of law.

    It also mandates that chatbot owners provide “clear, conspicuous, and explicit” notice to users that they are interacting with an AI system, with the notice displayed in the same language as the chatbot and in a readable font size. However, the bill clarifies that this notice for users, which indicates that they are interacting with a non-human system, does not absolve the chatbot owners of liability.

  • phx@lemmy.world · 2 months ago

    AI in the legal field could be useful for assisting an actual legal professional in compiling precedent and checking it against on-the-books laws, so long as it cites sources and the professional verifies them.

    In the medical field, it could be useful for spotting anomalies between multiple images such as X-rays or cross-referencing medical documents WHEN USED BY A PROFESSIONAL.

    But the thing is, it should be a carefully used tool to enhance the existing profession, not a replacement for actual professionals.

  • Katherine 🪴@piefed.social · 2 months ago

    This bill gave us the “best” interaction:

    https://bsky.app/profile/badmedicaltakes.bsky.social/post/3mghyg5eufk2m

    A Bluesky skeet from @badmedicaltakes.bsky.social:

    "Twitter user eoghan:

    How dare poor people get free medical advice

    <quote tweet from Twitter user Polymarket: BREAKING: New York bill would ban AI from answering questions related to medicine, law, dentistry, nursing, psychology, social work, engineering, & more.>

    Twitter user YBrogard79094:
    JUST MAKE HEALTHCARE ACCESSIBLE

    Twitter user eoghan:

    AI is literally free healthcare. Being a communist must be exhausting"

  • ArbitraryValue@sh.itjust.works · 2 months ago

    If you don’t want legal or medical advice from an AI, you can already simply not ask the AI for legal or medical advice. But I don’t want your paternalistic restrictions on what I may ask.

    • moroninahurry@piefed.social · 2 months ago

      Sir did you pay for that medical advice though? That’s what these laws will eventually enforce. Prescription advice.

  • melfie@lemy.lol · 1 month ago

    In the US especially, medical professionals are overworked and simply don’t have the time and energy to properly diagnose. If you have a more complex, chronic issue, there’s a good chance you’ll be waiting months at a time to see various specialists who will only spend about 10 distracted minutes thinking about your case and might not even have any useful insights, or who might misdiagnose you and make your condition worse. You basically have to do your own research and show them studies. If you’re a person of color or a woman, etc., there’s a good chance you won’t even be taken seriously. In an ideal world, it would work like it does on TV, but in the real world, it’s all about maximizing profits and the patients be damned. Sure, LLMs are unreliable, but they do at least provide ideas to research.