“If you spell it out kind of clearly, it becomes so obvious that these tools have problems,” Janelle Shane told Fortune in an interview.

  • ArugulaZ@kbin.social · 1 year ago

    I kept reading that as “Weird Al” and wondered why the hell he was interested in this subject.

  • Masterkraft0r@discuss.tchncs.de · 1 year ago

    hoo boy… see, this is the problem with machine learning being called “AI”: even in technically minded spaces we talk about what a computer program “thinks,” and nobody stops to ask whether that’s even a sensible thing to say. It just isn’t. ¯\\\_(ツ)\_/¯

  • tentphone@lemmy.fmhy.ml · 1 year ago

    “AI detectors” are bullshit, preying on people who don’t know enough about neural networks to see that they are bullshit.

    • secrethat@kbin.social · 1 year ago

      Well, in theory you could. For AI-generated images, say, you could train a neural network to pick up on artifacts that only seem to show up in AI art. For AI-generated text, a model could look at how often certain text structures appear, or use something like emotion or sentiment analysis, since AI-generated text isn’t as good at presenting genuine emotion.

      Of course it’s not 100% there yet. But calling these tools bullshit is closing doors that haven’t been fully explored.
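Detectors in this vein score statistical regularities in the text rather than “understanding” it. As a toy illustration (not any real detector’s method; the feature and the threshold below are arbitrary assumptions), here is a single crude feature, “burstiness”: human prose tends to vary sentence length more than model output does.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words) --
    one crude statistical feature a text detector might use."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def looks_machine_generated(text: str, threshold: float = 3.0) -> bool:
    # Arbitrary cutoff: text with very uniform sentence
    # lengths scores low and gets flagged.
    return burstiness(text) < threshold
```

A single-feature heuristic like this also shows why false positives on non-native writers come so easily: any writer whose sentences happen to be evenly sized scores “machine-like,” regardless of who wrote them.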

  • people@kbin.social · 1 year ago

    A.I. thinks other A.I. is human, and it thinks humans writing in a language they’re not proficient in are A.I.

    Damn, they are going to kill us all.

  • JiFish@kbin.social · 1 year ago

    It’s only a matter of time before someone’s academic career is unjustly ruined by one of these tools.