• Plume (She/Her)@beehaw.org · 1 year ago

    I mean, it’s an issue in general when talking about these big subjects. It’s like global warming: we keep talking about it as a future risk, but the crisis is already here. Framing it as a future problem is just a convenient way to keep ignoring it… :/

  • CanadaPlus@lemmy.sdf.org · 1 year ago (edited)

    Ah yes, the old AI alignment vs. AI ethics slapfight.

    How about we agree that both are concerning?

    • lemmyng@beehaw.org · 1 year ago

      Both are concerning, but as a former academic, to me neither of them is as insidious as the harm LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct datasets for research, and the assumption is that the data is largely human-generated. That balance is about to shift, and it’s going to do significant damage to future research. Even if everyone agreed to change course right now, the well is already poisoned. For linguistics research, we’re talking about the equivalent of the burning of the Library of Alexandria.