• Peter Bronez@hachyderm.io · 2 months ago

    @along_the_road

    “These were mostly family photos uploaded to personal and parenting blogs […] as well as stills from YouTube videos”

    So… people posted photos of their kids on public websites, Common Crawl scraped them, LAION-5B cleaned them up for training, and now there are models. This doesn’t seem evil to me… digital commons working as intended.
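
    For anyone unclear what “cleaned them up” means in practice: LAION-5B is just URL/caption metadata distilled from Common Crawl, not the images themselves. A rough sketch of that cleaning step (file name and column names are stand-ins, not the real schema):

    ```python
    # Hypothetical sketch: LAION-style metadata is (image URL, alt text) rows
    # pulled from Common Crawl. "Cleaning" is mostly dropping unusable rows.
    import pandas as pd

    df = pd.read_parquet("laion_metadata_shard.parquet")  # hypothetical shard

    # Typical steps: drop rows with missing captions, keep images above a
    # minimum size, and deduplicate by URL.
    df = df.dropna(subset=["TEXT"])
    df = df[(df["WIDTH"] >= 256) & (df["HEIGHT"] >= 256)]
    df = df.drop_duplicates(subset=["URL"])

    print(f"{len(df)} usable URL/caption pairs in this shard")
    ```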

    If anyone is surprised, the fault lies with the UX around “private URL” sharing, not with devs using Common Crawl.

    #commoncrawl #AI #laiondatabase

    • wagoner@infosec.pub · 2 months ago

      Doesn’t “digital commons” mean common ownership? The family photos on a personal blog, inherently owned by the photographer, are surely not commonly owned. I see this as problematic.

    • Peter Bronez@hachyderm.io · 2 months ago

      @along_the_road what’s the alternative scenario here?

      You could push to remove some public information from Common Crawl. How do you identify what public data is _unintentionally_ public?
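
      One hedged sketch of what “identify” could even look like: flag URLs whose paths resemble unguessable share tokens, i.e. the “private URL” sharing pattern mentioned above. The regex, threshold, and example URLs are illustrative guesses, not anything from the actual dataset:

      ```python
      # Crude heuristic: treat a long random-looking path segment as a sign
      # the page was shared via an unlisted/"private" link. Illustrative only.
      import re

      TOKEN_RE = re.compile(r"/[A-Za-z0-9_-]{20,}(/|$|\.)")

      def looks_unintentionally_public(url: str) -> bool:
          return bool(TOKEN_RE.search(url))

      urls = [
          "https://example-parenting-blog.com/2019/06/beach-day.jpg",
          "https://photos.example.com/share/dQw4w9WgXcQa1b2c3d4e5f6/img_0042.jpg",
      ]
      for u in urls:
          print(u, "->", looks_unintentionally_public(u))
      ```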

      Assume we solve that problem. Now the open datasets and the models developed on them are weaker. They’re specifically weaker at identifying children as things that exist in the world. Do we want that? What if it reduces the performance of cars’ emergency braking systems? CSAM filters? Family photo organization?