Google has developed a tool called SynthID that watermarks AI-generated images in a way that is imperceptible to humans but detectable by software. The watermark is embedded directly in pixel values without noticeably changing the image. SynthID is launching first with Google Cloud's image generator, so that images it produces can later be identified as AI-generated. While aimed initially at detecting deepfakes, the tool could also help businesses verify AI-generated images used for tasks like product descriptions. Google hopes SynthID may become a web-wide standard, but acknowledges that others are working on detection methods too. The launch marks the start of an arms race: attackers will try to circumvent the system, so it will need to improve continuously. Overall, SynthID is a first step toward greater transparency around AI-generated content online.
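To make the idea concrete: SynthID's actual technique is a proprietary, learning-based watermark, but the general concept of hiding a machine-readable mark in pixel values can be sketched with classic least-significant-bit (LSB) steganography. The sketch below is illustrative only and is not how SynthID works; the function names and the fake pixel data are invented for the example.

```python
# Toy LSB watermark: hide a short string in the least significant bit
# of each pixel value. Changing only the LSB shifts each pixel by at
# most 1 out of 255, which is invisible to humans but trivially
# readable by a program. (Not SynthID's real, learned watermark.)

def embed_watermark(pixels: list[int], mark: str) -> list[int]:
    """Hide the bytes of `mark` in the LSBs of `pixels`."""
    bits = [int(b) for ch in mark.encode() for b in f"{ch:08b}"]
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # pixel value changes by at most 1
    return out

def extract_watermark(pixels: list[int], length: int) -> str:
    """Read `length` bytes back out of the LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode()

pixels = list(range(64, 192))            # fake 8-bit grayscale pixel row
marked = embed_watermark(pixels, "AI")
assert max(abs(a - b) for a, b in zip(pixels, marked)) <= 1
print(extract_watermark(marked, 2))      # -> AI
```

Note the weakness the commenter below points out: an LSB mark like this is destroyed by re-encoding, resizing, or simply rewriting the low bits, which is exactly why a robust scheme has to spread the signal across the image in ways that survive such edits.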

  • conciselyverbose@kbin.social · 6 points · 1 year ago

    And of course it will be impossible to remove a watermark that programs can detect programmatically, just because humans can’t see it, right?

    I mean, go for it if you want. We’re already, today, past the point where a photo or video in and of itself constitutes reliable evidence, given how convincing known tools can be. You need to show chain of custody like you would for any other forensic evidence, including a credible original source on the record, for it to actually be reliable. Faking anything is entirely plausible.