• ISOmorph@feddit.de

    You’re misunderstanding the post. It’s not about whether or not someone could guess your location from a picture. It’s about the automation thereof. As soon as that is possible it becomes another viable vector to compromise your privacy.

    • taladar@sh.itjust.works

      And you misunderstand my point: it has always been a way to compromise your privacy. Privacy matters most in the individual case, with people who know you. If you, e.g., share a picture taken at your home (outside, or looking out of a window in the background) with a friend online, you always had to assume they could figure out where you live from any such features.

      Sure, companies might be able to do it on a larger scale, but honestly, AI is just too inefficient for that right now: the energy cost of applying it to every picture you share, just in case your location might be useful, isn’t worth it yet.

      • ISOmorph@feddit.de

        Privacy matters most in the individual case, with people who know you.

        That statement is subjective at best. My friends and coworkers knowing where I live certainly isn’t my concern. In this day and age, privacy enthusiasts are definitely more scared of corpos and governments.

        isn’t worth it yet.

        You’re thinking too small. Just in the context of the E2EE ban planned in Europe, think what you could do. The new law is set to scan all your messages before/after sending for specific keywords. Imagine you get automatically flagged and now an AI is scanning all your pictures for locations, contacts, and whatnot. Just the thought that this might be technically possible is scary as hell.

        • taladar@sh.itjust.works

          Governments won’t scan all your pictures to figure out who you are. They’ll just ask (read: legally force) the website/hoster where you posted the picture for your IP address and/or payment info, then do the same with your ISP/payment provider to convert that into your RL info.

          And you might not be worried about your RL friends or coworkers but what about people you meet online? Everyone able to see your post on some social media site?

          Nobody is going to scan all the pictures you post for some information that is going to be valid for a long time after it is discovered once. Governments and corporations have had the means to discover who you are once for a long time.

            • helenslunch@feddit.nl

              wait long enough and a technique to unblur will be developed.

              You can’t just program data that doesn’t exist into existence.

              • umami_wasabi@lemmy.ml

                I do remember a paper (or model?) from 1-2 years ago that reversed blurred images. It’s similar to how ML-based object removers and inpainting work. Granted, it only works for specific blurring algorithms.

                • ricecake@sh.itjust.works

                  Some blurs are reversible, and some aren’t. Some of them do a statistical rearrangement of the data in the area being blurred that’s effectively reversible.

                  Think shredding a document. It’s a pain and it might take a minute, but it’s feasible to get the original document back, give or take some overlapping edges and tape.

                  Other blurs combine, distort, and alter the image contents such that there’s nothing there to recombine to get the original.

                  A motion blur or the typical “fuzzy” blur falls into the first category and can be directly reversed; statistical techniques and AI tools can reconstruct the rest, because the original data is still there, or enough of it that you can make guesses based on what remains and the context.
                  Pixelating the area does a better job because it actually deletes information as opposed to just smearing it around, but tools can still pick out lines and shapes well enough to make informed guesses.
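The reversible case can be made concrete with a toy 1-D sketch (all numbers here are made up for illustration): a convolution blur only rearranges information, so in the noiseless case it can be undone exactly by dividing it back out in the frequency domain.

```python
import numpy as np

# Hypothetical 1-D sketch: a convolution blur rearranges information
# rather than destroying it, so (absent noise) it is exactly invertible.
signal = np.array([1., 5., 2., 8., 3., 7., 4., 6.])
kernel = np.array([0.5, 0.3, 0.2, 0., 0., 0., 0., 0.])  # smoothing kernel, zero-padded

# Blur = multiplication in the frequency domain (circular convolution).
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# Deblur = divide the blur back out (works because this kernel has no
# zeros in its frequency response).
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(kernel)))

print(np.allclose(recovered, signal))  # True: the information was only rearranged
```

Real photos add noise and quantization, which is why practical deconvolution needs the statistical techniques mentioned above instead of a bare division.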

                  Some blurs however create a random noise over the area being blurred, which is then tweaked to fit the context of whatever was being blurred.

                  Something like that is impossible to reverse because the information simply is not there.
                  It’s like using generative AI to “recover” data cropped from an image. At that point it’s no longer recovery, but creation of possible data that would fit there.
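The irreversible case can be sketched the same way (again with made-up numbers): block averaging, i.e. pixelation, maps many different inputs to the same output, so no tool can tell which input produced what it sees.

```python
import numpy as np

# Hypothetical sketch: pixelation (block averaging) is many-to-one.
def pixelate(x, block=2):
    # average each run of `block` adjacent values
    return x.reshape(-1, block).mean(axis=1)

a = np.array([10., 20., 30., 40.])
b = np.array([15., 15., 35., 35.])  # a different "image"...

print(pixelate(a))  # [15. 35.]
print(pixelate(b))  # [15. 35.] -- identical output, so the original is unrecoverable
```

Two distinct originals collapse to the same pixelated result; whatever a tool "recovers" from it is a guess, not the data.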

                  The tools aren’t magical, they’re still ultimately bound by the rules of information storage.

                  • umami_wasabi@lemmy.ml

                    Yeah, for the pic you used as an example, the tool will just create something that fits. It doesn’t really “unblur” the image; it guesses what it would be with the info it has. The result will very likely not be the same face as the original.

                    However, recreating the background may be easier, and accurate enough for a geoguesser or an ML model to figure out roughly where the image was taken.

              • onlinepersona@programming.dev

                You do realize that a lot of image recognition was done on scaled down images? Some techniques would even blur the images on purpose to reduce the chance of confusion. Hell, anti-aliasing makes text seem more readable by adding targeted blur.

                Deblurring is guessing, and if you have enough computing power and some brain power (or AI), you can reduce the number of required guesses by eliminating improbable ones.
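That elimination idea can be sketched in a few lines. Everything here is hypothetical (the "glyphs" are made-up stand-ins for rendered characters), but it mirrors the known attack on pixelated text: if you can reproduce the pixelation step, you can test candidates against the observed result and discard the ones that don't match.

```python
import numpy as np

# Hypothetical sketch of "guessing minus improbable guesses": reproduce the
# pixelation, then keep only candidates whose pixelated form matches.
def pixelate(x, block=2):
    return x.reshape(-1, block).mean(axis=1)

# Made-up 1-D "glyphs" standing in for rendered characters.
candidates = {
    "A": np.array([0., 0., 9., 9.]),
    "B": np.array([9., 9., 0., 0.]),
    "C": np.array([0., 9., 9., 0.]),
}

observed = pixelate(candidates["B"])  # all the attacker actually sees

surviving = [name for name, glyph in candidates.items()
             if np.allclose(pixelate(glyph), observed)]
print(surviving)  # only the true candidate survives elimination
```

With a small candidate space (digits of a salary, characters of a license plate), this turns "unrecoverable" pixelated data into a short list of probable originals.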

                Anti Commercial-AI license