• SPRUNT@lemmy.world

    Is there a similar tool that will “poison” my personal tracked data? Like, I know I’m going to be tracked and have a profile built on me by nearly everywhere online. Is there a tool that I can use to muddy that profile so it doesn’t know if I’m a trans Brazilian pet store owner, a Nigerian bowling alley systems engineer, or a Beverly Hills sanitation worker who moonlights as a practice subject for budding proctologists?

    • Ghostalmedia@lemmy.world

      The only way to taint your behavioral data so that you don’t get lumped into a targetable cohort is to behave like a maniac. As I’ve said in a past comment here, when you fill out forms, pretend your gender, race, and age are fluid. Also, pretend you’re nomadic. Then behave erratic as fuck when shopping online - pay for bibles, butt plugs, taxidermy, and PETA donations.

      Your data will be absolute trash. You’ll also be miserable, because you’re going to be visiting the Amazon drop-off center every week with gag balls and porcelain Jesus figurines to return.

      • Bonehead@kbin.social

        Then behave erratic as fuck when shopping online - pay for bibles, butt plugs, taxidermy, and PETA donations.

        …in the same transaction. It all needs to be bought and then shipped together. Not only to fuck with the algorithm, but also to fuck with the delivery guy. Because we usually know what you ordered. Especially when it’s in the soft bag packaging. Might as well make everyone outside your personal circle think you’re a bit psychologically disturbed, just to be safe.

        • Neato@ttrpg.network

          How? Aren’t most items in boxes even in the bags? It’s not like they just toss a butt plug into a bag and ship it…right?

      • TexasDrunk@lemmy.world

        Other than buying the erratic shit, that’s pretty much who I am. I’ve never been honest on a form and I’m a bit of a nomad.

        I know that back in the day, when Facebook guessed your political affiliation and other things, they got everything about me hilariously wrong. I’m not a Republican with 3 kids and a Hummer. Google seems to get closer with the targeted ads, but honestly not that much closer.

        I can be individually identified, but my identity is garbage and not useful.

    • Australis13@fedia.io

      The browser addon “AdNauseam” can help with that, although it’s not a complete solution.

    • TropicalDingdong@lemmy.world

      Is there a similar tool that will “poison” my personal tracked data? Like, I know I’m going to be tracked and have a profile built on me by nearly everywhere online. Is there a tool that I can use to muddy that profile so it doesn’t know if I’m a trans Brazilian pet store owner, a Nigerian bowling alley systems engineer, or a Beverly Hills sanitation worker who moonlights as a practice subject for budding proctologists?

      Have you considered just being utterly incoherent, and not making sense as a person? That could work.

    • Ilovethebomb@lemm.ee

      There are programs and plugins you can download that will open a bunch of random websites to throw off tracking programs.
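
      For a rough idea of how such tools work, here is a toy Python sketch; the decoy list and timings are made up, and real tools of this kind (TrackMeNot, for example) run as browser extensions rather than scripts:

      ```python
      # Toy sketch: periodically fetch random decoy pages to add noise to a
      # browsing profile. URLs and timings are illustrative only.
      import random
      import time
      import urllib.request

      DECOYS = [
          "https://en.wikipedia.org/wiki/Special:Random",
          "https://example.com/",
      ]

      def make_noise(visits: int = 10) -> None:
          for _ in range(visits):
              url = random.choice(DECOYS)
              try:
                  urllib.request.urlopen(url, timeout=10).read(1024)
              except OSError:
                  pass  # decoys are best-effort; ignore failures
              time.sleep(random.uniform(5, 60))  # jitter so it looks less scripted

      if __name__ == "__main__":
          make_noise()
      ```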

    • sbv@sh.itjust.works

      I guess it depends what your threat model is.

      If you don’t like advertising, then you’re just piling a bunch of extra interests/demographics in there. It’ll remain roughly as valuable as it was before.

      If you’re concerned about privacy and state actors, your activity would just increase. Anything that would trigger state interest would remain, so you’d presumably receive the same level of interest. Worse, if you aren’t currently of interest, there’s a possibility randomly generated traffic would be flagged by your adversary and increase their level of interest in you.

    • Buttons@programming.dev

      Mbyae try siunlhffg the mldide lterets of ervey wrod? I wnedor waht taht deos to a luaangge medol?
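
      (For the curious, the scramble is easy to automate; a toy Python sketch that keeps each word’s first and last letter and shuffles the rest:)

      ```python
      # Shuffle the middle letters of every word, leaving the first and
      # last letters in place.
      import random
      import re

      def scramble(text: str) -> str:
          def shuffle_middle(match: re.Match) -> str:
              word = match.group(0)
              middle = list(word[1:-1])
              random.shuffle(middle)
              return word[0] + "".join(middle) + word[-1]

          # only words of four or more letters have a middle worth shuffling
          return re.sub(r"[A-Za-z]{4,}", shuffle_middle, text)

      print(scramble("Maybe try shuffling the middle letters of every word?"))
      ```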

      • hperrin@lemmy.world

        You don’t follow the license that it was distributed under.

        Commonly, you use open source code in your project, and that code is under a license that requires your project to be open source too, but you keep yours closed source.

      • Even_Adder@lemmy.dbzer0.com

        He took GPLv3 code. GPLv3 is a copyleft license that requires you to share your source code and license your project under the same terms as the code you used; you also can’t distribute your project as binary-only or proprietary software. When pressed, they only released the code for their front end, remaining in violation of GPLv3.

  • Telodzrum@lemmy.world

    Fascinating that they develop this tool and then only release Windows and MacOS versions.

    • Dizzy Devil Ducky@lemm.ee

      To be fair, Windows and macOS are the two biggest desktop operating systems in the world. It makes a lot more sense to focus on building tools for people using the biggest platforms than on people using something with a user base fragmented across multiple versions of the same OS.

      Though I do agree a version for Linux would be nice. Even though we have Darling, the macOS equivalent of Wine, I don’t know enough about it to say whether it’s up to the task.

    • cybersandwich@lemmy.world

      It’s simple math. 97% of the population uses those two operating systems.

      There isn’t much incentive to go after the 3% who use Linux. You know, the population that loves free and open source software and isn’t exactly known for dropping a bunch of cash on it. Not to mention it’s a fragmented 3%: even Flatpak, Snap, and AppImage, which were supposed to make devs’ lives easier, are fragmented across distros.

      • Mango@lemmy.world

        Android called. They want their representation in your statistics. Android is Linux.

          • Mango@lemmy.world

            Meme aside, that’s a good question… I wonder how much GNU made it into Google’s implementation. Someone here probably knows.

        • KazuyaDarklight@lemmy.world

          No one’s doing this kind of work on their Android phone, so your argument is pedantic at best in this context.

          • Mango@lemmy.world

            If I can patch a ROM on my phone, you can patch your picture just fine. You don’t have to make it with your phone.

            Also, you’d be surprised at how excellent some drawing apps are on Android. Particularly Ibis.

      • zarkanian@sh.itjust.works

        Why develop for developers?

        Why wouldn’t you?

        It’s not like developers get off on reinventing the wheel or something. If somebody has a working solution, I’d rather use that than spend time coming up with code on my own. I’m busy enough as it is.

      • Daxtron2@startrek.website

        What does being a developer have to do with it? Do you really think we only use things we develop ourselves?

      • Mango@lemmy.world

        Omg, I can’t believe you actually just said that. 🤣🤣🤣

        Do you know what a library is? How about a language?

  • pavnilschanda@lemmy.world

    Apparently people who specialize in AI/ML have a very hard time trying to replicate the desired results when training models with ‘poisoned’ data. Is that true?

    • Even_Adder@lemmy.dbzer0.com

      I’ve only heard that running images through a VAE just once seems to break the Nightshade effect, but no one’s really published anything yet.

      You can finetune models on known bad and incoherent images to help them output better images if the trained embedding is used in the negative prompt. So there’s a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.
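
      To make the rumored countermeasure concrete, here is a minimal sketch, assuming PyTorch and Hugging Face diffusers (“sd-vae-ft-mse” is one commonly used Stable Diffusion VAE checkpoint); like the claim itself, this is unverified:

      ```python
      # Round-trip an image through Stable Diffusion's VAE (encode, then
      # decode), which reportedly strips much of the Nightshade perturbation.
      import torch
      from diffusers import AutoencoderKL

      vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

      @torch.no_grad()
      def vae_roundtrip(img: torch.Tensor) -> torch.Tensor:
          # img: (1, 3, H, W), values scaled to [-1, 1]
          latent = vae.encode(img).latent_dist.sample()
          return vae.decode(latent).sample.clamp(-1.0, 1.0)
      ```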

      • lad@programming.dev

        So there’s a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.

        This would be truly ironic

    • Miaou@jlai.lu

      Until they come up with some preprocessing step, or some better feature extractors, etc. This is an arms race, like many others.

  • webghost0101@sopuli.xyz

    I bet that before the end of this year this tool will be one of the things that helped improve the performance and quality of AI.

    • UnderpantsWeevil@lemmy.world

      Excited to see the guys that made Nightshade get sued in a Silicon Valley district court, because they’re something something mumble mumble intellectual property national security.

          • hansl@lemmy.world

            Because the people producing code with GPL are completely unrelated to the AI issues.

            You’re asking why you can’t shoot your neighbor if Russians are shooting Ukrainians.

            • DragonTypeWyvern@literature.cafe

              No, I’m asking why they’re held to a standard the AI makers are not when they’re not even charging for the completely optional tool.

              • hansl@lemmy.world

                That’s a strawman. AI makers should definitely respect FOSS licenses.

                You’re just looking for excuses for their shitty behavior.

                • DragonTypeWyvern@literature.cafe

                  The entire premise of these AIs is based on stealing intellectual property, which you’ll find covers far more ground than FOSS, but sure, whatever you say, person who definitely knows what a straw man is.

  • gapbetweenus@feddit.de

    The tool’s creators are seeking to make it so that AI model developers must pay artists to train on data from them that is uncorrupted.

    That’s not something a technical solution will work for. We need copyright laws to be updated.

    • Marcbmann@lemmy.world

      The issue is simply reproduction of original works.

      Plenty of people mimic the style of other artists. They do this by studying the style of the artist they intend to mimic. Why is it different when a machine does the same thing?

      • teichflamme@lemm.ee

        It’s not. People are just afraid of being replaced, especially when they weren’t that original or creative in the first place.

        • Even_Adder@lemmy.dbzer0.com

          They’re playing both sides. Who do you think wins when model training becomes prohibitively expensive for regular people? Mega corporations already own datasets, and have the money to buy more. And that’s before they make users sign predatory ToS allowing them exclusive access to user data, effectively selling our own data back to us.

          Regular people, who could have had access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility, would instead be left worse off and with less than where we started.

          • UnderpantsWeevil@lemmy.world

            Who do you think wins when model training becomes prohibitively expensive for regular people?

            We passed that point at inception. It’s always been more efficient for Microsoft to do its training at a 10,000-petaflop giga-plant in Iowa than for me to run Stable Diffusion on my home computer.

            Regular people, who could have had access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility

            Already have that. It’s called a $5 art kit from Michael’s.

            This isn’t about creation, it’s about trade and propagation of the finished product within the art market. And it’s here that things get fucked, because my beautiful watercolor that took me 20 hours to complete isn’t going to find a buyer who covers half a week’s worth of living expenses, so long as said marketplace is owned and operated by folks who want my labor for free.

            AI generation serves to mine the market at near-zero cost and redistribute the finished works for a profit.

            Copyright/IP serves to separate the creator of a work from its future generative profits.

            But all this ultimately happens within the context of the market itself. The legal and financial mechanics of the system are designed to profit publishers and distributors at the expense of creatives. That’s always been true and the latest permutation in how creatives get fucked is merely a variation on a theme.

            instead be left worse off and with less than where we started.

            AI art does this whether or not it’s illegal, because it exists to undercut human creators of content by threatening them with an inferior-but-vastly-cheaper alternative.

            The dynamic you’re describing has nothing to do with AI’s legality and everything to do with Disney’s ability to operate as a monopsony buyer of bulk artistic product. The only way around this is to break Disney up as a singular mass-buyer of artwork, and turn the component parts of the business over to the artists (and other employees of the firm) as an enterprise that answers to, and profits, the people generating the valuable media rather than some cartel of third-party shareholders.

      • UnderpantsWeevil@lemmy.world

        Truly a “Which Way White Man” moment.

        I’m old enough to remember people swearing left, right, and center that copyright and IP law being aggressively enforced against social media content has helped corner the market and destroy careers. I’m also well aware of how often images from DeviantArt and other public art venues have been scalped and misappropriated even outside the scope of modern generative AI. And how production houses have outsourced talent to digital sweatshops in the Pacific Rim, Sub-Saharan Africa, and Latin America, where you can pay pennies for professional reprints and adaptations.

        It seems like the problem is bigger than just “Does AI art exist?” and “Can copyright laws be changed?” because the real root of the problem is the exploitation of artists generally speaking. When exploitation generates an enormous profit motive, what are artists to do?

  • M0oP0o@mander.xyz

    They claim a credit for using AI to make the thumbnail… The same people who did nothing more than ask ChatGPT to make a picture to represent an article about a tool that poisons AI models, a tool meant to protect people who make pictures for a living from having ChatGPT use their work to make, say, a picture to represent an article about a tool that poisons AI models…

  • bonus_crab@lemmy.world

    Big companies already have all your uncorrupted artwork; all this does is keep any new competition from cropping up.

        • BoneALisa@lemm.ee

          Whoda thunk one word isn’t enough to describe my feelings lol.

          Good as in startups shouldn’t be allowed to be founded around stolen data.

          • ASeriesOfPoorChoices@lemmy.world

            so, established companies should be allowed to steal from start ups and release their products for less than startups could ever make them, effectively shutting out all competition forever?

            or are you just a fucking hypocrite?

            • BoneALisa@lemm.ee

              No lol, no one should. Me saying AI tech startups shouldn’t be allowed to use stolen data doesn’t mean I endorse the existing companies that have already stolen it.

              But just because companies have already done it doesn’t mean we should be allowing new companies to do the same thing.

                • BoneALisa@lemm.ee

                  Lmao what? Please, explain to me how thinking that neither new companies nor existing companies should be allowed to do what they’re doing is hypocritical.

  • General_Effort@lemmy.world

    Explanation of how this works.

    These “AI models” (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual “image maker” (U-Net).

    A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed image (the latent representation), which means it takes a less powerful computer (and uses less energy). That’s what makes it possible to run Stable Diffusion at home.

    This attack targets the VAE. The image is altered so that the latent representation is that of a very different image, but still roughly the same to humans. Say, you take images of a cat and of a dog. You put both of them through the VAE to get the latent representation. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar for humans. So, what the actual image maker AI “sees” is very different from the image the human sees.

    Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.
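
    To make the cat/dog example concrete, here is a minimal PyTorch sketch of that kind of encoder attack, assuming Hugging Face diffusers. It is illustrative only: Nightshade’s actual optimization uses a perceptual similarity constraint rather than the crude pixel budget used here.

    ```python
    # Nudge a cat image so its VAE latent resembles a dog's, while keeping
    # the pixel-space change small. Illustrative, not Nightshade's method.
    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
    vae.requires_grad_(False)  # we only optimize the perturbation, not the VAE

    def poison(cat: torch.Tensor, dog: torch.Tensor,
               steps: int = 200, lr: float = 0.01, budget: float = 0.03):
        # cat, dog: (1, 3, H, W) tensors scaled to [-1, 1]
        with torch.no_grad():
            target = vae.encode(dog).latent_dist.mean  # the dog's latent
        delta = torch.zeros_like(cat, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            latent = vae.encode((cat + delta).clamp(-1, 1)).latent_dist.mean
            loss = torch.nn.functional.mse_loss(latent, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-budget, budget)  # keep the change imperceptible
        return (cat + delta).clamp(-1, 1).detach()
    ```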


    I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g. the method to check that images still look similar to humans).

    It doesn’t seem to be a very effective attack but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists that threaten digital vandalism, you may be deterred. Well, my two cents.

    • LadyAutumn@lemmy.blahaj.zone

      Yeah. Not that it’s the fault of artists that capitalism exists in its current form. Their art is the fruit of their labor, and therefore means should be taken to ensure that their labor is properly compensated. And I’m a marxist anarchist, no part of me agrees with any part of the capitalist system. But artists are effectively workers, and we enjoy the fruits of their labor. They are rarely fairly compensated for their work. In this particular instance, under the system we live in, artists’ rights should be prioritized over the interests of those who want to train models on their work.

      I’m all for janky (getting less janky as time goes on) AI images, but I don’t understand why it’s so hard to ask artists’ permission first to use their data. We already maintain public domain image databases, and loads of artists have in the past allowed their art to be used freely for any purpose. How hard is it to gather a database of art whose creators have agreed to let it be used for AI? All the time we’ve (the collective we) been arguing over this could’ve been spent implementing a system to create such a database.

      • General_Effort@lemmy.world

        That’s not quite right. A traditional worker is someone who operates machines they don’t own to make products they don’t own. Artists who are employed do not own the copyrights to what they make. These employed artists are like workers, in that sense.

        Copyrights are “intellectual property”. If one needed permission (mostly meaning, pay for it), then the money would go to the property owners. These worker-artists would not receive anything. Note that, on the whole, the owners already made what profit they could expect. Say, if it’s stills from a movie, then that movie already made a profit (or not).

        People who use their own tools and own their own product (EG artisans in Marx’s time) are members of the Petite Bourgeoisie. I think a Marxist analysis of the class dynamics would be fruitful here, but it’s beyond me.

        The spoilered bit is something I have written about the NYT lawsuit. I think it’s illuminating here, too.

        spoiler

        The NYT wants money for the use of its “intellectual property”. This is about money for property owners. When building rents go up, you wouldn’t expect construction workers to benefit, right?

        In fact, more money for property owners means that workers lose out, because where else is the money going to come from? (well, “money”)

        AI, like all previous forms of automation, allows us to produce more and better goods and services with the same amount of labor. On average, society becomes richer. Whether these gains go to the rich, or are more evenly distributed, is a choice that we, as a society, make. It’s a matter of law, not technology.

        The NYT lawsuit is about sending these gains to the rich. The NYT has already made its money from its articles. The authors were paid, in full, and will not get any more money. Giving money to these property owners will not make society any richer. It just moves wealth to property owners for being property owners. It’s about more money for the rich.

        If OpenAI has to pay these property owners for no additional labor, then it will eventually have to increase subscription fees to balance the cash flow. People, who pay a subscription, probably feel that it benefits them, whether they use it for creative writing, programming, or entertainment. They must feel that the benefit is worth, at least, that much in terms of money.

        So, the subscription fees represent a part of the gains to society. If a part of these subscription fees is paid to property owners, who did not contribute anything, then that means that this part of the social gains is funneled to property owners, IE mainly the ultra-rich, simply for being owners/ultra-rich.


        why it’s so hard to ask artists’ permission first to use their data.

        SD was trained on images from the internet. Anything. There are screenshots, charts and pure text jpgs in there. There’s product images from shopping sites and also just ordinary snapshots that someone posted. The people with the biggest individual contribution are almost certainly professional photographers. SD is not built on what one usually calls art (with apologies to photographers). An influencer who has a lot of good, well tagged images on the net has made a more positive contribution than someone who makes abstract art or stick figure comics. And let’s not forget the labor of those who tagged those images.

        You could not practically get permission from these tens or hundreds of millions of people. It would really be a shame, because the original SD reveals a lot about the stereotypes and biases on the net.

        Using permissively licensed images wouldn’t have helped a lot. I have seen enough outrage over datasets with exactly such material. People say, that’s not what they had in mind when they gave these wide permissions.

        Practically, look at wikimedia. There are so many images there which are “pirated”. Wikimedia can just take them down in response to a DMCA notice. Well, you can’t remove an image from a trained AI model. It’s not in there (if everything has worked). So what now? If that means that the model becomes illegal, then you just can’t have a model trained on such a database.

        • barsoap@lemm.ee

          People who use their own tools and own their own product (EG artisans in Marx’s time) are members of the Petite Bourgeoisie. I think a Marxist analysis of the class dynamics would be fruitful here, but it’s beyond me.

          Please don’t. Marxists, at least Marxist-Leninists, tend to start talking increasing amounts of nonsense once the Petite Bourgeoisie and Lumpen get involved.

          In any case the whole thing is (as Marx would tell you, but Marxists ignore) a function of one’s societal relations, not of the individual person or job. That relation might change from hour to hour (e.g. if you have a day job), and “does not have an employment contract” doesn’t imply “does not depend on capital for survival” – it’s perfectly possible as an artist, or pipe fitter, to own your own means of production (computer, metal tongs) and be, as a contractor, in a very similar relationship to capital as the Lumpen day-labourer: to have no say in the greater work that gets created, to be told “do this, or starve”, to be treated as an easily replaceable cog. That may even be the case if you have employees of your own. The question, and that’s why Anarchist analysis >>> Marxist analysis, is whether you’re beholden to an unjust hierarchy, in this case the one created by capital ownership, not whether you happen to own a screwdriver. As a farmer you might own millions upon millions in means of production; that doesn’t mean supermarket chains aren’t squeezing your bones dry and you can barely afford your utility bills. Capitalism is unjust hierarchy all the way up and down.

          Well, you can’t remove an image from a trained AI model. It’s not in there (if everything has worked). So what now? If that means that the model becomes illegal, then you just can’t have a model trained on such a database.

          I also can’t possibly unhear a song; that doesn’t mean that my mind, or any music I might compose, is illegal. If it’s overfitted in my mind and I want to compose music and publish that, then I’ll have to pay attention that my stuff is sufficiently different; I’d have to run an adversarial model against myself, so to speak, if I don’t want to end up having to pay royalties. If I just want to have it bouncing around my head and sing it in the shower, then I might be singing copyrighted material, but there’s no obligation for me to pay royalties either, as many aspects of copyright necessitate things such as publication or the ability to damage the original author’s income.

          • General_Effort@lemmy.world

            Well, Marx believed that the Petite Bourgeoisie would disappear. Their members, unable to economically compete, would become employed workers. Hasn’t happened, though. He also observed that this class emulated the outlook of the Haute Bourgeoisie, the rich. IDK more about that. I find it interesting how vocally in favor of right-wing economic policies some artists are, even though these policies massively favor the rich. The phrase temporarily embarrassed millionaire comes to mind. I’m curious about that, is all.

            I like how empathic your anarchist take is but I’m not really sure what to do with it.

  • kromem@lemmy.world

    This doesn’t work outside of laboratory conditions.

    It’s the equivalent of “doctors find cure for cancer (in mice).”