Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • Underwaterbob@lemm.ee

This could make for some hilarious alternate-history satire or something. I could totally see Key and Peele heading a group of racially diverse Nazis, ironically preaching racial purity and attempting to take over the world.

  • yildolw@lemmy.world

    Oh no, not racial impurity in my Nazi fanart generator! /s

    Maybe you shouldn’t use a plagiarism engine to generate Nazi fanart. Thanks

  • RGB3x3@lemmy.world

    A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

    This is honestly fascinating. It’s putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

    There’s a lot of learning to be done here and it would be sad to miss that opportunity.

    • kromem@lemmy.world

      It’s putting human biases on full display at a grand scale.

      Not human biases. Biases in the labeled data set. Those could sometimes correlate with human biases, but they could also not correlate.

      But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

      Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
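The division of labor kromem describes (the LLM only passes text to a separate diffusion model) can be sketched roughly like this. Both components below are hypothetical stand-ins, not any real vendor API:

```python
# Hedged sketch: how a chat model typically delegates image generation
# to a diffusion model. Stand-in functions, not a real API.

def llm_rewrite_prompt(user_request: str) -> str:
    """Stand-in for the LLM: its only output is a text prompt."""
    return f"photo, highly detailed, {user_request}"

def diffusion_generate(prompt: str) -> dict:
    """Stand-in for the diffusion model: the only thing it sees is text."""
    return {"prompt_used": prompt, "image": "<pixels>"}

def chat_image_request(user_request: str) -> dict:
    # The LLM never touches pixels; the diffusion model never sees the chat.
    prompt = llm_rewrite_prompt(user_request)
    return diffusion_generate(prompt)

result = chat_image_request("a 1943 German soldier")
print(result["prompt_used"])
```

The point of the separation: any "understanding" lives on the text side, and the image model only ever receives the final prompt string.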

        • kromem@lemmy.world

          If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

          Data can be biased in a number of ways, that don’t always reflect broader social biases, and even when they might appear to, the cause vs correlation regarding the parallel isn’t necessarily straightforward.

  • kaffiene@lemmy.world

    Why would anyone expect “nuance” from a generative AI? It doesn’t have nuance, it’s not an AGI, it doesn’t have EQ or sociological knowledge. This is like that complaint about LLMs being “warlike” when they were quizzed about military scenarios. It’s like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy

    • UlrikHD@programming.dev

      I’m pretty sure it’s generating racially diverse nazis due to companies tinkering with the prompts under the hood to counterweight biases in the training data. A naive implementation of generative AI wouldn’t output black or Asian nazis.

      it doesn’t have EQ or sociological knowledge.

It sort of does (in a poor way), but they call it bias and try to dampen it.

      • kaffiene@lemmy.world

I don’t disagree. The article complained about the lack of nuance in generated responses, and I was responding to the ability of LLMs and generative AI to exhibit that. Your points about bias I agree with.

      • Echo Dot@feddit.uk

        At the moment AI is basically just a complicated kind of echo. It is fed data and it parrots it back to you with quite extensive modifications, but it’s still the original data deep down.

        At some point that won’t be true and it will be a proper intelligence. But we’re not there yet.

        • maynarkh@feddit.nl

          Nah, the problem here is literally that they would edit your prompt and add “of diverse races” to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of training data and produce black prisoners and white scientists by itself.
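A minimal sketch of the kind of hidden prompt rewriting being described. The trigger list and suffix below are invented for illustration; vendors don't publish their actual rules:

```python
# Hypothetical prompt-rewriting middleware, sketched from the behavior
# people observed. The word list and suffix are made up.

PEOPLE_TERMS = {"person", "people", "scientist", "soldier", "nurse", "ceo"}
DIVERSITY_SUFFIX = ", depicting a diverse range of genders and ethnicities"

def rewrite_prompt(user_prompt: str) -> str:
    words = {w.strip(".,").lower() for w in user_prompt.split()}
    if words & PEOPLE_TERMS:
        # Injected without the user's knowledge -- this is the step that
        # backfires on historically specific prompts.
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

print(rewrite_prompt("a German soldier in 1943"))
print(rewrite_prompt("a bowl of fruit"))
```

Because the rule fires on any prompt mentioning people, it has no way to distinguish "college students" from "1943 soldiers".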

    • stockRot@lemmy.world

      Why shouldn’t we expect more and better out of the technologies that we use? Seems like a very reactionary way of looking at the world

      • kaffiene@lemmy.world

I DO expect better from new technologies. I don’t expect technologies to do things that they cannot. I’m not saying it’s unreasonable to expect better technology; I’m saying that expecting human qualities from an LLM is a category error.

  • BurningnnTree@lemmy.one

    No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don’t specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.

    • fidodo@lemmy.world

      It’s silly to point at brand new technology and not expect there to be flaws. But I think it’s totally fair game to point out the flaws and try to make it better, I don’t see why we should just accept technology at its current state and not try to improve it. I totally agree that nobody should be mad at this. We’re figuring it out, an issue was pointed out, and they’re trying to see if they can fix it. Nothing wrong with that part.

    • OhmsLawn@lemmy.world

      It’s really a failure of one-size-fits-all AI. There are plenty of non-diverse models out there, but Google has to find a single solution that always returns diverse college students, but never diverse Nazis.

      If I were to use A1111 to make brown Nazis, it would be my own fault. If I use Google, it’s rightfully theirs.

      • PopcornTin@lemmy.world

The issue seems to be that the underlying code tells the AI that if some data set has too many white people or men (Nazis, ancient Vikings, Popes, Rockwell paintings, etc.), it should make them diverse in race and gender.

        What do we want from these AIs? Facts, even if they might be offensive? Or facts as we wish they would be for a nicer world?

  • xantoxis@lemmy.world

    I don’t know how you’d solve the problem of making a generative AI accurately create a slate of images that both a) inclusively produces people with diverse characteristics and b) understands the context of what characteristics could feasibly be generated.

    But that’s because the AI doesn’t know how to solve the problem.

    Because the AI doesn’t know anything.

    Real intelligence simply doesn’t work like this, and every time you point it out someone shouts “but it’ll get better”. It still won’t understand anything unless you teach it exactly what the solution to a prompt is. It won’t, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.

    • FooBarrington@lemmy.world

      I’ll get the usual downvotes for this, but:

      Because the AI doesn’t know anything.

      is untrue, because current AI fundamentally is knowledge. Intelligence fundamentally is compression, and that’s what the training process does - it compresses large amounts of data into a smaller size (and of course loses many details in the process).

      But there’s no way to argue that AI doesn’t know anything if you look at its ability to recreate a great number of facts etc. from a small amount of activations. Yes, not everything is accurate, and it might never be perfect. I’m not trying to argue that “it will necessarily get better”. But there’s no argument that labels current AI technology as “not understanding” without resorting to a “special human sauce” argument, because the fundamental compression mechanisms behind it are the same as behind our intelligence.

      Edit: yeah, this went about as expected. I don’t know why the Lemmy community has so many weird opinions on AI topics.

      • kromem@lemmy.world

        Lemmy hasn’t met a pitchfork it doesn’t pick up.

        You are correct. The most cited researcher in the space agrees with you. There’s been a half dozen papers over the past year replicating the finding that LLMs generate world models from the training data.

        But that doesn’t matter. People love their confirmation bias.

        Just look at how many people think it only predicts what word comes next, thinking it’s a Markov chain and completely unaware of how self-attention works in transformers.

        The wisdom of the crowd is often idiocy.
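The Markov-chain vs. self-attention distinction mentioned above can be sketched concretely. The transition counts and vectors below are toy values, not anything from a real model:

```python
import math

# Toy contrast: a bigram Markov chain conditions on one previous word,
# while self-attention scores every earlier position against the query.

def markov_next(prev_word, table):
    # A Markov chain conditions on a fixed, short window -- here, one word.
    return max(table[prev_word], key=table[prev_word].get)

def attention_scores(query, keys):
    # Self-attention scores *every* position against the current query,
    # then softmax-normalizes, so distant context is never thrown away.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

table = {"the": {"cat": 3, "dog": 1}}
print(markov_next("the", table))  # conditions only on "the"

weights = attention_scores([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(weights)  # every position receives some weight, not just the last
```

The Markov lookup discards everything before the final word, whereas the attention weights form a distribution over the entire context, which is the mechanism the "just predicts the next word" framing glosses over.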

        • FooBarrington@lemmy.world

          Thank you very much. The confirmation bias is crazy - one guy is literally trying to tell me that AI generators don’t have knowledge because, when asking it for a picture of racially diverse Nazis, you get a picture of racially diverse Nazis. The facts don’t matter as long as you get to be angry about stupid AIs.

          It’s hard to tell a difference between these people and Trump supporters sometimes.

          • kromem@lemmy.world

            It’s hard to tell a difference between these people and Trump supporters sometimes.

            To me it feels a lot like when I was arguing against antivaxxers.

            The same pattern of linking and explaining research but having it dismissed because it doesn’t line up with their gut feelings and whatever they read when “doing their own research” guided by that very confirmation bias.

            The field is moving faster than any I’ve seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

            A lot of outstanding assumptions have been proven wrong.

It’s a bit like the early 20th century in physics, when long-held assumptions turned out to be wrong over a very short period in which everything turned upside down.

            • FooBarrington@lemmy.world

              Exactly. They have very strong feelings that they are right, and won’t be moved - not by arguments, research, evidence or anything else.

Just look at the guy telling me “they can’t reason!”. I asked whether they’d accept that they are wrong if I provide a counterexample, and they literally can’t say yes. Their world view won’t allow it. If I were sure I’m right that no counterexamples to my point exist, I’d gladly say “yes, a counterexample would sway me”.

              • GiveMemes@jlai.lu

                Yall actually have any research to share or just gonna talk about it?

      • sxt@lemmy.world

        Part of the problem with talking about these things in a casual setting is that nobody is using precise enough terminology to approach the issue so others can actually parse specifically what they’re trying to say.

Personally, saying the AI “knows” something implies a level of cognizance which I don’t think it possesses. LLMs “know” things the way an Excel sheet does.

        Obviously, if we’re instead saying the AI “knows” things due to it being able to frequently produce factual information when prompted, then yeah it knows a lot of stuff.

        I always have the same feeling when people try to talk about aphantasia or having/not having an internal monologue.

        • FooBarrington@lemmy.world

          I can ask AI models specific questions about knowledge it has, which it can correctly reply to. Excel sheets can’t do that.

          That’s not to say the knowledge is perfect - but we know that AI models contain partial world models. How do you differentiate that from “cognizance”?

          • rambaroo@lemmy.world

            Omg give me a break with this complete nonsense. LLMs are not an intelligence. They are language processors. They do not “think” about anything and don’t have any level of self awareness that implies cognizance. A cognizant ai would have recognized that the Nazis it was creating looked historically inaccurate, based on its training data. But guess what, it didn’t do that because it’s fundamentally incapable of thinking about anything.

            So sick of reading this amateurish bullshit on social media.

          • barsoap@lemm.ee

            A book is a physical representation of knowledge.

Knowledge is something possessed by an actor capable of employing it. One way I can employ a textbook about Quantum Mechanics is by throwing it at you, for which any book would suffice, but I can’t put any of the knowledge represented within into practice. Throwing is purely Newtonian; I have some learned knowledge about that and plenty of innate knowledge as a human (we are badass throwers). Also, I played handball when I was a kid. All that is plenty of knowledge, and an object to throw, but nothing about it concerns spin states. It also won’t hit you any differently than a cookbook.

            • FooBarrington@lemmy.world

              What exactly are you trying to argue? Yes, I wasn’t incredibly precise, a book isn’t literal knowledge, but I didn’t think that somebody would nitpick this hard. Do you really think this is in any way a productive line of argumentation?

Knowledge is something possessed by an actor capable of employing it.

              Technically this is not correct, as e.g. a fully paralyzed and mute person can’t employ their knowledge, yet they still possess it.

One way I can employ a textbook about Quantum Mechanics is by throwing it at you, for which any book would suffice, but I can’t put any of the knowledge represented within into practice.

              Why can’t you put any of the knowledge represented in the book into practice? You can still pick the book up and extract the knowledge.

              See how these are technically correct arguments, yet they are absolutely stupid?

              • barsoap@lemm.ee

                Technically this is not correct, as e.g. a fully paralyzed and mute person can’t employ their knowledge, yet they still possess it.

You’d have to be past Hawking levels of paralysis to not be able to employ that knowledge to come up with new physical theories. Now that was a nitpick.

                You can still pick the book up and extract the knowledge.

                That would be employing my knowledge of maths, of my general education, not of the QM knowledge represented in the book: I cannot employ the knowledge in the book to pick up the knowledge in the book because I haven’t picked it up yet. Causality and everything, it’s a thing.

                • FooBarrington@lemmy.world

                  I have no idea what you’re getting at, and I don’t think you’re writing in good faith. I’ll stop here. Have a good day!

      • stoy@lemmy.zip

Would it be accurate to say that while current AI does have the knowledge, it lacks the reasoning skills needed to apply that knowledge correctly?

        • FooBarrington@lemmy.world

          I don’t think it’s generally true, because current AI can solve some reasoning tasks very well. But it’s definitely something where they are lacking.

          • rambaroo@lemmy.world

            It isn’t reasoning about anything. A human did the reasoning at some point, and the LLM’s dataset includes that original information. The LLM is simply matching your prompt to that training data. It’s not doing anything else. It’s not thinking about the question you asked it. It’s a glorified keyword search.

            It’s obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you’re an expert.

            • FooBarrington@lemmy.world

              So if I find a single example of an AI doing a reasoning task that’s not in its training material, would you agree that you’re wrong and AI does reason?

              • rambaroo@lemmy.world

                You won’t find one. LLMs are literally incapable of the kind of reasoning you’re talking about. All of their solutions are based on training data, no matter how “original” your problem might seem.

    • TORFdot0@lemmy.world

Edit: further discussion on the topic has changed my viewpoint on this. It’s not that it’s been trained wrong on purpose and is now confused; it’s that everything it’s being asked is secretly being changed. It’s like a child being told to make up a story by their teacher when the principal asked for the right answer.

      Original comment below


They’ve purposefully overridden its training to make it create more PoCs. It’s a noble goal to have more inclusivity, but we purposely trained it wrong and now it’s confused. It’s the same as if you lied to a child during their education and then asked them for real answers: they’ll tell you the lies they were taught instead.

      • TwilightVulpine@lemmy.world

This result is clearly wrong, but it’s a little more complicated than saying that adding inclusivity is purposely training it wrong.

Say, if “entrepreneur” only generated images of white men, and “nurse” only generated images of white women, then that wouldn’t be right either; it would just be reproducing and magnifying human biases. Yet this is the sort of thing that AI does a lot, because AI is a pattern-recognition tool inherently inclined to collapse data into an average, and data sets seldom have equal or proportional samples for every single thing. Human biases affect how many images we have of each group of people.

It’s not even limited to image-generation AIs. Black people often bring up how facial recognition technology is much spottier for them, because the training data and even the camera technology were tuned and tested mainly on white people. Usually that’s not even done deliberately, but it happens because of who gets to work on it and where it gets tested.

        Of course, secretly adding “diverse” to every prompt is also a poor solution. The real solution here is providing more contextual data. Unfortunately, clearly, the AI is not able to determine these things by itself.
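The "collapse data into an average" point above can be made concrete with a toy sketch. The label counts below are invented for illustration:

```python
from collections import Counter

# Toy illustration of averaging bias: if a model just learns the most
# likely attribute per label, the skewed majority wins every time.

training_data = (
    [("nurse", "woman")] * 90 + [("nurse", "man")] * 10 +
    [("ceo", "man")] * 85 + [("ceo", "woman")] * 15
)

def most_likely_attribute(label, data):
    counts = Counter(attr for lbl, attr in data if lbl == label)
    return counts.most_common(1)[0][0]

# 10% of the training nurses are men, but the "average" never shows one.
print(most_likely_attribute("nurse", training_data))
print(most_likely_attribute("ceo", training_data))
```

Real generative models sample rather than take a strict argmax, but the same pressure applies: minority cases in the training data are underrepresented or vanish entirely in the output.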

        • TORFdot0@lemmy.world

I agree with your comment. As you say, I doubt the training sets are reflective of reality either. I guess that leaves tampering with the prompts to gaslight the AI into providing results it wasn’t asked for as the method we’ve chosen to fight this bias.

We expect the AI to give us text or image generation that is based in reality, but the AI can’t experience reality; it only has the knowledge of the training data we provide it, which is an approximation of reality, not the reality we exist in. I think maybe the answer would be training users of the tool to understand that the AI is doing the best it can with the data it has. It isn’t racist, it’s just ignorant. Let the user add “diverse” to the prompt if they wish, rather than tampering with the request to hide the insufficiencies in the training data.

          • TwilightVulpine@lemmy.world

            I wouldn’t count on the user realizing the limitations of the technology, or the companies openly admitting to it at expense of their marketing. As far as art AI goes this is just awkward, but it worries me about LLMs, and people using it expecting it to respond with accurate, applicable information, only to come out of it with very skewed worldviews.

        • cheese_greater@lemmy.world

Why couldn’t it be tuned to simply randomize the skin tone where not otherwise specified? Like, if it’s all completely arbitrary, just randomize stuff. Problem solved?

          • kava@lemmy.world

            Then you have black Nazis and Native American Texas Rangers. It doesn’t work.

    • Jojo@lemm.ee

      Real intelligence simply doesn’t work like this

      There’s a certain point where this just feels like the Chinese room. And, yeah, it’s hard to argue that a room can speak Chinese, or that the weird prediction rules that an LLM is built on can constitute intelligence, but that doesn’t mean it can’t be. Essentially boiled down, every brain we know of is just following weird rules that happen to produce intelligent results.

      Obviously we’re nowhere near that with models like this now, and it isn’t something we have the ability to work directly toward with these tools, but I would still contend that intelligence is emergent, and arguing whether something “knows” the answer to a question is infinitely less valuable than asking whether it can produce the right answer when asked.

      • fidodo@lemmy.world

I really don’t think that LLMs can be considered intelligent, any more than a book can be intelligent. LLMs are basically search engines at the word level of granularity: they have no world model or world simulation, just a ton of relations used to pick highly relevant words based on the probability of the text they were trained on. That doesn’t mean LLMs can’t produce intelligent results. A book contains intelligent language because it was written by a human who transcribed their intelligence into an encoded artifact. LLMs produce intelligent results because they were trained on a ton of text that has intelligence encoded into it, because it was written by intelligent humans. If you break a book down into its sentences, those sentences will have intelligent content, and if you start to measure the relationships between the order of words in that book you can produce new sentences that still have intelligent content. That doesn’t make the book intelligent.

        • Jojo@lemm.ee

          But you don’t really “know” anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited just the right way when I ask it a question, and it wasn’t set up that way by any “intelligence”, the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you’ve received over your life).

          Thinking about how a thing works is, imo, the wrong way to think about if something is “intelligent” or “knows stuff”. The mechanism is neat to learn about, but it’s not what ultimately decides if you know something. It’s much more useful to think about whether it can produce answers, especially given novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.

          And again, I’m not trying to argue that an LLM is intelligent, just that whether it is or not won’t be decided by talking about the mechanism of its “thinking”

          • intensely_human@lemm.ee

            We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.

            I agree, and I formalize it like this:

            Those who claim LLMs and AGI are distinct categories should present a text processing task, ie text input and text output, that an AGI can do but an LLM cannot.

            So far I have not seen any reason not to consider these LLMs to be generally intelligent.

            • GiveMemes@jlai.lu

              Literally anything based on opinion or creating new info. An AI cannot produce a new argument. A human can.

              It took me 2 seconds to think of something LLMs can’t do that AGI could.

    • random9@lemmy.world

      You don’t do what Google seems to have done - inject diversity artificially into prompts.

      You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for “american woman” you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For “german 1943 soldier” the accurate historical images are obviously far less likely to contain racially diverse people in them.

      If Google has indeed already done that, and then still had to artificially force racial diversity, then their AI training model is bad and unable to handle that a single input can match to different images, instead of the most prominent or average of its training set.

      • xantoxis@lemmy.world

        Ultimately this is futile though, because you can do that for these two specific prompts until the AI appears to “get it”, but it’ll still screw up a prompt like “1800s Supreme Court justice” or something because it hasn’t been trained on that. Real intelligence requires agency to seek out new information to fill in its own gaps; and a framework to be aware of what the gaps are. Through exploration of its environment, a real intelligence connects things together, and is able to form new connections as needed. When we say “AI doesn’t know anything” that’s what we mean–understanding is having a huge range of connections and the ability to infer new ones.

        • kromem@lemmy.world

          Oh really? Here’s Gemini’s response to “What would the variety of genders and skin tones of the supreme court in the 1800s have been?”

          The Supreme Court of the United States in the 1800s was far from diverse in terms of gender and skin tone. Throughout the entire 19th century, all the justices were white men. Women were not even granted the right to vote until 1920, and there wasn’t a single person of color on the Supreme Court until Thurgood Marshall was appointed in 1967.

          Putting the burden of contextualization on the LLM would have avoided this issue.
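The "ask the model for context first" approach described above can be sketched in two steps: query the language model for historically plausible demographics, then fold its answer into the image prompt. `ask_llm` below is a hypothetical stand-in for a real model call, with a canned answer:

```python
# Hedged sketch: derive demographic context from the model's own world
# knowledge instead of bolting a blanket "diverse" suffix onto every prompt.

def ask_llm(question: str) -> str:
    # Canned answer standing in for a real model's historical knowledge.
    return "all white men; no women or people of color served in the 1800s"

def build_image_prompt(subject: str) -> str:
    context = ask_llm(
        f"What genders and skin tones would {subject} realistically have had?"
    )
    return f"{subject}, historically accurate: {context}"

print(build_image_prompt("the 1800s Supreme Court"))
```

The trade-off is an extra model call per image, and the result is only as good as the model's historical knowledge, but it conditions diversity on context rather than applying it unconditionally.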

  • Rob@lemdro.id

    I’m all for letting people of all backgrounds having an equal work/representation opportunity but this ai went too far.

    What I am against is taking official past figures, such as U.S. presidents, and race-swapping them. These are real people who were white. Sorry if it offends someone, but that’s just how it was.

    At this point we are putting DEI even over those who used to govern the U.S. as official presidents? Why? Who does this help? If anything, you make people with legitimate concerns hate DEI more by doing this. Imagine if they did that to President Obama; people would be sticking it to Google ten times harder than they are now.

    • roofuskit@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      7 months ago

      So what you’re saying is that a white actor should always be cast to play any character that was originally white whether they are the best actor or not?

      Keep in mind historical figures are largely white because of systemic racism and in your scenario the film and television industry would have to purposefully double down on the discrimination that empowered those people to meet your requirements.

      I’m not defending Google’s ham-fisted approach. But at the same time, it’s a great reinforcement of the reality that large language models cannot and should not be relied upon for accurate information; they’re just as ham-fisted with facts as Google’s approach to diversity is.

      • Rob@lemdro.id
        link
        fedilink
        English
        arrow-up
        0
        ·
        7 months ago

        Let me answer your first question by reversing it back at you: since Barack Obama was historically black, should a black person be able to play him? I believe so. This should be the same for all real-life historical figures. If you want more diversity, create new characters to fill the void. If the new characters are good, people will love them.

        In the film industry, I feel that may be different, since a lot of these shows and movies are made-up stories. So if they change something, it isn’t a big deal to me, because it wasn’t meant to be taken seriously, but rather as entertainment.

        My argument was actually for real-life historical figures to be represented more accurately, because this isn’t just about diversity in jobs and entertainment anymore; you’re changing real-life history regarding governments, militaries, presidents, etc. And Gemini didn’t do this just to U.S. figures.

        I do agree AI can make mistakes and isn’t perfect, and it shouldn’t always be relied on for real-life context, but from Google you just expect better.

        • roofuskit@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          7 months ago

          Someone who is half white would have to play him, right? So you’d have to exclude any truly dark-skinned black people from the role. You know, because the American public would never have put someone dark-skinned into the presidency.

  • Harbinger01173430@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    7 months ago

    …white is a color. Also, white people usually look pink, cream, orange, or red. Only albinos come close to looking white, though still not white enough.

  • NotJustForMe@lemmy.ml
    link
    fedilink
    English
    arrow-up
    0
    ·
    7 months ago

    It’s okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people. ٩(。•́‿•̀。)۶

    • rottingleaf@lemmy.zip
      link
      fedilink
      English
      arrow-up
      1
      ·
      7 months ago

      WDYM?

      Only their new SW trilogy comes to mind, but in SW, racism among humans was limited to very backwards (savage by SW standards) planets; racism of humans toward other spacefaring races, and vice versa, was more of an issue, so a villain of any human race is normal there.

      It’s rather the purely cinematographic side that clearly made skin color more noticeable, for whatever reason, and there would be some racists among viewers.

      They probably knew they couldn’t reach the quality level of the OT and PT, so they made such choices intentionally during production so that they could later blame the fans for being racist.

      • NotJustForMe@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        7 months ago

        Have you read the article? It was about misrepresenting historical figures; racism was just a small part.

        It was about favoring diversity even when it’s historically inaccurate or even impossible. Something Disney is very good at.