Brin’s “We definitely messed up”, at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image-generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

    • Daxtron2@startrek.website · 8 months ago

      The issue is not that it can generate the images, it’s that the filtering pre-prompt for Gemini was coercing the images to include forced diversity. So asking for a 1940s German soldier would give you multiracial Nazis, even though that obviously doesn’t make sense and it’s explicitly not what was asked for.

        • Daxtron2@startrek.website · 8 months ago

          It is a pretty silly scenario lol. I personally don’t really care, but I can understand why they implemented the safeguard – and also why it’s overly aggressive and needs to be tuned more.

            • entropicdrift@lemmy.sdf.org · 8 months ago

              Corporations making AI tools available to the general public are under a ton of scrutiny right now and are kinda in a “damned if you do, damned if you don’t” situation. At the other extreme, if they completely uncensored it, the big controversial story would be that pedophiles are generating images of child porn or some other equally heinous shit.

              These are the inevitable growing pains of a new industry with a ton of hype and PR behind it.

            • Kichae@lemmy.ca · 8 months ago

              If you create an image generator that always returns clean-cut white men whenever you ask it to produce a “doctor” or a “businessman”, but only ever spits out Black women when you ask for a picture of someone cleaning, your PR department is going to have a bad time.

  • GadgeteerZA@beehaw.org · 8 months ago

    It’s not just historical. I’m a white male and I prompted Gemini to create images for me of a middle-aged white man building a Lego set etc. Only one image was of a white male; two of the others were an Indian and a Black male. Why, when I asked for a white male? It was an image I wanted to share with my family. Why would Gemini go off the prompt? I did not ask for diversity, nor was it expected for that purpose, and I got no other options for images which I could consider, so it was a fail.

    • Ephera@lemmy.ml · 8 months ago

      The problem is that the training data is biased and these AIs pick up on biases extremely well and reinforce them.

      For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.
      So, if you’ve then got a journalistic picture, like from the food banks mentioned in the article, suddenly there will be relatively many people of color there, compared to what the AI has seen from its other training data.
      As a result, it will store that one of the defining features of what a food bank looks like is that there are people of color there.

      To try to combat these biases, the bandaid fix is to prefix your query with instructions to generate diverse pictures. As in, literally prefix. They’re simply putting words in your mouth (which is industry standard).
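      A minimal sketch of what such a literal prefix could look like (the function name and the prefix wording here are invented for illustration, not Google’s actual code):

      ```python
      # Hypothetical sketch: a service silently prepends hidden instructions
      # to the user's prompt before it reaches the image model.
      # The prefix text below is an assumption, purely illustrative.
      DIVERSITY_PREFIX = "Depict people of a diverse range of ethnicities and genders. "

      def build_model_prompt(user_prompt: str) -> str:
          # The user never sees this concatenation happen.
          return DIVERSITY_PREFIX + user_prompt

      print(build_model_prompt("a 1940s German soldier"))
      ```

      The user’s original words survive intact, but the injected prefix changes what the model optimizes for – which matches the “putting words in your mouth” description above.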

      • Scrubbles@poptalk.scrubbles.tech · 8 months ago

        Nah, in this case I think it’s a classic case of overcorrection and prompt manipulation. The bias you’re talking about is real, so to try to combat it, they and other AI companies manipulate your prompt before feeding it to the model. I’m very sure they are stripping out “white male” and/or subbing in different ethnicities to try to cover the bias.
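        A toy illustration of that kind of stripping/substitution (the rewrite rules are made up to show the mechanism, not what any vendor actually does):

        ```python
        import re

        # Hypothetical prompt-rewriting pass: demographic terms are stripped
        # or substituted before the prompt reaches the image model.
        REWRITES = [
            (r"\bwhite\s+(?:man|male)\b", "person"),
            (r"\bwhite\s+(?:men|males)\b", "people"),
        ]

        def rewrite_prompt(prompt: str) -> str:
            for pattern, replacement in REWRITES:
                prompt = re.sub(pattern, replacement, prompt, flags=re.IGNORECASE)
            return prompt

        print(rewrite_prompt("a middle aged white man building a Lego set"))
        ```

        A rewrite like this would explain exactly the Lego failure described above: the “white man” the user asked for never reaches the model.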

      • frogmint@beehaw.org · 8 months ago

        For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.

        That is quite the bold statement. Source?

        • Ephera@lemmy.ml · 8 months ago

          I don’t think I came up with that myself, but yeah, I’ve got nothing. It would have been multiple years ago that I read about it.
          Maybe strike the “mostly”, but it seemed logical enough to me that this would be a factor, similar to how some women will avoid revealing their gender (in certain contexts on the internet) to steer clear of sexual harassment.
          For that last part, I can refer you to a woman from whom I’ve heard first-hand that she avoids voice chat in games because of that.

    • TheAlbatross@lemmy.blahaj.zone · 8 months ago

      Could you elaborate on the use case you’re describing? You were trying to make an image of a middle aged white man building Lego for your family?

      • GadgeteerZA@beehaw.org · 8 months ago

        Yes, but it does not really matter what the rest of the prompt detail was. The point was, it was supposed to be an image of me doing an activity. I’d clearly prompted for a white man, but it gave me two other images that were completely not that. Why was Gemini deviating from specific prompts like that? It seems the identical issue to the case with the Nazis: introducing variations completely of its own.

          • GadgeteerZA@beehaw.org · 8 months ago

            That is really just not relevant at all to the discussion here, but to satisfy your curiosity: I’m busy building a Lego model that a family member sent me, so the generated AI photo was supposed to depict someone who looked vaguely like me building such a Lego model. I used Bing in the past, and it usually delivered 4 usable choices. The fact that Google gave me something that was distinctly NOT what I asked for means it is messing with the specifics that are asked for.

              • memfree@beehaw.org · 8 months ago

                I’m not the lego person, but I am not taking that selfie because: 1) I don’t want to clean the house to make it look all nice before judgey relatives critique the pic, 2) my phone is old and all its pics are kinda fish-eyed, 3) I don’t actually want to spend the time doing the task right now when AI can get me an image in seconds.