• really@lemmy.world

    It’s amazing to see all these memes and meme templates the last few days. That was peak Reddit back in 2009-10.

    I feel good for Lemmy if this continues.

  • mob@lemmy.world

    Realistically, what are the dangers of AI?

    I get that it might shake up employment, like I assume calculators and computers did, but what are the other reasonable things to worry about?

      • CyanFen@lemmy.one

        And what is the danger in this? At this point everyone knows AI can make realistic fake content. It’s unlikely that someone in a position of power, say, would do anything rash after seeing an AI video, knowing the technology exists. No wars were started over photoshopped images.

        • qisope@lemmy.world

          The target isn’t people in power; the target of these tools is the general population. Disinformation combined with a lack of critical thinking is already bad enough just from carefully cropped, edited, or out-of-context media. When the new tools can create realistic video with voice-matched audio, more people will be fooled, and plenty of them will happily believe whatever reinforces their existing position.

          • Catsrules@lemmy.ml

            Also, if someone in power gets caught doing something bad, they could muddy the waters by claiming it was AI-generated fake content.

        • rockSlayer@lemmy.world

          Imagine the president giving a very important message to the people. Using content generation, a bad actor could insert minor alterations that change the meaning of an important sentence and then spread it naturally on social media. That could have dramatic implications for disinformation campaigns.

            • rockSlayer@lemmy.world

              Not nearly as well as it can now. Frame generation and speech imitators get better every day. AI is far better than it was 5 years ago, and 10 years ago models like ChatGPT and Stable Diffusion were things of science fiction.

              • mob@lemmy.world

                I mean, it makes this more accessible to the general public, but for anyone who was seriously attempting to deepfake a presidential announcement… the resources have been out there.

                Like 5 years ago, Jordan Peele did an Obama deepfake

                It wasn’t perfect, but it was pretty close. Someone with real motivation probably could have made it 100% believable.

      • mob@lemmy.world

        Yeah, I can see that, but that’s already been happening for a long time now. It might actually work in our favor, though, by forcing us to actually do something to combat misinformation.

        Maybe we will finally come up with a universally accepted way to verify things.
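
        One hypothetical shape that verification could take (just a sketch, not something anyone in the thread proposed): publishers cryptographically sign what they release, and anyone can check a clip against the publisher’s public key, so any edit shows up as a broken signature. A minimal Python sketch, assuming the third-party cryptography package and placeholder file contents:

        ```python
        # Hypothetical sketch of signed-content verification (assumes `pip install cryptography`).
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric import ed25519

        # Publisher side: generate a key pair and sign the raw bytes of the release.
        private_key = ed25519.Ed25519PrivateKey.generate()
        public_key = private_key.public_key()

        video_bytes = b"placeholder for the published video's raw bytes"
        signature = private_key.sign(video_bytes)

        # Viewer side: check the clip against the publisher's public key.
        # Any alteration of the bytes makes verify() raise InvalidSignature.
        try:
            public_key.verify(signature, video_bytes)
            print("Signature valid: this clip matches what the publisher signed.")
        except InvalidSignature:
            print("Signature invalid: altered content or a different source.")
        ```

        The hard part isn’t the cryptography, it’s getting publishers, platforms, and viewers to agree on one scheme, which is why “universally accepted” is the sticking point.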

    • Freesoftwareenjoyer@lemmy.world

      The thing I worry about is that people who misunderstand AI will try to block progress and slow down the adoption of useful technology. There is already plenty of misinformation about it in the media. Same with cryptocurrencies.