The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • Wolf_359@lemmy.world · 1 year ago

    Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.

    No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don’t fight each other directly.

    Ukraine had nukes, gave them up, and was promptly invaded by Russia.

    Things that seem dangerous aren’t always dangerous. Things that seem safe aren’t always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.

    • Hexagon@feddit.it · 1 year ago

      Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is gonna break loose. This has almost happened multiple times:

      https://en.wikipedia.org/wiki/List_of_nuclear_close_calls

      We’ve just been lucky so far.

      And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.

      • gravitas_deficiency@sh.itjust.works · 1 year ago

        …and the development and use of nuclear power has been one of the most important developments in civil infrastructure in the last century.

        Nuclear isn’t categorically free from the potential to harm, but it can also do a whole hell of a lot for humanity if used the right way. We understand it enough to know how to use it carefully and safely in civil applications.

        We’ll probably get to the same place with ML… eventually. Right now, everyone’s just throwing tons of random problems at it to see what sticks, which is not what one could call responsible use - particularly when the outputs are used widely in production environments.