• orca@orcas.enjoying.yachts · 10 months ago

    Everybody wants to get in on that US military money. They probably saw how haphazardly the US sends billions of dollars to Israel for genocide and wondered how they could get an endless faucet of cash for themselves.

    • Overzeetop@beehaw.org · 10 months ago

      Oh, it didn’t take Israel for the venture capitalists among them to want to harvest defense money. AI support has always had a contingent that intends to use it for military purposes. Same with remote vehicles. Same with robots. Same with, well, pretty much every advance in science: chemical agents, biological agents, lasers, space, nuclear power. Practically anything we create has a military use if you’re morally bankrupt or thirsty for power or money. Nearly every project starts out with “to serve mankind” as its goal.

      • orca@orcas.enjoying.yachts · 10 months ago

        One hundred percent. Part of the reason I ditched Spotify was the CEO investing in military AI. That was quite a while ago now.

  • AutoTL;DR@lemmings.world · 10 months ago

    🤖 I’m a bot that provides automatic summaries for articles:

    On Tuesday, ChatGPT developer OpenAI revealed that it is collaborating with the United States Defense Department on cybersecurity projects and exploring ways to prevent veteran suicide, reports Bloomberg.

    OpenAI revealed the collaboration during an interview with the news outlet at the World Economic Forum in Davos.

    The AI company recently modified its policies, allowing for certain military applications of its technology, while maintaining prohibitions against using it to develop weapons.

    OpenAI removed terms from its service agreement that previously blocked AI use in “military and warfare” situations, but the company still upholds a ban on its technology being used to develop weapons or to cause harm or property damage.

    The shift in policy appears to align OpenAI more closely with the needs of various governmental departments, including the possibility of preventing veteran suicides.

    “We’ve been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure,” OpenAI vice president of global affairs Anna Makanju said in the interview.


    Saved 48% of original text.