What does Arthur Mensch, the French co-founder of Mistral AI, make of the warnings about the extreme risks of artificial intelligence issued by the leaders of major American tech firms such as Sam Altman and Dario Amodei? At the AI summit in India, held from February 16 to February 20, OpenAI CEO Altman raised the idea of creating a kind of “[International Atomic Energy Agency] for international coordination of AI,” in response to the emergence of “true superintelligence,” which he said could appear within “a couple of years.” Meanwhile, Anthropic founder Amodei published a lengthy essay at the end of January, “The Adolescence of Technology,” in which he outlined the risks posed by advanced AI systems, including their use to create biological weapons.

“These are mostly distraction tactics,” responded Mensch, who was interviewed on Friday, February 20, by Le Monde and the radio station France Inter at the New Delhi AI summit. “In reality, the real risk of artificial intelligence in the near future is [that] of massive influence on how people think and how they vote,” he argued, taking a position contrary to that of his American counterparts. The head of the French AI start-up had already raised concerns about the risk of an “information oligopoly” forming around AI assistants such as ChatGPT (OpenAI) or Grok (xAI). He described them as potential “thought control instruments” and expressed fears about manipulation attempts during elections.

“It just so happens that the tools capable of exerting this influence are in the hands of the very people who are talking about extreme risks,” the entrepreneur continued. He downplayed the dangers often labeled “existential” or “catastrophic,” which refer to scenarios in which advanced AI could wipe out humanity. “Those extreme risks are still science fiction,” he said. “So these speeches are largely diversions, very deliberately crafted.”

  • affenlehrer@feddit.org

    I have to agree with you and with the Mistral guy.

    Regarding the Mistral guy:

    He speaks about AI but means LLMs, because that’s what he’s developing and that’s what’s hot right now.

A superintelligence that wipes out humanity is indeed science fiction at this point. And one of the most effective evil ways of using these systems right now is for political influence. See: https://www.youtube.com/watch?v=AjQNDCYL5Rg

    Regarding your points:

I totally agree that profit is definitely more important than social stability and safety for most of Big Tech.

    Also, whatever the AI as a tool is capable of, it will definitely be used for ruthless profit and power gains by powerful people.

So while I see the current AI situation as very problematic, I don’t think it will turn into a Terminator scenario anytime soon. However, if AI ever gets close to that point, powerful people, including the Big Tech CEOs, will most likely prioritize profit and power over safety once again and make it a reality.