
What is the view of Frenchman Arthur Mensch, the co-founder of Mistral AI, on the warnings about the extreme risks of artificial intelligence issued by the leaders of major American tech firms such as Sam Altman and Dario Amodei? At the AI summit in India, held from February 16 to February 20, OpenAI CEO Altman raised the idea of creating a kind of “[International Atomic Energy Agency] for international coordination of AI,” in response to the emergence of “true superintelligence,” which he said could appear within “a couple of years.” Meanwhile, Anthropic founder Amodei published a lengthy essay at the end of January, “The Adolescence of Technology,” in which he outlined the risks posed by advanced AI systems, including their potential use to create biological weapons.

“These are mostly distraction tactics,” responded Mensch, who was interviewed on Friday, February 20, by Le Monde and by the radio station France Inter at the New Delhi AI summit. “In reality, the real risk of artificial intelligence in the near future is [that] of massive influence on how people think and how they vote,” he argued, taking a position contrary to his American counterparts. The head of the French AI start-up had already raised concerns about the risk of an “information oligopoly” forming with AI assistants such as ChatGPT (OpenAI) or Grok (xAI). He described them as potential “thought control instruments” and expressed fears about manipulation attempts during elections.

“It just so happens that the tools capable of exerting this influence are in the hands of the very people who are talking about extreme risks,” the entrepreneur continued. He downplayed the dangers often labeled “existential” or “catastrophic,” which refer to scenarios in which advanced AI could wipe out humanity. “Those extreme risks are still science fiction,” he said. “So these speeches are largely diversions, very deliberately crafted.”

  • Jesus_666@lemmy.world
    3 days ago

    If the perceived threat is a model going rogue, nobody pays attention to the model operating as intended.