AI May Cause Human Extinction: ChatGPT Creator Sam Altman and Microsoft CTO Kevin Scott

ChatGPT Creator Sam Altman

Highlights

Tech leaders, including OpenAI founder Sam Altman and Microsoft CTO Kevin Scott, have warned that artificial intelligence (AI) technology should be viewed as a significant risk to society.

AI tools like ChatGPT are expanding at an alarming rate. Although these tools are designed to be simple to use, top tech leaders worldwide have warned of the dire consequences of AI. Tech leaders, including OpenAI founder Sam Altman and Microsoft CTO Kevin Scott, among others, have warned that artificial intelligence (AI) technology should be viewed as a significant risk to society, one as dangerous for humanity as pandemics and nuclear war. The Center for AI Safety issued a statement, signed by hundreds of executives and academics, emphasizing the need to prioritize AI regulation and address the risks the technology poses to humanity.

The statement highlights concerns about the potential for AI to impact labor markets, negatively affect public health, and enable the "weaponization of misinformation," discrimination, and spoofing. Industry figures, including the leaders of Google's DeepMind, OpenAI (the developer of ChatGPT), and AI startup Anthropic, have called for regulations due to existential fears associated with AI.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said.

Geoffrey Hinton, a leading figure in AI, recently left Google because he believes that AI poses a significant risk to humanity. His departure comes as the UK government, which had previously taken a different view of AI, now acknowledges its potential risks.

The statement is significant both for its wide range of signatories and for its focus on existential concerns. The large number of signatories reflects a growing understanding within the AI community of the genuine threats posed by AI technology, according to Michael Osborne, a professor of machine learning at the University of Oxford and co-founder of Mind Foundry.

"It really is remarkable that so many people signed up to this letter," he said. "That does show that there is a growing realization among those of us working in AI that existential risks are a real concern."

Concerns about AI regulation stem from the technology's rapid growth and widespread use, which have exceeded industry expectations. The complex and not fully understood nature of AI demands proactive measures to reduce potential risks. The statement focuses on the immediate need to address AI-related societal risks and draws attention to concerns expressed by a diverse group of experts. It urges governments, organizations and the world community to treat this problem seriously.

"Because we don't understand AI very well, there is a prospect that it might play a role as a kind of new competing organism on the planet, a sort of invasive species that we've designed that might play some devastating role in our survival as a species," Osborne told the Guardian about the risks of AI.
