A multilateral response needed to fend off AI threat
British physicist Stephen Hawking warned, “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”
Drawing attention to the pernicious uses of Artificial Intelligence, whose applications have grown rapidly from resolving complex tasks in a jiffy to healthcare, finance, robotics, gaming, virtual assistants, autonomous vehicles, fraud detection and natural language processing, India’s External Affairs Minister S Jaishankar hit the nail on the head when he cautioned the world on Sunday that ‘AI is just as dangerous as nuclear weapons’. Today, AI is evolving fast and teaching itself, be it playing video games, reconstructing images or diagnosing diseases.
To the unversed, AI is a collection of technologies that allow machines to learn, think, act and comprehend like humans. It can perform complex tasks quickly and easily, and many industries are scrambling to harness its capabilities. It not only mimics human intelligence; its pace and accuracy leave users baffled. Tamlyn Hunt, a researcher affiliated with the University of California, Santa Barbara, wrote in Scientific American that Artificial Intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity. He quotes Geoffrey Hinton, ‘the godfather of AI,’ as saying, ‘It is hard to see how you can prevent the bad actors from using [AI] for bad things.’
In 1950, Alan Turing published “Computing Machinery and Intelligence,” introducing the Turing test and opening the doors to what would come to be known as AI. After lying relatively low for five decades, research picked up pace in the 21st century, delving ever deeper into this unfathomable superintelligence tool. 2022 saw major developments: Google fired Blake Lemoine after he publicly claimed its LaMDA model was sentient, i.e., capable of perceiving or feeling things; DeepMind unveiled AlphaTensor; and Intel claimed its FakeCatcher real-time deepfake detector was 96% accurate. OpenAI released ChatGPT on November 30 that year, bringing AI to the masses.
What is dismaying is that AI can improve itself exponentially, and it has already been seen to perpetuate societal biases inadvertently. Healthcare algorithms have been found to return less accurate results for Black patients than for white patients. Amazon scrapped a recruiting algorithm after it was found to favour applicants who used words like “executed” or “captured” in their resumes, a pattern that disadvantaged women candidates. Predictive policing tools often rely on historical arrest data, which can reinforce existing patterns of racial profiling and disproportionate targeting of minority communities. These are but the tip of the iceberg.
After the generative AI-based phishing attacks of 2023 and the 2018 hack of Facebook’s user data, the potential uses of AI by cybercriminals are alarming. AI has already become a key tool in information warfare, and AI-driven deepfake technology poses a formidable challenge. For all its multifarious uses, AI is also a disruptive force: it can be used not only to manage critical infrastructure, be it in defence or power, but also to cripple it.
How have we come to such a pass, worrying over what started simply as a machine-learning tool? Governments and security experts are now debating the devastation AI could wreak unless nations coordinate to put in place a code of ethics for its responsible use. Unleashed and without any limitations, it could threaten the global order, experts worry. It is time nations came together to enact legislation and coordinate to prevent illegal and criminal activities carried out using ICT. Bodies like the United Nations should seize the initiative, as few nations have the wherewithal to stop the march from algorithms to armaments.