AI could pose ‘extinction-level’ threat to humans: Report

TBS Report
14 March, 2024, 04:40 pm
Last modified: 14 March, 2024, 04:52 pm
The report commissioned by the State Department recommends US intervention 

A new report commissioned by the US State Department depicts an alarming scenario of "catastrophic" national security threats stemming from rapid advances in artificial intelligence. 

It emphasises the urgency of the federal government acting swiftly to avert disaster, CNN reports. 

The findings were based on interviews with more than 200 people conducted over a span of more than a year. 

This diverse group included top executives from prominent AI firms, cybersecurity experts, specialists in weapons of mass destruction, and government national security officials.

The report, released this week by Gladstone AI, flatly states that the most advanced AI systems could, in a worst case, "pose an extinction-level threat to the human species."

A spokesperson from the US State Department confirmed to CNN that the agency commissioned the report as part of its ongoing evaluation of how AI aligns with its mission to safeguard US interests domestically and internationally. 

However, the spokesperson emphasised that the report does not necessarily reflect the views of the US government.

The warning in the report serves as yet another reminder that while the promise of AI may attract investors and the public, there are also significant real-world risks to consider.

"AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable," Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

"But it could also bring serious risks, including catastrophic risks, that we need to be aware of," Harris said. "And a growing body of evidence — including empirical research and analysis published in the world's top AI conferences — suggests that above a certain threshold of capability, AIs could potentially become uncontrollable."

White House spokesperson Robyn Patterson said US President Joe Biden's executive order on AI is the "most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence."

"The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies," Patterson said.

Gladstone AI said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to "global and irreversible effects" in 2024. The estimates ranged from 4% to as high as 20%, according to the report, which notes the estimates were informal and likely subject to significant bias.

One of the biggest wildcards is how fast AI evolves – specifically the emergence of AGI, a hypothetical form of AI with human-level or even superhuman ability to learn.

The report says AGI is viewed as the "primary driver of catastrophic risk from loss of control" and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have all publicly stated AGI could be reached by 2028 – although others think it's much, much further away.

A related document published by Gladstone AI warns that the development of AGI and capabilities approaching AGI "would introduce catastrophic risks unlike any the United States has ever faced," amounting to "WMD-like risks" if and when they are weaponised.
