Threat level AI: NSA encourages use of AI to keep up with foreign adversaries

The intelligence community is mulling over how AI can pose a threat to national security.
Sejal Sharma
NSA: U.S. spies should tap private AI models

MysteryShot/iStock 

The world is captivated by the rise of artificial intelligence (AI) tools like ChatGPT, which have proved their worth by providing human-like answers to complex questions and even drafting research papers. But beyond problems like 'hallucination', where a model confidently repeats or invents incorrect information, nations are concerned with a more significant issue when it comes to AI.

Intelligence agencies are now weighing how AI could pose a threat to national security. In a recent interview with Bloomberg, a top U.S. spy official said intelligence agencies should use commercially available AI to keep up with foreign adversaries, because their opponents will be doing the same.

Keeping up with the times

“The intelligence community needs to find a way to take benefit of these large models without violating privacy,” said Gilbert Herrera, director of research at the National Security Agency (NSA). “If we want to realize the full power of artificial intelligence for other applications, then we’re probably going to have to rely on a partnership with industry.”

Referring to companies like Meta, Microsoft, and Alphabet that have access to massive amounts of data, Herrera wants the NSA to use large commercial AI models available on the open market. But intelligence agencies like the NSA must be extremely careful when employing such models, which are often built on biased algorithms, and must be sure of the data that feeds them. The same applies to their adversaries: a terrorist organization, for example, might use similar AI engines to create disinformation or corrupt open-source data.

“The most immediate one is that AI can now help the infamous Nigerian prince and other phishers to make more credible English-sounding attacks,” said Herrera in an interview with The Record.

This warning came days after U.S. Vice President Kamala Harris met with several CEOs of companies leading AI development. The meeting was held to address the risks associated with AI and the responsibility these companies share to ensure their products are safe and secure before they are deployed or made public. Those in attendance were Sam Altman, CEO of OpenAI; Dario Amodei, CEO of Anthropic; Satya Nadella, chairman and CEO of Microsoft; and Sundar Pichai, CEO of Google and Alphabet.

Vice President Harris said in a statement, “AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy.”
