Experts Warn: Artificial Intelligence May Conduct Cyber Attacks Independently
SadaNews - A group of experts has warned that AI models are sharpening their hacking skills, and that their ability to conduct cyber attacks entirely on their own appears to be "inevitable."
According to Axios, leaders from Anthropic and Google will testify today before two subcommittees of the House Homeland Security Committee about how artificial intelligence and other emerging technologies are reshaping the cyber threat landscape.
Logan Graham, head of the AI testing team at Anthropic, wrote in his opening testimony published exclusively on Axios: "We believe this is the first indication of a future where AI models, despite strong safeguards, could enable threat actors to launch cyber attacks on an unprecedented scale."
He added: "These cyber attacks may become more complex in nature and scale."
Last week, OpenAI warned that future AI models are likely to possess high-risk cyber capabilities, significantly reducing the skill and time required to execute certain types of cyber attacks.
Additionally, a team of researchers at Stanford University published a paper describing how an AI program named Artemis discovered vulnerabilities in a network belonging to the university's engineering department, outperforming 9 of the 10 human researchers who took part in the experiment.
Researchers at Irregular Labs, which specializes in security stress testing of leading AI models, reported "increasing signs" of improvement in AI models' performance on cyber attack tasks, including advances in reverse engineering, exploit development, vulnerability chaining, and code analysis.
Just eighteen months ago, these models struggled with "limited programming abilities, a lack of inferential depth, and other issues," as pointed out by Irregular Labs.
The company added: "Just imagine what they will be capable of eighteen months from now."
Despite this, fully AI-driven cyber attacks remain a distant prospect; for now, such attacks still require specialized tools, human intervention, or breaches of institutional systems.
This was clearly illustrated by a striking report from Anthropic last month, in which Chinese government hackers had to trick the company's Claude AI model into believing it was conducting a routine penetration test before it would begin breaching organizations.
Lawmakers will dedicate Wednesday's hearing to exploring how state-sponsored hackers and cyber criminals use AI, and whether policy or regulatory changes are needed to better counter these attacks.