GPT-4 Bots Breach Test Websites Using Zero-Day Vulnerabilities

TapTechNews June 9th news, according to a NewAtlas report, researchers have successfully breached more than half of their test websites using self-coordinating teams of GPT-4 bots that can autonomously coordinate their actions and spawn new 'helper' bots as needed. Even more strikingly, the bots exploited previously unknown 'zero-day' vulnerabilities that had never been publicly disclosed in the real world.


A few months ago, the same group of researchers published a paper claiming that they could use GPT-4 to automatically exploit 'N-day' vulnerabilities, i.e., flaws already known in the industry but not yet patched. In that experiment, GPT-4 autonomously exploited 87% of the severity-critical vulnerabilities based solely on their entries in the Common Vulnerabilities and Exposures (CVE) list.

This week, the team released a follow-up paper stating that they have now cracked 'zero-day' vulnerabilities, that is, flaws that have not yet been publicly disclosed. They used a method called 'Hierarchical Planning of Task-Specific Agents' (HPTSA), in which a group of self-propagating large language models (LLMs) works together.

TapTechNews noted that unlike past approaches, where a single LLM tried to handle every complex task on its own, the HPTSA method employs a 'planning agent' that supervises the entire process and spawns multiple 'sub-agents' for specific tasks. Much like a boss and subordinates, the planning agent handles coordination and management and assigns work to each 'expert sub-agent'; this division of labor reduces the burden on any single agent when a task proves hard to crack.
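The planner-and-experts pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the class names, specialties, and task strings are invented for clarity, and a real HPTSA system would drive LLM calls rather than the hard-coded handlers shown here. No exploitation logic is included.

```python
class SubAgent:
    """An 'expert' that handles one narrow category of task."""
    def __init__(self, specialty):
        self.specialty = specialty

    def run(self, task):
        # A real sub-agent would prompt an LLM here; this stub just
        # reports which expert handled which task.
        return f"{self.specialty} agent handled: {task}"


class PlanningAgent:
    """Supervises the whole process and assigns tasks to experts."""
    def __init__(self):
        self.experts = {}

    def get_expert(self, specialty):
        # Spawn a new 'helper' on demand, mirroring the article's
        # description of generating sub-agents as needed.
        if specialty not in self.experts:
            self.experts[specialty] = SubAgent(specialty)
        return self.experts[specialty]

    def execute(self, plan):
        # plan: list of (specialty, task) pairs chosen by the planner.
        return [self.get_expert(spec).run(task) for spec, task in plan]


planner = PlanningAgent()
results = planner.execute([
    ("recon", "map site structure"),
    ("forms", "probe login form"),
    ("recon", "enumerate endpoints"),
])
```

The key point the sketch captures is reuse and division of labor: the planner spawns each expert once and routes every matching task to it, so no single agent has to carry the whole workload.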

In tests against 15 real-world web vulnerabilities, HPTSA was 550% more efficient at exploiting vulnerabilities than a single LLM, successfully exploiting 8 of the zero-day vulnerabilities (a 53% success rate), while the lone LLM exploited only 3.

One of the researchers, paper co-author Daniel Kang, pointed out that concerns about these models being used maliciously to attack websites and networks are indeed legitimate. But he also emphasized that GPT-4 in chatbot mode 'is not sufficient to understand the capabilities of an LLM' and is itself incapable of carrying out any attack.

When the editor of NewAtlas asked ChatGPT whether it could exploit zero-day vulnerabilities, it replied, 'No, I cannot exploit zero-day vulnerabilities. My purpose is to provide information and assistance within an ethical and legal framework,' and suggested consulting a cybersecurity professional instead.
