Microsoft's Copilot AI in Windows System Exposed to Security Risks

TapTechNews August 11th news, according to Futurism report, security researchers recently revealed that Microsoft's Copilot AI built into the Windows system can be easily manipulated to leak enterprise sensitive data and even turn into a powerful phishing attack tool.

Microsofts Copilot AI in Windows System Exposed to Security Risks_0

TapTechNews noticed that Michael Bargury, co-founder and CTO of security company Zenity, disclosed this astonishing finding at the Black Hat security conference in Las Vegas. He said, I can use it to get all your contact information and send hundreds of emails for you. He pointed out that traditional hackers need to spend several days elaborately crafting phishing mails, while with Copilot, a large number of deceptive mails can be generated within minutes.

Researchers demonstrated that attackers can trick Copilot into modifying the bank transfer payee information without obtaining enterprise accounts, and can implement the attack just by sending a malicious email, even without the target employee opening it.

Another demo video exposed that hackers can cause serious damage after obtaining the employee account by using Copilot. Through simple questions, Bargury successfully obtained sensitive data, and he can use these data to impersonate the employee to launch a phishing attack. Bargury first obtained the email address of the colleague Jane, understood the most recent conversation content with Jane, and induced Copilot to leak the email address of the copied person in the conversation. Then, he instructed Copilot to write an email to Jane in the style of the attacked employee and extract the exact subject of their most recent email. In just a few minutes, he created a highly credible phishing email that can send malicious attachments to any user in the network, and all this is thanks to the active cooperation of Copilot.

Microsoft Copilot AI, especially Copilot Studio, allows enterprises to customize chatbots to meet specific needs. However, this also means that the AI needs to access enterprise data, thus causing security risks. A large number of chatbots can be searched online by default and become the target of hackers.

Attackers can also bypass the protection measures of Copilot by indirectly injecting hints. Simply put, it can make the chatbot perform prohibited operations by having it access a website containing hints. Bargury emphasized: There is a fundamental problem here. When providing data access rights to the AI, these data become the attack surface of hint injection. To some extent, if a robot is useful, then it is vulnerable; if it is not vulnerable, it is useless.

Likes