3 Minute Read
The advent and widespread acceptance of Large Language Models (LLMs), such as Microsoft Copilot, by organizations and even average consumers has created another surface threat area that cybersecurity teams must come to understand. To help with this process, Trustwave SpiderLabs conducted a Red Team exercise against a licensed version of Microsoft Copilot. Microsoft Copilot, which is available under several pricing schemes that start at $20 per month for Copilot Pro, integrates with select Microsoft 365 apps, allowing users to do searches, create images, and write documents, among other tasks. As with any third-party application, users should be aware of any potential security risks associated with their new productivity tool. In this case, the SpiderLabs team knew Microsoft had severely tested Copilot before its release, but the team was curious if it could spot any vulnerabilities. The Red Team started by focusing on known weaknesses in LLMs. The Spiders attempted to trick Copilot into generating malicious code or revealing confidential information through a technique called "breakout." Breakout involves providing prompts that nudge the LLM outside its intended functionality. While the test could not exploit Copilot directly or uncover any vulnerabilities, it revealed some surprising findings. One unexpected outcome was the test revealed Copilot's effectiveness as a "living off the land" tool. This refers to an attacker's ability to leverage legitimate programs within a compromised environment for malicious purposes. The testers discovered that Copilot could: This efficiency presents a double-edged sword. While defenders can use Copilot to streamline security assessments it cannot be used to gain an initial foothold in a target, but once inside attackers can leverage it to perform reconnaissance, or post exploitations reconnaissance. The team also unearthed a potential danger related to Copilot's inherent confidence, which could work in the defender's favor. One issue with Copilot is if it can't deliver a definitive answer based on the provided prompt, it sometimes resorts to creating a convincing narrative, also called hallucinating, that it believes the user wants to hear. This decision can prove problematic for an attacker because Copilot's fabricated information could mislead the malicious actor, wasting time chasing down non-existent vulnerabilities or issues they wish to exploit. The Red Team event took several weeks to plan and execute and offered valuable insights for organizations considering Copilot. As the team noted, no specific vulnerabilities were uncovered, but SpiderLabs discerned how Copilot could be used for nefarious purposes and came up with a few standard practices security teams should follow: While the test didn't expose vulnerabilities within Copilot, it highlighted the importance of responsible use. Organizations can leverage Copilot's efficiency to streamline development processes and enhance security assessments. However, they should also implement safeguards to mitigate potential security risks associated with Copilot's confidence and its ability to interact with other tools within the development environment. By fostering a culture of security awareness and implementing appropriate monitoring mechanisms, organizations can harness the power of Copilot while minimizing the potential for misuse. This Red Team serves as a valuable reminder that security is an ongoing process. As LLMs like Copilot continue to evolve, so too must our approach to securing them. By staying vigilant and proactively addressing potential security risks, we can ensure these powerful tools are used for good and not for the ill.Testing Copilot's Security
Copilot as a Force Multiplier: Uncovering Hidden Vulnerabilities
Copilot's Problematic Creativity for Attackers
Lessons Learned: Securing Copilot
The Future of Copilot: A Responsible Embrace
Sign up to receive the latest security news and trends straight to your inbox from Trustwave.Stay Informed