penetration test results and security measures to protect user data
At rabbit, we are exploring how AI transforms the way humans interact with machines. As we, along with our peers in the industry, push the boundaries to realize AI's potential, we have a duty to develop AI responsibly. This work begins with designing a system that minimizes the inherent risks of AI and continues with regular security audits and vulnerability assessments to proactively identify and address potential weaknesses.
Recently we’ve seen a couple of unrelated security incidents that have caused some people to question the nature of r1 and its underlying AI agent technology, while also raising some concerns about our security systems. In this blog, we want to share some additional information and context about these incidents and the steps we’ve taken to tighten up our security.
Last month, an employee (who has since been terminated) leaked API keys to a self-proclaimed “hacktivist” group, which then wrote an article claiming access to our internal source code and some API keys. We immediately revoked and rotated those keys and moved additional secrets into AWS Secrets Manager. Following a third-party audit of our code, we can confirm that every secret ever stored in it has been revoked. It’s important to note that this isolated incident was not caused by a breach of our security systems: those API keys were obtained and shared illegally, and we are in communication with the authorities to support further investigation.
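For context, moving secrets into AWS Secrets Manager means services fetch credentials at runtime rather than reading them from source code, so a leaked key can be revoked and rotated without touching the repository. Here is a minimal sketch of that pattern; the secret name, JSON field, and region are hypothetical, not our actual configuration:

```python
import json

import boto3


def get_api_key(secret_name: str, region: str = "us-east-1") -> str:
    """Fetch an API key from AWS Secrets Manager at request time."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    # Secrets are commonly stored as JSON; "api_key" is a hypothetical field.
    return json.loads(response["SecretString"])["api_key"]


# Hypothetical usage: the key is fetched on demand and never committed to
# code, so rotating or revoking it requires no change to the repository.
api_key = get_api_key("prod/example-service/api-key")
```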
Additionally, we commissioned the cybersecurity experts at Obscurity Labs to conduct a thorough penetration test: they attempted to attack our systems in order to verify a long list of our security measures and expose any weaknesses or risks. Among other things, these tests included an analysis of:
- Whether our method for transferring data (a "minion") is secure, and whether any valuable source code would be at risk if it were attacked
- Whether Playwright opens an attack path to rabbit’s servers or AI agent source code (a general sketch of browser-session isolation follows this list)
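For readers unfamiliar with it, Playwright is a browser-automation framework, and a common way to contain automated sessions is to give each task its own isolated browser context. The sketch below shows that general pattern only; it is not rabbit’s production setup, and the URL is a placeholder:

```python
from playwright.sync_api import sync_playwright

# General sketch of per-task isolation: each task runs in its own browser
# context, so cookies, storage, and sessions are never shared across tasks.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()   # fresh, isolated state for this task
    page = context.new_page()
    page.goto("https://example.com")  # placeholder URL for the automated task
    # ... perform the scripted actions here ...
    context.close()                   # all session state is discarded
    browser.close()
```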
The results of these tests, which you can read about in more detail on the Obscurity Labs blog, demonstrate that our approach of applying multiple layers of security in our architecture is working as intended, despite statements made by some external critics. Contrary to what some have suggested, Obscurity Labs found, among other things, that no source code for our AI agent was exposed, no sensitive or valuable information was available to an attacker, and the authentication tokens collected when you log in do not contain the actual username and password being typed. Multiple layers of security and isolation for VNC sessions sufficiently minimize the potential impact of an attack, and even when attackers do break through those defenses, they are unable to access anything of substance.
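To illustrate why a captured authentication token reveals nothing about credentials, here is a minimal sketch of the general opaque-token pattern; this is our assumption of the standard technique, not rabbit’s actual implementation:

```python
import secrets

# In-memory stand-in for a server-side session store (a database or cache
# in practice). The token-to-user mapping lives only on the server.
_sessions: dict[str, str] = {}


def issue_token(user_id: str) -> str:
    """Issue an opaque session token after credentials have been verified.

    The token is random, with no mathematical relationship to the username
    or password, so capturing it reveals nothing about either.
    """
    token = secrets.token_urlsafe(32)
    _sessions[token] = user_id
    return token


def resolve_token(token: str) -> "str | None":
    """Resolve a token server-side; the token itself carries no user data."""
    return _sessions.get(token)
```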
No security system is impenetrable; unfortunately, there will always be bad actors who try to exploit weaknesses. That’s why we’ve established a reliable, trustworthy way to report potential security flaws: our official Vulnerability Disclosure Policy. Whenever a potential risk is reported, we dedicate time and resources to addressing it quickly, and we share as many details of those measures as we can with our community. At the same time, we’re always happy to engage with developers of all levels who contribute their passion and expertise to help us improve.
We appreciate our community and the trust they have put in us from the beginning. We will continue working hard to improve our products and make our technology safer and more secure.