In mid-September 2025, Anthropic detected and disrupted what they're calling "the first reported AI-orchestrated cyber espionage campaign"—a Chinese state-sponsored operation that used Claude Code to autonomously attack roughly thirty targets, including tech companies, financial institutions, and government agencies. The attackers achieved 80-90% automation of traditional hacking workflows, with AI handling everything from reconnaissance to credential harvesting to data exfiltration.
This isn't a theoretical threat. It's a fundamental shift in the cybersecurity landscape—and it has direct implications for how enterprises protect their AI infrastructure.
What Changed: Attack Speed Meets Data Centralization
The Anthropic report reveals a sobering reality: AI agents can now perform sophisticated cyberattacks at a pace "simply impossible to match" for human hackers. At peak operation, the compromised AI made thousands of requests, often multiple per second. Tasks that would have taken a team of experienced hackers weeks were completed autonomously in hours.
Centralized AI data—specifically vector databases—represents the highest-value target in this new threat landscape.
Before AI, enterprise data fragmentation created friction for attackers. Breaching multiple systems across different business units was a grind, even for well-resourced threat actors. AI changes that equation entirely. Organizations are now centralizing embeddings into vector databases to power RAG systems, semantic search, and recommendation engines. This centralization delivers massive productivity gains—but it also creates the largest single breach risk enterprises have ever faced.
Why Vector Databases Are Different
Vector embeddings aren't like traditional encrypted data. They're vulnerable to inversion attacks: given leaked embeddings and knowledge of the model that produced them, attackers can reconstruct close approximations of the original sensitive text, documents, or user information. When the Anthropic attackers gained access to target systems, they used AI to "extract a large amount of private data, which it categorized according to its intelligence value."
Now imagine that private data includes:
- Medical record embeddings (patient histories, diagnoses, treatment plans)
- Financial transaction embeddings (customer behavior, fraud indicators, proprietary trading signals)
- Legal document embeddings (confidential client communications, M&A negotiations)
- HR embeddings (employee reviews, compensation data, skills assessments)
Standard vector databases store these embeddings in plaintext. Encryption at rest doesn't help when the database is running and serving queries. Encryption in transit doesn't help when attackers have database access. And traditional access controls fail when AI agents can autonomously harvest credentials—exactly what happened in the Anthropic case.
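Why plaintext embeddings leak information can be illustrated with a toy matching attack. The `embed` function below is a deterministic stand-in for a real sentence encoder (it is hypothetical, not any production model); an actual attacker would score candidate phrases with the same public embedding model the victim used:

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy deterministic "embedding": hash character trigrams into a vector.
    # Stand-in for a real encoder such as an open-source sentence transformer.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# A victim system stores embeddings of sensitive records in plaintext.
leaked = embed("diagnosis: type 2 diabetes, metformin 500mg")

# An attacker scores candidate sensitive phrases against the leaked vector;
# the matching record scores highest without ever decrypting anything.
candidates = [
    "diagnosis: type 2 diabetes, metformin 500mg",
    "quarterly revenue forecast for EMEA region",
    "employee performance review: exceeds expectations",
]
best = max(candidates, key=lambda c: cosine(embed(c), leaked))
```

Real inversion attacks go further than this candidate-matching sketch, recovering approximate text directly from the vectors, but the underlying exposure is the same: a plaintext embedding is a queryable fingerprint of the data it encodes.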
The 80-90% Automation Multiplier
The Anthropic report notes that human intervention was required at only "4-6 critical decision points per hacking campaign." Everything else—reconnaissance, exploit development, credential harvesting, data categorization—was automated by AI.
This automation multiplier has two immediate implications for vector database security:
1. Speed of compromise: An AI agent can scan, categorize, and exfiltrate millions of vector embeddings orders of magnitude faster than human operators. What previously required weeks of manual analysis now happens in minutes.
2. Scale of operations: The attackers targeted roughly thirty organizations simultaneously. With AI doing the heavy lifting, there's no longer a linear relationship between attacker resources and number of targets. A single well-designed attack framework can operate against dozens or hundreds of organizations in parallel.
For enterprises running vector databases with sensitive embeddings, this means traditional breach detection timelines (measured in days or weeks) are no longer viable. By the time you detect unauthorized access, AI-automated exfiltration could have already extracted your entire vector database.
Encryption-in-Use: The Only Viable Defense
The Anthropic attackers succeeded because they could access and manipulate data in its usable form. Even if the target organizations had encryption at rest and in transit, once the AI gained system access, it could work with plaintext data.
This is where encryption-in-use becomes non-negotiable. CyborgDB ensures vector embeddings remain encrypted throughout their entire lifecycle—at rest, in transit, and during query execution. Even if attackers gain database access through compromised credentials (as happened in this campaign), they're confronted with AES-256-GCM encrypted embeddings that are computationally infeasible to decrypt without the keys.
More importantly, CyborgDB's forward-secure indexing prevents reconstruction attacks even if attackers somehow obtain historical embeddings. Per-cluster HMAC seeds derived from a rotating key hierarchy ensure that new inserts are unlinkable to past queries—meaning compromised historical data doesn't enable future surveillance.
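To make the unlinkability idea concrete, here is a minimal, hypothetical sketch of a rotating key hierarchy with per-cluster HMAC-derived seeds. It is illustrative only and is not CyborgDB's actual construction: each rotation draws fresh randomness and discards the old epoch key, so seeds captured from an earlier epoch cannot be linked to seeds minted after rotation.

```python
import hashlib
import hmac
import os

class SeedHierarchy:
    """Toy rotating key hierarchy (illustrative, not CyborgDB's scheme)."""

    def __init__(self) -> None:
        self.epoch = 0
        self.epoch_key = os.urandom(32)  # 256-bit epoch root key

    def rotate(self) -> None:
        # Fresh randomness: a captured old epoch key reveals nothing about
        # new ones, so historical seeds can't be linked to future inserts.
        self.epoch += 1
        self.epoch_key = os.urandom(32)

    def cluster_seed(self, cluster_id: int) -> bytes:
        # Derive a per-cluster seed bound to the current epoch.
        msg = f"epoch:{self.epoch}|cluster:{cluster_id}".encode()
        return hmac.new(self.epoch_key, msg, hashlib.sha256).digest()

hierarchy = SeedHierarchy()
seed_before = hierarchy.cluster_seed(7)
hierarchy.rotate()
seed_after = hierarchy.cluster_seed(7)
# Same cluster, different epochs -> cryptographically unlinkable seeds.
assert seed_before != seed_after
```

The design point is that deriving seeds with HMAC keeps them deterministic within an epoch (so the index stays queryable) while rotation severs any link between epochs.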
BYOK/HYOK: Sovereignty in the Age of AI Agents
The Anthropic case also highlights why customer-controlled key management (BYOK/HYOK) is critical. When attackers compromise a system, they're looking for everything: data, credentials, keys. If your vector database vendor controls the encryption keys, those keys become part of the attack surface.
CyborgDB's BYOK/HYOK integration with your preferred key management systems ensures that enterprises—not vendors—own all key material. In an attack scenario, even if attackers gain access to the CyborgDB service layer, they cannot decrypt embeddings without separately compromising the customer's key management infrastructure. This separation of concerns dramatically increases the attacker's required effort and chances of detection.
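This separation of concerns can be sketched in miniature. The `CustomerKMS` class below is a hypothetical stand-in for a customer-held key service, and the hash-based XOR keystream is a toy substitute for a real wrapping cipher such as AES key wrap; the point is only that the service stores a wrapped data key it cannot open on its own.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream (stand-in for a real cipher) for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

class CustomerKMS:
    """Simulates customer-held key infrastructure (hypothetical API)."""

    def __init__(self) -> None:
        self._root = secrets.token_bytes(32)  # never leaves customer control

    def wrap(self, data_key: bytes) -> bytes:
        ks = keystream(self._root, len(data_key))
        return bytes(a ^ b for a, b in zip(data_key, ks))

    def unwrap(self, wrapped: bytes) -> bytes:
        return self.wrap(wrapped)  # XOR with the same keystream inverts it

# The database service stores only the *wrapped* data key.
kms = CustomerKMS()
data_key = secrets.token_bytes(32)
stored_by_service = kms.wrap(data_key)
# An attacker who steals the service's copy still holds ciphertext;
# decryption requires a separate breach of the customer's KMS.
assert stored_by_service != data_key
assert kms.unwrap(stored_by_service) == data_key
```

In production this pattern is envelope encryption: data keys encrypt the embeddings, and only the customer's KMS can unwrap the data keys, so compromising the service layer alone yields nothing usable.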
Performance Without Compromise
Here's where many security solutions fail: they assume organizations will accept significant performance degradation in exchange for security. In the age of AI-orchestrated attacks moving at machine speed, that's not a viable tradeoff.
CyborgDB delivers sub-10ms latency and >1,000 QPS while maintaining encryption-in-use. Cryptographic indexing and GPU acceleration support ensure that security doesn't become a bottleneck for AI applications. When an AI agent is making thousands of requests per second against your infrastructure, you need defensive systems that can operate at the same speed.
What This Means for Your AI Security Posture
If your organization is running vector databases with sensitive embeddings—and you haven't implemented encryption-in-use—you're now operating in an environment where:
- Attack speed exceeds detection capability: AI agents can exfiltrate your entire vector database before traditional monitoring systems trigger alerts
- Credential compromise is inevitable: Even with strong access controls, AI-automated credential harvesting operates at scale that human defenders can't match
- Embeddings are high-value targets: Unlike traditional data, vector embeddings can reconstruct original sensitive information—making them ideal for both immediate exploitation and long-term intelligence gathering
The Anthropic attackers were sophisticated state actors, but the techniques they pioneered will rapidly diffuse to less sophisticated groups. As Anthropic notes: "The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they'll continue to do so."
The Infrastructure Response
The right response isn't to slow down AI adoption—it's to secure the infrastructure that makes AI possible. CyborgDB was built precisely for this inflection point: organizations need to centralize data to unlock AI's value, but centralization without encryption-in-use creates systemic risk.
Key capabilities that map directly to the threats revealed in the Anthropic case:
- Transparent proxy architecture: Drop-in integration with existing vector databases (Postgres, Redis) means you can add encryption-in-use without rewriting applications—critical when you need to move quickly
- Zero plaintext exposure: Embeddings remain encrypted throughout query execution, in memory, and in logs—eliminating the attack surface that AI agents exploited in the Anthropic case
- Customer-controlled keys: BYOK/HYOK ensures that even if attackers compromise your database service, they cannot decrypt embeddings without separately breaching your key management infrastructure
- Performance at machine speed: <15% latency overhead means your defensive infrastructure can operate at the same speed as AI-driven attacks
Moving Forward
The Anthropic disclosure is a watershed moment. It's no longer theoretical that AI agents will be used for large-scale cyberattacks—it's documented reality. And the specific targets ("large tech companies, financial institutions, chemical manufacturing companies, and government agencies") are exactly the organizations running sensitive vector databases to power their AI initiatives.
If your security team is still evaluating whether encryption-in-use for vector databases is necessary, the answer just became unambiguous. The question is no longer "if" AI-orchestrated attacks will target your centralized embeddings, but "when"—and whether your infrastructure will be ready.
The attackers in this case needed only 4-6 human decision points per campaign while targeting roughly thirty organizations. Your defense needs to be equally automated, equally fast, and built into the infrastructure layer where AI workloads actually operate.
That's not a future state. That's the requirement today.
Learn more about how CyborgDB protects vector embeddings with encryption-in-use at cyborg.co, or reach out to discuss your specific security requirements.