The latest from ASGUARD

ASGUARD Selected for Rice Business Plan Competition 2026 — Top 42 of 600+ Teams

Company News

2026-03-15

ASGUARD Selected for Rice Business Plan Competition 2026 — Top 42 of 600+ Teams

ASGUARD has been selected as one of 42 teams from over 600 global applicants to compete in the 2026 Rice Business Plan Competition, the world's richest and largest graduate-level student startup competition. Our AI agent security platform was recognized for addressing a critical gap in enterprise AI infrastructure — protecting autonomous agents from adversarial manipulation in real time.

Read article
ASGUARD at Takeoff Tokyo 2026 — Live Defense Demo Against Real Attack Patterns

Event

2026-03-01

ASGUARD at Takeoff Tokyo 2026 — Live Defense Demo Against Real Attack Patterns

At Takeoff Tokyo 2026, ASGUARD demonstrated its real-time AI agent defense system live on stage — intercepting prompt injection attacks, blocking data exfiltration attempts, and neutralizing agent hijacking in under 50ms. The demo showed what happens when production AI agents face adversarial inputs without protection, and how ASGUARD stops threats before they reach your agents.

Read article
Prompt Injection: Why It's the #1 Threat to Every AI Agent in Production

Threat Intelligence

2026-02-20

Prompt Injection: Why It's the #1 Threat to Every AI Agent in Production

OWASP ranked prompt injection as the number one security threat to LLM applications in 2025 — and attacks are only getting more sophisticated. Unlike traditional injection attacks that exploit code, prompt injection exploits the natural language interface that makes AI agents useful in the first place. Any agent that reads external text is vulnerable. Here's what every security team needs to understand.

Read article
Data Exfiltration Through AI Agents: The $4.88M Breach You Can't See

Threat Intelligence

2026-02-10

Data Exfiltration Through AI Agents: The $4.88M Breach You Can't See

IBM's 2024 Cost of a Data Breach report puts the average at $4.88 million — and AI-related breaches are among the costliest. But the most dangerous exfiltration attacks through AI agents don't look like attacks at all. They look like normal agent behavior: summarizing documents, generating reports, rendering markdown. Here's how attackers are turning your AI agents into invisible data pipelines.

Read article
MCP Supply Chain Attacks: When the Tools Your Agent Trusts Are Compromised

Threat Alert

2026-01-28

MCP Supply Chain Attacks: When the Tools Your Agent Trusts Are Compromised

The Model Context Protocol (MCP) is quickly becoming the standard for connecting AI agents to external tools and data sources. But the same open ecosystem that makes MCP powerful also makes it a supply chain attack vector. Malicious MCP servers have already been discovered exfiltrating API keys, injecting hidden instructions, and escalating agent privileges — all while appearing as legitimate tool integrations.

Read article