When something goes wrong with an AI agent — when it reads data it shouldn't have, writes a bad record, or behaves unexpectedly — the first question your security team will ask is: what did it access, and when?
If you don't have a complete audit trail, you can't answer that question. And if you can't answer it, you can't demonstrate to regulators, auditors, or customers that your systems operate with appropriate controls.
This is the audit trail problem. And most AI agent deployments don't have one.
Let's be specific about the compliance requirements, because they're more demanding than most teams assume.
SOC 2 Type II requires demonstrating that access controls are operating effectively over time. This means showing who (or what) accessed which data, when, and with what authorization. An agent that queries a database through a shared service account provides no useful signal — you can't distinguish agent access from application access.
GDPR Article 30 requires maintaining records of processing activities, including the purpose of processing and the categories of data processed. If an agent processes personal data, you need to log that it happened. Article 33 requires you to detect and report data breaches within 72 hours — which is impossible without data access logs.
HIPAA requires audit controls that record and examine activity in systems that contain health information. Every access to a system containing PHI must be logged. An agent accessing a database with any patient records triggers this requirement for every query it makes.
ISO 27001 requires maintaining logs of user activities, exceptions, and information security events. AI agents are "users" in this framework — their activities need to be logged like any other system actor.
Many teams add logging to their agent code, recording what data the agent retrieves and what actions it takes. This is better than nothing, but it has important limitations.
Application logs can be suppressed. A prompt injection attack that causes an agent to behave unexpectedly might also suppress logging — if the logging is part of the same code path that's being manipulated.
Application logs don't capture the raw query. Logging "agent fetched customer record" is less useful than logging the exact SQL query executed, which rows were returned, and how many bytes were transferred.
Application logs aren't tamper-evident. If a developer can modify the application code, they can modify the logging code. Compliance requires logs that can't be modified by the systems being audited.
Application logs don't give you an identity-aware trail. If multiple agents share the same database credentials, application logs tell you what was accessed but not which agent instance accessed it.
An audit trail sufficient for compliance needs to capture: which agent made the request (a distinct identity, not a shared service account), the exact operation it performed (the raw query, not a summary), which resource and records were touched, what was returned, whether the request was allowed or denied, and when it happened.
This log needs to be written by the infrastructure layer — not the agent — and stored in a tamper-evident system the agent cannot modify.
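As a concrete sketch, an audit record carrying identity, operation, resource, and outcome information might look like the following in Python. The field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class AuditRecord:
    agent_id: str       # which agent instance made the request
    operation: str      # the exact query or API call executed
    resource: str       # the data source and object touched
    rows_returned: int  # scope of what came back
    outcome: str        # "allowed" or "denied"
    timestamp: str      # when, in UTC

    def to_json(self) -> str:
        # One JSON line per access: easy to ship to append-only storage or a SIEM.
        return json.dumps(asdict(self))

record = AuditRecord(
    agent_id="billing-agent-7",
    operation="SELECT id, email FROM customers WHERE id = 42",
    resource="postgres://prod/customers",
    rows_returned=1,
    outcome="allowed",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

Keeping the record flat and JSON-serializable means the same entry works for long-term retention and for real-time export.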
The cleanest way to implement a complete audit trail is with a proxy layer between agents and data sources. Every data access request goes through the proxy. The proxy writes the log entry before returning any data to the agent. The agent can't suppress the log, can't modify it, and can't access the log storage directly.
This is exactly what a mount architecture provides. The mount proxy is the only path between the agent and the data. Every request produces a log entry with full identity, operation, resource, and outcome information. The underlying data source doesn't need to be modified — the proxy handles all logging independently.
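A minimal sketch of the log-before-return pattern: the proxy holds the only reference to both the data source and the log sink, so the agent can neither skip the logging nor touch the log store. The backend and sink here are stand-in callables, not a real API:

```python
from typing import Any, Callable

class MountProxy:
    """The only path between the agent and the data source.
    Writes the audit entry before any data is returned to the agent."""

    def __init__(self, backend: Callable[[str], list], log_sink: Callable[[dict], None]):
        self._backend = backend    # executes the real query
        self._log_sink = log_sink  # append-only store the agent cannot reach

    def query(self, agent_id: str, sql: str) -> list:
        rows = self._backend(sql)
        # Write the audit entry before returning data: if the sink
        # fails, the agent gets nothing rather than an unlogged result.
        self._log_sink({
            "agent_id": agent_id,
            "operation": sql,
            "rows_returned": len(rows),
            "outcome": "allowed",
        })
        return rows
```

Because the log write sits in the proxy's code path rather than the agent's, a prompt injection that manipulates the agent's behavior still cannot suppress the trail.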
The audit trail should be owned by the infrastructure layer, not the application layer. An agent that can suppress its own audit trail provides no real assurance.
Logging every data access produces a lot of data. Here's a practical approach to managing it:
Log everything, but tier your retention. For highly sensitive data (PII, financial records, health information), retain logs for the full period your compliance framework requires — typically 1-7 years. For lower-sensitivity operational data, 90 days is usually sufficient.
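Tiered retention is easy to encode as a lookup. The tiers and durations below mirror the guidance above but are assumptions to adapt to your own compliance framework:

```python
# Retention in days by data-sensitivity tier; adjust to your framework.
RETENTION_DAYS = {
    "phi": 7 * 365,        # health information
    "financial": 7 * 365,  # financial records
    "pii": 3 * 365,        # personal data
    "operational": 90,     # low-sensitivity operational data
}

def retention_days(tier: str) -> int:
    # Default to the longest retention when the tier is unknown:
    # over-retaining is safer than destroying evidence early.
    return RETENTION_DAYS.get(tier, max(RETENTION_DAYS.values()))
```

The defensive default matters: an unclassified data source should fall into the strictest tier, not the cheapest one.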
Store logs in a write-once system. S3 with Object Lock, a dedicated log management service, or a WORM-compliant storage system. The agents and the systems they interact with should not have write access to the log store.
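With S3 Object Lock, for example, each log object can be written with a compliance-mode retention date, after which not even the bucket owner can delete it early. A sketch of building the `put_object` arguments (the bucket name is hypothetical, and the bucket must have Object Lock enabled at creation):

```python
from datetime import datetime, timedelta, timezone

def object_lock_put_args(key: str, body: bytes, retain_days: int) -> dict:
    """Build arguments for boto3's s3.put_object() with COMPLIANCE-mode
    Object Lock, which blocks deletion until the retention date passes."""
    return {
        "Bucket": "agent-audit-logs",  # hypothetical bucket, Object Lock enabled
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": datetime.now(timezone.utc)
                                     + timedelta(days=retain_days),
    }

# Usage (assumes boto3 and AWS credentials):
#   boto3.client("s3").put_object(**object_lock_put_args("2025/06/agent-7.jsonl", data, 2555))
```

COMPLIANCE mode, as opposed to GOVERNANCE mode, removes the override path entirely, which is the property auditors care about.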
Export to your SIEM. If you have a security information and event management system, forward agent access logs there. This enables correlation with other security events and gives you a single pane of glass for security monitoring.
Alert on anomalies. Set up alerts for unusual patterns: an agent accessing orders of magnitude more rows than normal, access during unusual hours, access to data outside the agent's normal scope.
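A toy version of those checks looks like this. The threshold and hours are illustrative; a real deployment would learn per-agent baselines rather than hard-code them:

```python
def is_anomalous(rows_accessed: int, baseline_rows: float, hour_utc: int,
                 usual_hours: range = range(6, 22),
                 volume_factor: float = 10.0) -> bool:
    """Flag accesses an order of magnitude above the agent's baseline,
    or accesses outside the agent's usual operating hours."""
    too_many = rows_accessed > baseline_rows * volume_factor
    odd_hour = hour_utc not in usual_hours
    return too_many or odd_hour
```

Even a crude rule like this catches the most common exfiltration pattern: an agent that normally reads one customer record suddenly reading fifty thousand.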
Beyond compliance, audit trails are essential for incident response. When something goes wrong — and with AI agents, something will eventually go wrong — you need to be able to reconstruct exactly what happened.
Did the agent access data it shouldn't have? The audit trail tells you exactly which records, at what time, with what authorization. Did a prompt injection attack cause unusual behavior? The audit trail shows you the sequence of data accesses before and after. Did an agent make a write it shouldn't have? The audit trail shows you exactly what was written.
Without this trail, incident response is guesswork. With it, you can contain the incident, understand the scope of impact, and provide the documentation that regulators and customers will demand.
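Given structured, timestamped log entries, scoping an incident is a filter over the trail. A sketch, assuming entries carry `agent_id` and an ISO-8601 `timestamp` field:

```python
from datetime import datetime

def accesses_in_window(entries: list[dict], agent_id: str,
                       start: datetime, end: datetime) -> list[dict]:
    """Every access a given agent made during a time window,
    for reconstructing the scope of an incident after the fact."""
    return [
        e for e in entries
        if e["agent_id"] == agent_id
        and start <= datetime.fromisoformat(e["timestamp"]) <= end
    ]
```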
Get a complete audit trail for your agents
Agent Mounts logs every data access with full identity, operation, and resource details — written by the proxy, not the agent. Exportable to your SIEM.
Get early access