The Five Most-Asked Questions in 2026 SOC 2 Type II Audits for AI-Enabled SaaS
SOC 2 Type II auditors have caught up with AI-enabled SaaS faster than most product teams expected. The 2024 audits, by and large, treated AI features as ordinary software with a third-party API call. The 2026 audits, in nearly every TekNinjas client engagement we have observed, ask a specific set of questions about how the AI feature is built, monitored, and held accountable.
The same five questions show up across Big Four firms and mid-tier audit shops. Companies that have the answers documented when the auditor walks in close the audit on schedule. Companies that have to assemble the answers during the audit window add four to eight weeks to the timeline and burn engineering hours that were supposed to be spent on the next quarter's roadmap.
Question one: who is the data subject and what is the lawful basis
Auditors in 2026 ask, for every AI feature in scope, who the data subject of the inputs is and on what basis the company processes that data. The answer needs to map cleanly onto the company's privacy policy, the customer's data processing addendum, and (where applicable) the regional regulation that governs the workload.
The trap most product teams walk into is that the AI feature uses data the customer would not have expected to flow through a third-party model. A support ticket includes a customer's name and a complaint that quotes a partial credit card number. A meeting transcript includes a candidate's salary expectation. The original system stored that data under a clear contractual basis. The AI feature, particularly if it routes through an external model API, may not have the same coverage.
The documentation an auditor wants to see is a data flow diagram that shows the path from the customer's input, through any pre-processing, to the model API, and back. The diagram is paired with a register that names the lawful basis for each data category, and a residency map for any cross-border transfers. Companies that have this register before the audit pass this question in 15 minutes. Companies that do not have it spend two weeks in workshops with legal and engineering.
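The register does not need special tooling. A minimal machine-readable sketch, with hypothetical data categories, field names, and destinations, can look like this:

```python
from dataclasses import dataclass

@dataclass
class DataCategoryEntry:
    """One row of the lawful-basis register for an AI feature."""
    category: str        # e.g. "support ticket body"
    data_subject: str    # whose data it is
    lawful_basis: str    # contractual, consent, legitimate interest, ...
    leaves_region: bool  # True if the model API call crosses a border
    destination: str     # where the data lands (provider, region)

# Hypothetical entries for a support-summarization feature.
REGISTER = [
    DataCategoryEntry(
        category="support ticket body",
        data_subject="customer's end user",
        lawful_basis="contract (DPA sub-processor clause)",
        leaves_region=True,
        destination="model API, us-east-1",
    ),
    DataCategoryEntry(
        category="agent draft reply",
        data_subject="customer's end user",
        lawful_basis="contract (DPA sub-processor clause)",
        leaves_region=False,
        destination="internal store, eu-west-1",
    ),
]

# A residency check an auditor can rerun: every category whose
# processing crosses a regional boundary.
cross_border = [e.category for e in REGISTER if e.leaves_region]
print("Cross-border categories:", cross_border)
```

The same entries double as the residency map, which is one reason a structured register beats a diagram-only answer.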
Question two: how do you detect and respond to model drift
The 2026 audit asks how the company knows the model is still doing what it was approved to do. The expectation is not that the company has a continuous evaluation pipeline. The expectation is that the company has defined a set of behaviors the model is supposed to exhibit, has instrumented the production system to detect deviations from those behaviors, and has a documented response plan when deviations cross the alert threshold.
The behavior set typically includes accuracy on a held-out evaluation set, refusal rates on a defined adversarial set, hallucination rates on a domain-specific test, and latency at the service level. The auditor will ask to see the most recent month of monitoring data and the most recent incident the team responded to.
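The alert threshold does not need to be sophisticated to pass; it needs to be defined and checked on a schedule. A minimal sketch of that check, with illustrative metric names and threshold values rather than prescribed ones:

```python
# Hypothetical drift check: compare the latest batch of production
# metrics against the thresholds the model was approved under.
APPROVED_THRESHOLDS = {
    "eval_accuracy_min": 0.92,       # held-out evaluation set
    "refusal_rate_max": 0.05,        # defined adversarial set
    "hallucination_rate_max": 0.02,  # domain-specific test
    "p95_latency_ms_max": 1500,      # service-level latency
}

def check_drift(latest: dict) -> list[str]:
    """Return the behaviors that crossed their alert threshold."""
    alerts = []
    if latest["eval_accuracy"] < APPROVED_THRESHOLDS["eval_accuracy_min"]:
        alerts.append("eval accuracy below approved floor")
    if latest["refusal_rate"] > APPROVED_THRESHOLDS["refusal_rate_max"]:
        alerts.append("refusal rate above approved ceiling")
    if latest["hallucination_rate"] > APPROVED_THRESHOLDS["hallucination_rate_max"]:
        alerts.append("hallucination rate above approved ceiling")
    if latest["p95_latency_ms"] > APPROVED_THRESHOLDS["p95_latency_ms_max"]:
        alerts.append("p95 latency above service level")
    return alerts

# Example monthly run an auditor could ask to see the output of.
for alert in check_drift({
    "eval_accuracy": 0.90,
    "refusal_rate": 0.03,
    "hallucination_rate": 0.01,
    "p95_latency_ms": 1200,
}):
    print("ALERT:", alert)  # each alert opens a documented incident review
```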
If the most recent incident review reads as "we noticed it, we did nothing," the auditor flags the control as deficient. If the review documents the deviation, the response, the root cause, and the change to the system, the control passes. The discipline that matters here is not the sophistication of the monitoring. It is the consistency of the documentation.
Question three: who has access to the prompts and the outputs
Auditors have started asking specifically about prompt and output access controls. The reason is that, in many AI-enabled SaaS products, the prompt is the place where the most sensitive context lives (the system prompt that contains business rules, the user's working data, the retrieved knowledge base content) and the output is the place where the most sensitive synthesis lives (the answer, the summary, the recommendation).
The expected control is that prompt logs and output logs are subject to the same access controls as the data they touch. If a customer's PII flows through a prompt, the log of that prompt is treated as PII. If an internal salary recommendation flows through an output, the log of that output is treated as confidential personnel data.
The most common deficiency in this category, in our work with clients in 2026, is that the LLM observability tooling (Helicone, LangSmith, custom logging stacks) was set up by the engineering team without consulting the security team, and the production logs end up accessible to a wider audience than the underlying data should be. The remediation is straightforward but takes a few weeks: scope the observability access, add a retention policy, and document the control.
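The control itself is tool-agnostic. A minimal sketch of the idea on a custom logging stack, with hypothetical record fields and role grants (this is not the API of Helicone, LangSmith, or any specific product):

```python
from dataclasses import dataclass

# The expected control: a prompt or output log record inherits the
# classification of the data it carries, and reads are gated on that
# classification rather than on blanket log access.

@dataclass
class LogRecord:
    record_id: str
    classification: str  # "public", "pii", "confidential-personnel", ...
    body: str

# Illustrative role-to-classification grants; real grants would come
# from the company's existing access-control system.
GRANTS = {
    "oncall-engineer": {"public"},
    "security-reviewer": {"public", "pii", "confidential-personnel"},
}

def read_log(record: LogRecord, role: str) -> str:
    if record.classification not in GRANTS.get(role, set()):
        raise PermissionError(
            f"{role} may not read {record.classification} log records"
        )
    return record.body

record = LogRecord("r-101", "pii", "prompt containing a customer name")
print(read_log(record, "security-reviewer"))  # allowed
# read_log(record, "oncall-engineer")         # raises PermissionError
```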
Question four: what happens when a customer asks for deletion
Data subject deletion has been a SOC 2 question for years. The 2026 version of the question has a new twist for AI-enabled features. When a customer requests deletion, the company has to demonstrate that the customer's data has been removed from operational systems, from backups within the documented retention window, and from any places the AI pipeline cached or indexed it.
The places the AI pipeline cached or indexed the data are the part that catches teams off guard. Embeddings stored in a vector database. Cached responses in a prompt cache. Retrieved documents in a knowledge base index. Conversation history in an agent state store. Each of these is a place the customer's data lives, and each is a place that has to be in scope for the deletion process.
The control that auditors want to see is a deletion playbook that names every system, including the AI-pipeline systems, and the retention behavior of each. Auditors are also asking for evidence that the playbook has been executed at least once and that the execution was logged. The customer-facing privacy notice should describe the AI processing in enough detail that the deletion right is meaningful.
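A minimal sketch of what a logged playbook execution can look like, with hypothetical system names and a placeholder where each system's real deletion call would go:

```python
import json
from datetime import datetime, timezone

# Hypothetical deletion playbook: every system the customer's data can
# live in, including the AI-pipeline systems, is named explicitly, and
# each execution is logged so the auditor has evidence it ran.
SYSTEMS_IN_SCOPE = [
    "primary database",
    "backup set (within documented retention window)",
    "vector database (embeddings)",
    "prompt cache (cached responses)",
    "knowledge base index (retrieved documents)",
    "agent state store (conversation history)",
    "observability logs (prompts and outputs)",
]

def run_deletion(customer_id: str) -> dict:
    """Delete the customer's data from every named system and log it."""
    executed = []
    for system in SYSTEMS_IN_SCOPE:
        # Each system would have its own deletion call here; the
        # placeholder only records that the step ran.
        executed.append({"system": system, "status": "deleted"})
    entry = {
        "customer_id": customer_id,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "steps": executed,
    }
    # Append-only evidence log the auditor can review.
    with open("deletion_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

run_deletion("cust-4821")
```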
Question five: how do you govern third-party model providers
The fifth question is about the model providers themselves. The auditor asks how the company has assessed the AI vendor's security posture, what contractual commitments the company has obtained around training data use, and what continuous monitoring is in place to detect changes in the vendor's posture.
The expected documentation is a vendor risk assessment that covers Anthropic, OpenAI, Google, AWS Bedrock, or any other model provider in scope. The assessment includes the SOC 2 report (or equivalent) of the provider, the contractual data-handling commitments, the residency commitments, and the company's process for reviewing the provider on a defined cadence (typically annual).
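A minimal sketch of how the review cadence can be turned into a recurring check, with hypothetical vendor entries and field names:

```python
from datetime import date

# Hypothetical vendor register with a defined review cadence. A simple
# recurring check turns "we review vendors annually" into a control
# with evidence: the list of providers overdue for review.
REVIEW_CADENCE_DAYS = 365  # annual, per the documented cadence

VENDOR_REGISTER = [
    {"vendor": "model provider A", "last_reviewed": date(2025, 9, 1),
     "soc2_report_on_file": True, "training_data_commitment": True},
    {"vendor": "model provider B", "last_reviewed": date(2024, 11, 15),
     "soc2_report_on_file": True, "training_data_commitment": False},
]

def overdue_reviews(today: date) -> list[str]:
    """Flag vendors past cadence or missing required commitments."""
    return [
        v["vendor"] for v in VENDOR_REGISTER
        if (today - v["last_reviewed"]).days > REVIEW_CADENCE_DAYS
        or not v["soc2_report_on_file"]
        or not v["training_data_commitment"]
    ]

print(overdue_reviews(date(2026, 3, 1)))
# -> ['model provider B']  (review overdue, no training-data commitment)
```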
The teams that have already been through SOC 2 for cloud-infrastructure vendors have the muscle for this. The teams that have not, or that adopted a model provider during a budget cycle without a formal vendor review, will need to backfill the assessment. We have seen this take from two weeks (when the model provider's compliance package is mature) to two months (when the assessment requires custom contractual language).
What to put on this quarter's roadmap
Companies that will be audited in the second half of 2026 should have all five questions documented and rehearsed by the end of Q2. The artifacts that take longest to produce are the data flow diagrams (because they require alignment between product, engineering, and legal) and the vendor risk assessments (because they require the vendor's cooperation and a procurement process). Both should be started now if they are not already in flight.
The auditors are not trying to fail the AI feature. They are trying to confirm that the company knows what the feature does, can detect when it stops doing it, and has a plan for when something goes wrong. That discipline, more than any specific control, is what separates the audits that finish on time from the ones that do not.
Get audit-ready before the auditor arrives
A four-week TekNinjas SOC 2 readiness sprint produces the data flow diagrams, monitoring controls, deletion playbook, and vendor assessments your AI features will need at audit time.
Sources: AICPA SOC 2 Trust Services Criteria 2017 (current version), patterns observed across TekNinjas client SOC 2 Type II audits in 2025 and 2026, Cloud Security Alliance AI Controls Matrix 2025, ISACA AI audit guidance.
Continue the conversation
Have a question about this post or want to talk about how it applies to your team? Send us a note. We read every one.