AI Security & SOC 2 Readiness Assessment

Discover where your organization stands on AI security maturity. Answer these questions honestly to receive personalized recommendations for your journey.

Please answer all questions before calculating your score.

1. Discovery & Inventory

Do you maintain a complete inventory of all AI systems in use across your organization?

Not Started - We don’t track AI tools systematically
Initial - Teams informally mention what they use
Developing - We have a central list, but it’s often outdated
Established - We maintain a living inventory with owners, purposes, and risk ratings

Have you categorized your AI agents by their potential impact and autonomy level?

Not Started - All AI is treated the same
Initial - We discuss high-risk vs. low-risk informally
Developing - We have documented risk tiers with some criteria
Established - Each AI system has a clear risk rating tied to specific controls

Do you have a clear “authority matrix” defining what each AI agent can access and modify?

Not Started - Permissions are ad-hoc or overly broad
Initial - We know generally what agents “should” access
Developing - We document permissions but rarely verify them
Established - Every agent has defined, minimum-necessary permissions that are regularly audited

2. Governance & Accountability

Is there clear executive accountability for AI security and compliance?

Not Started - AI security is handled reactively by different teams
Initial - One person “keeps an eye on it” informally
Developing - We’ve assigned ownership but without clear authority or budget
Established - Executive leadership owns AI governance with defined budgets and decision rights

Have you created and communicated AI-specific policies?

Not Started - We rely on general IT policies
Initial - We’ve drafted AI guidelines but haven’t rolled them out
Developing - Policies exist and are somewhat enforced
Established - Clear, accessible policies that teams reference regularly and leadership reviews annually

Are your teams trained on AI-specific security risks?

Not Started - No AI-focused training exists
Initial - We’ve mentioned AI risks in general security training
Developing - We provide AI security training, but coverage is inconsistent
Established - Regular, role-specific training with measurable comprehension

3. Technical Defenses

Are AI agents treated as high-privilege identities with appropriate authentication?

Not Started - AI uses shared credentials or overly permissive service accounts
Initial - We’ve discussed treating AI as non-human identities
Developing - Some agents have dedicated credentials with basic controls
Established - All agents use cryptographically secure authentication with Zero Trust principles

Do your AI agents operate with minimum necessary permissions?

Not Started - Agents often have broad “admin” or “power user” access
Initial - We’ve identified that permissions are too broad
Developing - We’re working toward least privilege but haven’t fully implemented it
Established - Dynamic, context-aware permissions that grant only what’s needed for the current task

Are you protecting sensitive data from exposure through AI systems?

Not Started - Minimal data protection specific to AI
Initial - We’ve identified sensitive data concerns
Developing - Some masking or encryption in place
Established - Automated PII detection, masking, and encryption for all AI interactions

Have you implemented controls to detect and block malicious or inappropriate AI interactions?

Not Started - No specialized guardrails for AI
Initial - We’ve researched guardrail solutions
Developing - Basic input validation or output filtering in place
Established - Comprehensive, auditable guardrails scanning inputs and outputs in real time

4. Monitoring & Response

Are you actively monitoring AI agent behavior for anomalies?

Not Started - We review AI systems only when problems occur
Initial - Basic logging of AI activities
Developing - Some automated monitoring with manual review
Established - Real-time behavioral analytics with automated alerting

Do you monitor your AI models for drift, bias, or performance degradation?

Not Started - Models run until they obviously fail
Initial - Periodic manual checks of model performance
Developing - Regular model evaluation with documented metrics
Established - Automated drift detection with predefined thresholds and alerts

How frequently do you test AI systems for vulnerabilities?

Not Started - No AI-specific security testing
Initial - General security testing that touches some AI components
Developing - Annual penetration testing including AI systems
Established - Regular pen testing, red team exercises, and prompt injection simulations

Do you have an incident response plan specifically for AI security incidents?

Not Started - We’d figure it out if something happened
Initial - General incident response that could apply to AI
Developing - AI incidents are mentioned in our incident response plan
Established - Detailed, tested AI incident playbooks with clear escalation paths

5. Vendor & Supply Chain

Do you assess third-party AI providers for security and compliance?

Not Started - We trust vendor marketing materials
Initial - Basic vendor questionnaires
Developing - Security reviews for major vendors
Established - Comprehensive vendor risk assessments with SOC 2 reports and ongoing monitoring

Can you trace the components and dependencies of your AI systems?

Not Started - We don’t track AI supply chain components
Initial - We know the primary vendors we use
Developing - Documentation of major dependencies
Established - Complete SBOM (Software Bill of Materials) for AI systems with regular updates

Calculate My Score

Get Your Personalized Action Plan

Enter your email to receive a detailed PDF report with specific recommendations for your maturity stage, along with templates and resources to help you move forward.

Send My Report

We respect your privacy. Your email will only be used to send your assessment results and occasional AI insights.