AI Security Maturity Assessment
AI Security Maturity Index · SMB Edition
Self-Assessment Tool · v2.0

How Mature Is Your AI Security Program?

19 controls across 5 dimensions. Takes about 10 minutes. Get a scored maturity profile with prioritized recommendations built for SMBs.

Governance & Policy
25% weight · 4 controls
Data & Model Security
25% weight · 4 controls
⚙️
Operational Controls
20% weight · 4 controls
⚖️
Ethical & Responsible AI
15% weight · 3 controls
Human Readiness
15% weight · 3 controls
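The weighting above implies a simple scoring model: each control is rated 1–5, a dimension's score is the average of its control ratings, and the overall score is the weighted average of the dimension scores. A minimal sketch of that arithmetic (the structure and example ratings are illustrative assumptions, not the tool's actual implementation):

```python
# Hypothetical sketch of the weighted scoring model described above.
# Each control is rated 1-5; a dimension's score is the mean of its
# control ratings; the overall score weights dimensions 25/25/20/15/15.

DIMENSIONS = {
    "Governance & Policy":      {"weight": 0.25, "controls": 4},
    "Data & Model Security":    {"weight": 0.25, "controls": 4},
    "Operational Controls":     {"weight": 0.20, "controls": 4},
    "Ethical & Responsible AI": {"weight": 0.15, "controls": 3},
    "Human Readiness":          {"weight": 0.15, "controls": 3},
}

def overall_score(ratings: dict[str, list[int]]) -> float:
    """ratings maps a dimension name to its list of 1-5 control ratings."""
    total = 0.0
    for name, spec in DIMENSIONS.items():
        scores = ratings[name]
        assert len(scores) == spec["controls"], f"wrong count for {name}"
        total += spec["weight"] * (sum(scores) / len(scores))
    return round(total, 2)

# Illustrative ratings for a mostly "Defined"-level organization.
example = {
    "Governance & Policy":      [3, 3, 2, 3],
    "Data & Model Security":    [2, 3, 3, 2],
    "Operational Controls":     [3, 2, 3, 3],
    "Ethical & Responsible AI": [2, 3, 3],
    "Human Readiness":          [3, 3, 2],
}
print(overall_score(example))  # roughly 2.7 on the 5.0 scale
```

Because the weights sum to 1.0, the result stays on the same 1–5 scale as the individual ratings.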

Governance & Policy

WEIGHT: 25% · 4 CONTROLS
G-01
AI Ownership & Accountability
1 – Ad Hoc: No one is responsible for AI use decisions; tools are adopted without oversight.
2 – Reactive: A person or team steps in only after a problem surfaces.
3 – Defined: A named owner (person or committee) has documented AI responsibilities.
4 – Managed: AI ownership is integrated with IT, legal, and business operations with defined escalation paths.
5 – Optimized: AI governance effectiveness is measured, reviewed, and updated regularly.
G-02
Acceptable Use Policy for AI
1 – Ad Hoc: No policy exists; employees use AI tools however they choose.
2 – Reactive: Informal guidance is drafted after a misuse incident or compliance question.
3 – Defined: A written AUP for AI is published and all staff are required to acknowledge it.
4 – Managed: Policy is reinforced through training and enforced via technical controls (e.g., approved tool lists, DLP).
5 – Optimized: Policy is reviewed at least annually and updated based on incidents, new tools, and compliance changes.
G-03
AI Use Case Approval Process
1 – Ad Hoc: Any employee can adopt any AI tool with no review or approval.
2 – Reactive: High-visibility AI tools get informal approval; most others do not.
3 – Defined: A documented intake process exists for evaluating and approving AI tools before use.
4 – Managed: Risk tiering determines the depth of review; higher-risk tools require security and legal sign-off.
5 – Optimized: Approval thresholds are refined based on outcomes; metrics track the health of the approval pipeline.
G-04
Regulatory & Legal Awareness
1 – Ad Hoc: No awareness of how AI intersects with GDPR, CCPA, HIPAA, or industry-specific obligations.
2 – Reactive: Legal or compliance is consulted only after a concern is raised.
3 – Defined: Privacy and legal review is required before deploying AI systems that handle customer or employee data.
4 – Managed: A compliance mapping is maintained and actively used when evaluating new AI tools.
5 – Optimized: The organization proactively monitors AI regulations and updates controls ahead of enforcement deadlines.

Data & Model Security

WEIGHT: 25% · 4 CONTROLS
D-01
Data Classification for AI Workflows
1 – Ad Hoc: Sensitive business or customer data is entered into AI tools without any rules or restrictions.
2 – Reactive: Staff receive informal warnings about sensitive data, but there is no enforcement mechanism.
3 – Defined: Data classification policies explicitly cover what can and cannot be used in AI prompts and outputs.
4 – Managed: Technical controls restrict sensitive data (PII, financial records, IP) to approved AI paths only.
5 – Optimized: Automated classification tools enforce data handling rules across AI workflows in near real time.
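The higher D-01 levels describe automated enforcement of data handling rules. A minimal sketch of the idea: a pre-send check that flags prompts containing obvious sensitive patterns. Real deployments use dedicated DLP tooling; the patterns and function name here are illustrative assumptions, not an exhaustive or production-grade detector.

```python
import re

# Hypothetical pre-send screen for AI prompts (D-01 "Optimized" behavior).
# Patterns are illustrative, not exhaustive; real DLP tools go far beyond regex.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return labels of sensitive patterns found in the prompt."""
    return [label for label, rx in PATTERNS.items() if rx.search(prompt)]

hits = screen_prompt("Customer SSN is 123-45-6789, please summarize")
if hits:
    print("BLOCKED:", ", ".join(hits))  # prompt never reaches the AI tool
```

In practice a check like this would sit in a proxy or browser extension on the approved AI path, so the rule is enforced rather than merely documented.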
D-02
Prompt & Output Data Protection
1 – Ad Hoc: AI prompts and model outputs are not retained, logged, or protected; no plan exists.
2 – Reactive: Logging and retention are inconsistent; practices vary by team or tool.
3 – Defined: Prompts and outputs are logged, retained per policy, and protected in storage.
4 – Managed: Monitoring and retention practices are aligned to security and compliance requirements and reviewed periodically.
5 – Optimized: Logs are actively analyzed for anomalies and data-leakage trends, and findings are used to reduce risk over time.
D-03
AI Tool Vetting & Approved Sources
1 – Ad Hoc: Any model or AI tool is used without vetting; browser extensions, free tiers, and unverified SaaS products are all treated as equally acceptable.
2 – Reactive: Staff tend toward "known" tools, but there is no formal review or approval list.
3 – Defined: An approved AI tool/model list exists with baseline security and privacy vetting criteria.
4 – Managed: Formal model risk assessment is performed; data handling, provenance, and training practices are documented for approved tools.
5 – Optimized: Approved tools undergo ongoing evaluation; the list has a defined lifecycle and deprecation process.
D-04
Vendor & Third-Party AI Risk
1 – Ad Hoc: The organization does not know which of its vendors use AI or what data they feed into it.
2 – Reactive: AI-related questions are added to vendor conversations only after an incident or concern surfaces.
3 – Defined: AI risk questions are included in the standard vendor security assessment process.
4 – Managed: Contracts include AI-specific data handling provisions; vendor AI use is periodically reviewed.
5 – Optimized: Continuous monitoring of vendor AI risk; changes in vendor AI practices trigger a review cycle.
⚙️

Operational Controls

WEIGHT: 20% · 4 CONTROLS
O-01
AI Tool & Asset Inventory
1 – Ad Hoc: No inventory exists; the organization has little idea what AI tools are in use or by whom.
2 – Reactive: A partial or informal list exists but is not maintained or verified.
3 – Defined: A central register of AI tools and systems is maintained and periodically reviewed.
4 – Managed: The AI inventory is linked to the asset register and risk management processes.
5 – Optimized: Discovery is at least partially automated; the inventory updates continuously and triggers alerts for unvetted tools.
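The "Optimized" level above boils down to reconciling what discovery finds against the approved register and alerting on the difference. A minimal sketch of that reconciliation; the tool names and the idea of a traffic-derived discovery set are illustrative assumptions:

```python
# Hypothetical sketch of O-01 "Optimized": compare automatically
# discovered AI tools against the approved register and flag anything
# unvetted. Tool names and data sources are illustrative only.

APPROVED = {"ChatGPT Enterprise", "GitHub Copilot", "Azure OpenAI"}

def flag_unvetted(discovered: set[str]) -> set[str]:
    """Return discovered tools that are absent from the approved register."""
    return discovered - APPROVED

# e.g., tools observed in network traffic or SaaS discovery logs
seen_in_traffic = {"ChatGPT Enterprise", "RandomSummarizer.ai", "GitHub Copilot"}
for tool in sorted(flag_unvetted(seen_in_traffic)):
    print(f"ALERT: unvetted AI tool in use: {tool}")
```

Feeding these alerts into the existing asset register and risk process is what distinguishes this level from a standalone list.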
O-02
AI Monitoring & Logging
1 – Ad Hoc: No monitoring of AI usage; no logs collected or reviewed.
2 – Reactive: Logs exist within individual tools but are rarely reviewed proactively.
3 – Defined: A monitoring policy is defined, with alerts for anomalous AI usage patterns.
4 – Managed: AI-related alerts are integrated into existing security monitoring or managed services workflows.
5 – Optimized: Behavioral analytics surface AI-related threats and misuse; monitoring is continuously tuned.
O-03
Incident Response for AI Events
1 – Ad Hoc: AI-related incidents are handled on an ad hoc basis with no defined process.
2 – Reactive: AI scenarios are discussed informally but not documented in any IR plan or runbook.
3 – Defined: AI incident scenarios are documented in IR plans and playbooks with clear response steps.
4 – Managed: AI-specific tabletop exercises are conducted; roles and responsibilities are tested and validated.
5 – Optimized: Lessons learned from AI incidents feed directly into updated controls, policies, and training.
O-04
Change Management for AI Systems
1 – Ad Hoc: AI tools and configurations are changed without any change control process.
2 – Reactive: Changes are tracked informally or after the fact in some teams, but not consistently.
3 – Defined: Formal change management applies to AI systems; changes require review before implementation.
4 – Managed: Testing and approval gates are required before production changes to AI-integrated systems are released.
5 – Optimized: Change impact is measured post-deployment; outcomes inform future change policies.
⚖️

Ethical & Responsible AI

WEIGHT: 15% · 3 CONTROLS
E-01
Bias & Fairness Assessment
1 – Ad Hoc: No consideration of bias in AI outputs that affect customers, employees, or business decisions.
2 – Reactive: Bias concerns are discussed informally or addressed only when a complaint surfaces.
3 – Defined: Documented bias and fairness assessments are required for AI systems used in customer-facing or HR decisions.
4 – Managed: Mitigation steps are implemented and verified; results are tracked over time.
5 – Optimized: Ongoing monitoring detects model drift or bias changes; the feedback loop is automated.
E-02
Human Review & Override Controls
1 – Ad Hoc: AI outputs are acted on directly with no human review, whether for hiring, pricing, communications, or other key decisions.
2 – Reactive: Human review happens inconsistently, depending on the individual.
3 – Defined: Human-in-the-loop review is required for a defined set of high-stakes decision types; roles are documented.
4 – Managed: Override and escalation paths are defined, documented, and used in practice.
5 – Optimized: Review effectiveness is measured; thresholds for automated vs. human-reviewed decisions are refined over time.
E-03
Transparency & Customer Disclosure
1 – Ad Hoc: Customers and employees have no idea AI is being used in decisions or communications that affect them.
2 – Reactive: Disclosure happens in some areas but is inconsistent and reactive.
3 – Defined: Clear internal transparency standards exist; basic explanations of AI use are available where required.
4 – Managed: External disclosures are made where legally required or contractually expected; communication standards are documented.
5 – Optimized: Transparency practices are regularly assessed and improved using feedback and stakeholder input.

Human Readiness

WEIGHT: 15% · 3 CONTROLS
H-01
AI Training & Awareness
1 – Ad Hoc: No AI-specific training exists; staff learn by trial and error or from YouTube.
2 – Reactive: Optional or informal guidance is available but not systematically delivered or tracked.
3 – Defined: Required AI security and responsible use training is delivered to all staff with completion tracking.
4 – Managed: Training is role-differentiated: basic awareness for all, advanced modules for developers, admins, and decision-makers.
5 – Optimized: Training effectiveness is measured (e.g., quiz scores, incident rates); content is updated at least annually.
H-02
AI Skills & Capability Development
1 – Ad Hoc: AI knowledge is concentrated in one or two individuals; no structured capability development.
2 – Reactive: Informal knowledge sharing happens within teams or through Slack/email; no structure.
3 – Defined: An AI champion, community of practice, or designated go-to resource is established.
4 – Managed: Skills development is tied to roles and business strategy; learning paths exist for key positions.
5 – Optimized: A continuous capability program is in place with measurement, benchmarking, and regular refresh cycles.
H-03
Feedback & Continuous Improvement
1 – Ad Hoc: No formal channel for staff to report AI issues, near-misses, or concerns.
2 – Reactive: Feedback happens informally through conversations or chat; nothing is tracked.
3 – Defined: A formal reporting channel exists for AI concerns, with a triage and response process.
4 – Managed: Feedback is collected, tracked, and demonstrably drives improvements to policy, training, or controls.
5 – Optimized: Feedback metrics are reviewed regularly; the improvement cycle is measured and reported to leadership.
✓ Assessment Complete

Your personalized score, dimension breakdown, insights, and improvement roadmap are ready.

Your AI Maturity Profile

Overall score out of 5.0

Dimension Breakdown
Control-Level Heatmap
Key Insights & Recommendations
Improvement Roadmap


Contact Us

Learn more about what Appalachia Technologies can do for your business.

Appalachia Technologies
5000 Ritter Road Suite 104
Mechanicsburg, Pennsylvania 17055