AI Safety Infrastructure at THiNK
A structured, standards-grounded framework that helps developers and organisations build responsible, safe, and trustworthy AI systems, from conformity assessment to LLM analytics.
Presented at THiNK Lab as part of THiNK's AI safety and standards engagement.

What Is CAP?
The Conformity Assessment Process is a structured evaluation tool developed through THiNK's implementation of KS 3007 and ISO/IEC 42001 for chatbots and conversational AI systems.
Conformity Assessment Process (CAP)
A guided pathway from technical review through conformity checks to certification readiness.
LLM Analytics
Model analytics and client analytics that show performance, usage trends, and interaction flows.
Built on Recognised Standards
Grounded in Kenya Standard KS 3007 and ISO/IEC 42001 for practical and accountable AI governance.
Governance Requirements
Data governance and quality, accountability mechanisms, and risk management practices.
Human Oversight
Transparency and explainability requirements with clear human oversight and control points.
Bot-of-Bots Environment
THiNKiT hosts multiple AI systems in a shared platform with monitoring and support tools.
The CAP Pathway
CAP progresses through three cumulative phases: Bot Review, Formal Conformity Assessment, and Certification. Each phase builds toward formal trust signals.
Bot Review & Testing
Functional testing, safety principles checks, and internal design compliance.
Formal Conformity Assessment
Documentation review, architecture assessment, and standards evaluation.
Certification
Cumulative report and formal recognition pathway with KEBS collaboration.
Entry Requirements
- System Ownership Evidence
- Documentation Baseline
- Responsible Development Intent

Bot Review & Testing
- Functional Testing
- Safety Principles Check
- Internal Design Compliance

Formal Conformity Assessment
- Architecture Assessment
- Standards Evaluation
- Governance Documentation Review

Certification
- Cumulative CAP Report
- KEBS Recognition Pathway
- Independent Trust Signal

LLM Analytics
- Model Analytics
- Client Interaction Analytics
- Unified Bot Management

ISO/IEC 42001
Requirements for establishing and maintaining an AI Management System (AIMS).
Core Governance Principles
Transparency, accountability, risk management, and human oversight.
Beyond the Assessment Process
CAP is part of a wider AI safety ecosystem including maturity assessments, risk evaluations, and LLM analytics bundled in Bot in a Box.
- AI Safety Playbook and AI Safety Addendum
- AI Data Maturity Model and cybersecurity maturity checks
- AI-specific risk assessments for model and deployment fit
- Model analytics, client analytics, and unified bot management
Start Your CAP Assessment
Want to know more about our conformity assessment process? Tell us about your AI system.
