AI Safety Infrastructure at THiNK

A structured, standards-grounded framework that helps developers and organisations build responsible, safe, and trustworthy AI systems, from conformity assessment through to LLM analytics.

Presented at THiNK Lab as part of THiNK's AI safety and standards engagement.


What Is CAP?

The Conformity Assessment Process is a structured evaluation tool developed through THiNK's implementation of KS 3007 and ISO/IEC 42001 for chatbots and conversational AI systems.

Conformity Assessment Process (CAP)

A guided pathway from technical review through conformity checks to certification readiness.

LLM Analytics

Model analytics and client analytics that show performance, usage trends, and interaction flows.

Built on Recognised Standards

Grounded in Kenya Standard KS 3007 and ISO/IEC 42001 for practical and accountable AI governance.

Governance Requirements

Data governance and quality, accountability mechanisms, and risk management practices.

Human Oversight

Transparency and explainability requirements with clear human oversight and control points.

Bot-of-Bots Environment

THiNKiT hosts multiple AI systems in a shared platform with monitoring and support tools.

The CAP Pathway

CAP progresses through three cumulative phases: Bot Review, Formal Conformity Assessment, and Certification. Each phase builds toward formal trust signals.

1. Bot Review & Testing: functional testing, safety principles checks, and internal design compliance.

2. Formal Conformity Assessment: documentation review, architecture assessment, and standards evaluation.

3. Certification: cumulative report and formal recognition pathway with KEBS collaboration.
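Because the phases are cumulative, a later phase is only reachable once every earlier one is complete. A minimal sketch of that ordering rule (the phase names come from the pathway above; the function and type names are illustrative, not part of THiNK's actual tooling):

```python
from enum import IntEnum


class CapPhase(IntEnum):
    """The three CAP phases, in their cumulative order."""
    BOT_REVIEW = 1
    CONFORMITY_ASSESSMENT = 2
    CERTIFICATION = 3


def next_phase(completed: set) -> "CapPhase | None":
    """Return the earliest phase not yet completed.

    Phases are cumulative: iteration follows definition order, so a
    later phase is never suggested while an earlier one is outstanding.
    Returns None once all three phases are done (certification-ready).
    """
    for phase in CapPhase:
        if phase not in completed:
            return phase
    return None
```

For example, a system that has finished Bot Review but nothing else would be directed to the Formal Conformity Assessment next.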

OWNERSHIP VERIFICATION
Foundational review for legal ownership, accountability, and safe development intent
  • System Ownership Evidence
  • Documentation Baseline
  • Responsible Development Intent
DATA EVALUATION
Assessing governance and architecture requirements for standards-grounded deployment
  • Architecture Assessment
  • Standards Evaluation
  • Governance Documentation Review
MODEL ASSESSMENT
Technical review and safety testing of chatbot behaviour and expected use cases
  • Functional Testing
  • Safety Principles Check
  • Internal Design Compliance
CERTIFICATION READINESS
Consolidating findings into recognised trust signals and certification pathways
  • Cumulative CAP Report
  • KEBS Recognition Pathway
  • Independent Trust Signal
LLM ANALYTICS & MONITORING
Ongoing visibility into performance, user behaviour, and bot management after deployment
  • Model Analytics
  • Client Interaction Analytics
  • Unified Bot Management
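The kind of per-bot roll-up such an analytics view might show (message counts and average latency) can be sketched as follows. All names here are hypothetical illustrations, not THiNK's actual analytics schema:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Interaction:
    """One logged client interaction (hypothetical minimal record)."""
    bot_id: str
    user_id: str
    latency_ms: float


def usage_summary(events):
    """Aggregate interactions into a per-bot summary.

    Produces, for each bot, the number of messages handled and the
    average response latency -- the sort of figures a model/client
    analytics dashboard would surface after deployment.
    """
    counts = Counter(e.bot_id for e in events)
    latency_totals: dict = {}
    for e in events:
        latency_totals[e.bot_id] = latency_totals.get(e.bot_id, 0.0) + e.latency_ms
    return {
        bot: {
            "messages": counts[bot],
            "avg_latency_ms": latency_totals[bot] / counts[bot],
        }
        for bot in counts
    }
```

A unified bot-management view would then render one row per bot from this summary.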
STANDARDS COMPLIANCE
Built on recognised standards and governance requirements

ISO/IEC 42001

Requirements for establishing and maintaining an AI Management System (AIMS).

Core Governance Principles

Transparency, accountability, risk management, and human oversight.

Beyond the Assessment Process

CAP is part of a wider AI safety ecosystem including maturity assessments, risk evaluations, and LLM analytics bundled in Bot in a Box.

  • AI Safety Playbook and AI Safety Addendum
  • AI Data Maturity Model and cybersecurity maturity checks
  • AI-specific risk assessments for model and deployment fit
  • Model analytics, client analytics, and unified bot management

Start Your CAP Assessment

Want to know more about our conformity assessment process? Tell us about your AI system.

This helps us understand your AI system and prepare for the assessment process.


THiNK Safety Assistant

THiNK Safety. Simplified.

Hello there, I’m the THiNK Safety Assistant. I can help you find safety processes, policies, and guidance. What do you need today?