Project Overview
Validaitor began as a research project and is now a start-up incubated by TECO. Its purpose is to develop a comprehensive system for auditing and certifying AI systems. The platform supports trustworthy, safe, robust, and compliant AI operation in organisations.
Our Goal
Provide tools and workflows to audit and certify AI systems against regulatory, ethical, security, and quality criteria. Enable organisations to adopt AI while maintaining compliance with evolving regulation (such as the EU AI Act) and best-practice standards for trustworthiness. Bridge the gap between AI development (performance, innovation) and operational governance (safety, robustness, accountability).
Highlights
Originated in TECO's research environment and spun out as a start-up. Offers an all-in-one platform for AI testing, compliance management, and risk management, bridging governance and technical assurance. Designed to integrate into organisations' existing processes rather than impose entirely new workflows, making AI governance more natural and attainable.
Impact
Helps organisations mitigate the risks associated with AI systems, especially in sensitive sectors and critical infrastructure, by enabling auditing, certification, and governance. Supports compliance with regulatory frameworks (e.g., the EU AI Act) and contributes to the operationalisation of trustworthy AI practices. By commercialising research outcomes from TECO, Validaitor exemplifies the translation of academic innovation into practical, industry-ready tools for responsible AI.