Can language be trusted?
Three active research programmes and a fourth in development: each is led by named researchers, and each produces evidence that readers working under regulatory oversight can inspect, reproduce, or challenge.
Multilingual data as evidence.
Two Knowledge Transfer Partnerships with Sheffield Hallam University, recognised among the top 50 KTPs in UK history. The first produced GAI Translate. The second develops agentic AI methods for multilingual dataset labelling in sensitive, regulated domains.
Domain beats scale.
Domain-specialised small and medium language models, trained on a human-verified data lake and deployed on secure Microsoft Azure private cloud. Our thesis: for regulated sectors, a small model trained on the right corpus outperforms a general-purpose LLM under audit.
Stress-test language before it ships.
Multi-persona agent panels that read translated or generated language the way a Japanese pharma regulator, a German finance counsel, or a Gulf compliance officer would — surfacing precision, tone, and compliance failures before the output reaches a real reader.
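A minimal sketch of how a multi-persona review panel might be structured. The persona names, the rule-based checks, and the `panel_review` function are all illustrative assumptions for this sketch, not Guildhawk's implementation: each persona applies its own failure checks to a draft and the panel aggregates the findings before release.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch: each persona holds a list of (label, predicate)
# checks, where the predicate returns True if the text fails that check.
@dataclass
class Persona:
    name: str
    checks: List[Tuple[str, Callable[[str], bool]]]

    def review(self, text: str) -> List[str]:
        # Report one finding per failed check, tagged with the persona name.
        return [f"{self.name}: {label}" for label, failed in self.checks if failed(text)]

def panel_review(text: str, personas: List[Persona]) -> List[str]:
    # Collect findings from every persona on the panel.
    findings: List[str] = []
    for persona in personas:
        findings.extend(persona.review(text))
    return findings

# Illustrative personas with toy rules (real panels would use far richer checks).
pharma = Persona("JP pharma regulator", [
    ("unhedged efficacy claim", lambda t: "guarantees" in t.lower()),
])
finance = Persona("DE finance counsel", [
    ("missing risk disclosure", lambda t: "risk" not in t.lower()),
])

draft = "This treatment guarantees results."
issues = panel_review(draft, [pharma, finance])
# Both personas flag this draft before it reaches a real reader.
```

The design point is that each persona fails independently, so a draft only ships when the whole panel returns no findings.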
A fourth programme is being formalised: agentic AI research, joint work with our internal AI team. Scope, named leads, first outputs, and a first evaluation set will be published later in 2026. Write to [email protected] to be notified when the programme goes live.