ARIA Safeguarded AI

Fund Name: ARIA Safeguarded AI
Project Value: £200k
Deadline: 28 May 2024

As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters, but only if it is deployed wisely. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because such approaches are considered impossible or impractical.

By combining scientific world models and mathematical proofs, ARIA aims to construct a ‘gatekeeper’: an AI system tasked with understanding and reducing the risks of other AI agents.

Fund details

This programme is split into three technical areas (TAs), each with its own distinct solicitations. The first solicitation within TA1, TA1.1 Theory, is open for applications now. ARIA is looking for R&D Creators: individuals and teams that it will fund and support to research and construct computationally practicable mathematical representations and formal semantics to support world models, specifications about state-trajectories, neural systems, proofs that neural outputs validate specifications, and “version control” (incremental updates, or “patches”) of all of the above.

Applicants who are shortlisted following the full proposal review will be invited to meet with the Programme Director to discuss any critical questions or concerns prior to final selection. Both successful and unsuccessful applicants will be notified on 10 July 2024.

TA1 – Scaffolding 

Building an extensible, interoperable language and platform to maintain real-world models and specifications, and to check proof certificates.

TA2 – Machine Learning

Using frontier AI to help domain experts build best-in-class mathematical models of complex real-world dynamics, and leveraging frontier AI to train autonomous systems.

TA3 – Applications

Unlocking significant economic value with quantitative safety guarantees by deploying a gatekeeper-safeguarded autonomous AI system in a critical cyber-physical operating context.
