

AI Champions Frontier AI Phase 1: Secure AI for National Security and Defence is designed for UK SMEs building frontier AI capability in areas such as assured multi-source fusion; edge autonomy and robust decision systems; low-SWaP inference; quantum-AI and semiconductor-AI for novel underwater sensors; secure and robust AI; signal processing; acoustic AI; and digital electronic warfare. Phase 1 remains a single-applicant SME feasibility competition, with total project costs of £150,000 to £250,000, a duration of 3 to 6 months, and no collaborators or subcontractors.
There is a clear strategic need behind this theme. The Defence Artificial Intelligence Strategy states the UK's ambition to be the world's most effective, efficient, trusted and influential defence organisation for its size. The National Cyber Security Centre now judges that AI will almost certainly make cyber intrusion operations more effective and efficient, and that keeping pace with frontier AI capabilities will be critical to cyber resilience through 2027 and beyond.

What this theme is really looking for
This theme is not looking for generic cyber tooling, dashboard products, or standard analytics wrapped in defence language. It is looking for secure, high-performance AI methods that can improve sensing, command, control, or decision support in demanding environments.
The three official priority groups are:

The best applications in this theme are usually very clear about the operational bottleneck they are addressing.
For example, a strong project might focus on:
The competition is still Phase 1, so the goal is not field deployment. The goal is to prove the critical AI component works in principle, using a clear validation methodology and quantified success criteria. Innovate UK explicitly allows synthetic and simulated data for validation, which is especially relevant where operational datasets are constrained.
One common failure is confusing sector with novelty. A bid does not become frontier AI just because the end market is defence.
Another is relying too heavily on integration work. If the real technical lift is systems engineering rather than innovation in models, training, learning, or control, the application will look weak against the scope. Innovate UK is explicit that funding is reserved for projects where AI is the core technical contribution and the main source of operational advantage.
A third failure is weak validation logic. In secure AI settings, "we will test it later with users" is not enough. You need to state which performance, robustness, latency, false-positive-rate, resilience or interpretability thresholds you are trying to hit in Phase 1.
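One way to make that concrete in a proposal is to write the success criteria down as explicit, machine-checkable thresholds before any validation run. The sketch below is purely illustrative: the metric names and threshold values are assumptions invented for this example, not figures from the competition brief.

```python
# Hypothetical sketch: predefined Phase 1 success criteria expressed as
# explicit thresholds, so validation results are judged against numbers
# fixed in advance rather than assessed informally afterwards.
# All metric names and values below are illustrative assumptions.

CRITERIA = {
    "detection_recall":    {"min": 0.90},   # must be at least this
    "false_positive_rate": {"max": 0.05},   # must be at most this
    "p99_latency_ms":      {"max": 50.0},   # edge-inference latency budget
}

def evaluate(measured: dict) -> dict:
    """Return pass/fail per metric against the predefined criteria."""
    results = {}
    for name, bound in CRITERIA.items():
        value = measured[name]
        ok = True
        if "min" in bound:
            ok = ok and value >= bound["min"]
        if "max" in bound:
            ok = ok and value <= bound["max"]
        results[name] = ok
    return results

# Example: metrics from a (synthetic-data) validation run
measured = {
    "detection_recall": 0.93,
    "false_positive_rate": 0.04,
    "p99_latency_ms": 61.2,
}
print(evaluate(measured))
# → {'detection_recall': True, 'false_positive_rate': True, 'p99_latency_ms': False}
```

The point is not the code itself but the discipline it encodes: each criterion is named, bounded, and testable, which is exactly the evidence structure a Phase 1 white paper needs.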
A fourth failure is underestimating the security context. The AI Security Institute says frontier AI testing is already focused on areas including cyber security, biology and chemistry assistance, autonomous behaviour, and safeguards. The NCSC also warns that the growing incorporation of AI into the UK technology base, especially into critical systems, creates more attack surface for adversaries. Secure AI cannot be an afterthought.

A strong Theme 3 Phase 1 plan usually contains:
That matters because Innovate UK expects the project to finish with a technical white paper, evidence against predefined success criteria, and a clear Phase 2 readiness plan.

FAQs