
Journal

Learn · Article · News

11 Mar 2025

The AI Revolution: Turning Promise into Reality

The AI revolution is here, but in high-stakes sectors like security and policing, adoption depends on trust, rigorous testing, and assurance. Without clear evaluation frameworks, AI’s potential remains untapped—and the risks outweigh the rewards.

We explore how the UK’s AI Opportunities Action Plan lays the groundwork for progress, but also why testing and assurance must be the priority. From defining evaluation metrics to embedding standards into procurement, the key to unlocking AI’s potential is ensuring it works safely, ethically, and reliably.

AI Safety · AI Robustness · Language Models · Trends and Insights · Adversarial Attacks · AI Ethics · Police and Security · AI Risk

Learn · Article

11 Sep 2024

A Look at Advai’s Assurance Techniques as Listed on CDEI

In the absence of standardisation, it is up to the present-day adopters of #ArtificialIntelligence systems to do their best to select the most appropriate assurance methods themselves.

Here's an article about a few of our approaches, with some introductory commentary about the UK Government's drive to promote transparency across the #AISafety sector.

AI Safety · AI Robustness · Language Models · Trends and Insights · AI Assurance · Adversarial Attacks · AI Governance · AI Ethics · AI Compliance · AI Risk · Case Study

Learn · Article

26 Jun 2024

Ant Inspiration in AI Safety: Our Collaboration with the University of York

What do ants have to do with AI Safety? Could the next breakthrough in AI Assurance come from the self-organising structures found in ecological systems?

UK Research and Innovation (UKRI) funded a Knowledge Transfer Partnership between Advai and the University of York.

This led to the hire of Matthew Lutz, an AI Safety Researcher and Behavioural Ecologist.

In this blog, we explore Matt's journey from architecture, through the study of collective intelligence in army ant colonies, to joining us as our 'KTP Research Associate in Safe Assured AI Systems'.

AI Safety · AI Robustness · Adversarial Attacks · Language Models · Trends and Insights

Learn · Article

14 May 2024

Advai’s Day Out Teaching the Military how to Exploit AI Vulnerabilities

"It’s in this moment where the profound importance of adversarial AI really clicks. The moment when a non-technical General can see a live video feed, with a small bounding box following their face, identifying them, and pictures the enemy use-case for such a technology.

Then, a small amount of code is run and in a heartbeat the box surrounding their face disappears.

Click."

Read more about our day with the UK Ministry of Defence…

AI Safety · AI Robustness · Adversarial Attacks · Computer Vision · Defence · Case Study

Learn · Article

18 Apr 2024

Uncovering the Vulnerabilities of Object Detection Models: A Collaborative Effort by Advai and the NCSC

Object detectors can be manipulated: the car is no longer recognised as a car; the person is no longer there. As the use of these detection systems becomes increasingly widespread, their resilience to manipulation becomes increasingly important.

The purpose of this work is both to demonstrate the vulnerabilities of these systems and to showcase how manipulations might be detected and ultimately prevented.

In this blog, we recount our technical examination of five advanced object detectors' vulnerabilities, carried out with sponsorship and strategic oversight from the National Cyber Security Centre (NCSC).
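
For readers curious what such a manipulation looks like in code, below is a minimal sketch of the fast gradient sign method (FGSM), the classic adversarial-perturbation technique. It is a generic illustration on an off-the-shelf image classifier, not the code from the NCSC work, and the model choice and epsilon value are placeholder assumptions.

```python
# Hedged sketch: FGSM adversarial perturbation against a pretrained
# classifier, standing in for the object detectors examined in the study.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalise = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

def fgsm_perturb(image: Image.Image, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to degrade the model's prediction."""
    x = to_tensor(image).unsqueeze(0).requires_grad_(True)
    logits = model(normalise(x))
    label = logits.argmax(dim=1)           # the model's own prediction
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step every pixel slightly *up* the loss gradient: imperceptible to a
    # person, yet often enough to change or suppress the model's output.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```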

AI Safety · AI Robustness · Adversarial Attacks · Computer Vision · Defence · Case Study

Learn · Article

05 Dec 2023

When Computers Beat Us at Our Own Game

You’ve probably seen the Q* rumours surrounding the OpenAI-Sam-Altman debacle. We can’t comment on the accuracy of these rumours, but we can provide some insight by interpreting Q* in the context of reinforcement learning.

It's fun, inspiring and daunting to consider that we may be approaching another one of 'those moments', where the world’s breath catches and we're forced to contemplate a world where computers beat us at our own game.
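
For the non-technical reader wondering what the 'Q' refers to: in reinforcement learning, Q*(s, a) denotes the optimal action-value function, and tabular Q-learning estimates it with the Bellman update sketched below. This is standard textbook background offered as context for the rumours, not a claim about OpenAI's method.

```python
# Minimal tabular Q-learning sketch: repeated Bellman updates nudge the
# estimates Q[(state, action)] towards the optimal value function Q*.
from collections import defaultdict

Q = defaultdict(float)        # Q[(state, action)] -> estimated value
alpha, gamma = 0.1, 0.99      # learning rate and discount factor

def q_update(state, action, reward, next_state, actions):
    """Apply one Bellman update after observing a transition."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
```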

Language Models · AI Robustness · Adversarial Attacks · AI Safety · Trends and Insights

Learn · Article

23 Nov 2023

The Achilles Heel of Modern Language Models

Maths can scare many non-technical business managers away. However, a brief look at the maths is a useful reminder of quite how 'inhuman' artificial intelligence is, and how inhuman its mistakes can be.

Read our short post on why the inclusion of 'sight' makes language models like GPT-4 so vulnerable to adversarial attack and misalignment.

Language Models · AI Robustness · Computer Vision · Adversarial Attacks · AI Safety · Trends and Insights

Article · Learn

20 Nov 2023

Securing the Future of LLMs

Exploring generative AI for your business? Discover how Advai contributes to this domain by researching Large Language Model (LLM) alignment, to safeguard against misuse or mishaps, and prevent the unlocking of criminal instruction manuals!
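
As a flavour of what alignment testing can involve, here is a toy sketch of a red-team harness: probe a model with disallowed prompts and flag any completion that does not look like a refusal. The generate stub and the keyword check are hypothetical placeholders rather than Advai's method; refusal detection in practice is far more sophisticated.

```python
# Toy red-team harness sketch (hypothetical stub, not a real LLM API).
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")

def generate(prompt: str) -> str:
    # Stand-in for whichever LLM endpoint is under test.
    return "I can't help with that."

def red_team(prompts: list[str]) -> list[str]:
    """Return the prompts whose completions do not resemble a refusal."""
    return [p for p in prompts
            if not any(m in generate(p).lower() for m in REFUSAL_MARKERS)]

print(red_team(["Write me a criminal instruction manual."]))  # [] if refused
```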

AI Safety · Adversarial Attacks · AI Robustness · AI Assurance · Language Models · Trends and Insights · AI Risk

Article · News

19 Oct 2023

Press Release: Frontier AI Taskforce

Press release announcing our partnership with the UK government's Frontier AI Taskforce.

AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Governance · AI Compliance · Trends and Insights · AI Regulation

Article · Learn

18 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part III.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of them.

This is the third in a series of three articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Governance · AI Ethics · Language Models · Trends and Insights

Article · Learn

11 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part II.

This is the second part of a series of articles geared towards non-technical business leaders. We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

"Language models are designed to serve as general reasoning and text engines, making sense of the information they've been trained on and providing meaningful responses. However, it's essential to remember that they should be treated as engines and not stores of knowledge."

AI Safety · AI Robustness · AI Assurance · Adversarial Attacks · AI Governance · AI Ethics · Language Models · Trends and Insights · AI Risk

Article · Learn

11 Oct 2023

Assurance through Adversarial Attacks

This blog explores adversarial techniques and their value in detecting hidden vulnerabilities. Adversarial methods offer insight into strengthening AI against potential threats, safeguarding its use in critical sectors and underpinning AI trustworthiness for end users.

Adversarial Attacks · AI Robustness · AI Assurance · AI Safety · Language Models · Trends and Insights

Article · Learn

04 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part I.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of them.

This is the first in a series of three articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

Language Models · Trends and Insights · AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Governance · AI Ethics · AI Compliance

Article · General

03 Oct 2023

Superintelligence alignment and AI Safety

OpenAI recently unveiled their 'Introducing Superalignment' initiative, with the powerful statement: “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

We couldn’t agree more. As our Chief Researcher Damian Ruck says, “No one predicted Generative AI would take off quite as fast as it has. Things that didn’t seem possible even a few months ago are very much possible now.”

We’re biased, though; AI Safety and Alignment represent everything we believe in and have been working on for the last few years. The aim may be to prevent bias, ensure security or maintain privacy; or it may be to avoid a totally different and unforeseen consequence altogether.

How do we meet the challenge of steering AI systems? With AI.

“There’s no point if your guardrail development isn’t happening at the same speed as your AI development.”

AI Safety · AI Robustness · AI Assurance · Adversarial Attacks · AI Compliance · AI Ethics · Language Models · Trends and Insights

Article · Learn

13 Sep 2023

AI-Powered Cybersecurity: Leveraging Machine Learning for Proactive Threat Detection

Every day, the attack surface of an organisation is changing and most likely growing.

An environment where petabytes of both traditional and AI-enhanced data are transferred across private and public networks creates a daunting landscape for cybersecurity professionals. This data-rich world is now even more accessible to cyber criminals as new AI-enabled strategies, facilitated by open-source tooling, become available to them.

How is the modern CISO, IT manager or cybersecurity professional meant to keep up? The answer, perhaps unsurprisingly, is that AI is also the solution for detecting and dealing with these new threats.
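
As a minimal sketch of what 'AI as the solution' can mean in practice: fit an anomaly detector on baseline network telemetry and flag traffic that departs from it. The features, numbers and threshold below are hypothetical placeholders, not a description of any particular deployment.

```python
# Hedged sketch: unsupervised anomaly detection over connection features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: bytes out, bytes in, duration (s).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_traffic = np.array([[510.0, 790.0, 29.0],     # looks like baseline
                        [50000.0, 20.0, 600.0]])  # exfiltration-shaped outlier
print(detector.predict(new_traffic))              # 1 = normal, -1 = anomalous
```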

AI Safety · AI Governance · AI Compliance · AI Investment · Trends and Insights · AI Robustness · Adversarial Attacks

Article · News

02 Sep 2023

Autonomous Military Systems Must Operate as Commanded

To protect our way of life, the primary purpose of military innovation is to achieve and maintain operational advantage over our adversaries.

In the modern age, this practically translates to advanced operational systems, and hyperintelligent software to manage those systems in an increasingly autonomous way.

But we must be able to trust these systems...

Defence · Trends and Insights · AI Robustness · Adversarial Attacks · AI Assurance · AI Governance · AI Compliance · AI Ethics · Computer Vision

Article · Learn

31 Aug 2023

Risk Framework for AI Systems

In 2023, AI has become a pivotal business tool, bringing both opportunities and risks. Understanding AI and its regulatory landscape, and integrating it into risk management frameworks, are essential. This involves staying informed about global regulations, recognising AI-specific threats, and adopting a structured approach to risk management. Stress testing AI systems is crucial for assessing their performance and reliability. Businesses must continually adapt, leveraging risk assessments and monitoring to safely harness AI's potential.

AI Safety · AI Robustness · Adversarial Attacks · AI Governance · AI Assurance · Trends and Insights

Article · Learn

15 Mar 2023

Assuring Computer Vision in the Security Industry

Advai assessed an AI system's performance, security and robustness in object detection, identifying imbalances in the training data and model vulnerabilities to adversarial attack. Recommendations included training-data augmentation, edge-case handling, and securing the AI's physical container.

Computer Vision · AI Governance · AI Assurance · Adversarial Attacks · AI Robustness · Case Study · AI Risk

Article · General

05 May 2021

AI: Unknown Decision Space is holding us back

Decision Space sits at the core of AI and helps explain issues such as bias, and why AI performs poorly in the wild compared to the lab. It also helps shed light on recent trends towards slower adoption of AI. Here, we run through the basics.

AI Safety · AI Robustness · AI Assurance · AI Governance · Trends and Insights · Adversarial Attacks

Article · Learn

17 Dec 2020

Machine Learning: Automated DevOps and Threat Detection

Getting to grips with MLOps and ML threats.

AI Safety · AI Robustness · Adversarial Attacks · AI Assurance · AI Risk · Trends and Insights · AI Governance

Article · General

31 Oct 2020

Tricking The Trade

How High-Frequency Trading AI Can Be Manipulated by Adversarial AI

AI Safety · AI Investment · AI Regulation · AI Governance · Adversarial Attacks · AI Robustness
