
Journal

Learn Article News

11 Mar 2025

The AI Revolution: Turning Promise into Reality

The AI revolution is here, but in high-stakes sectors like security and policing, adoption depends on trust, rigorous testing, and assurance. Without clear evaluation frameworks, AI’s potential remains untapped—and the risks outweigh the rewards.

We explore how the UK’s AI Opportunities Action Plan lays the groundwork for progress, but also why testing and assurance must be the priority. From defining evaluation metrics to embedding standards into procurement, the key to unlocking AI’s potential is ensuring it works safely, ethically, and reliably.

AI Safety AI Robustness Language Models Trends and Insights Adversarial Attacks AI Ethics Police and Security AI Risk

Learn Article

28 Feb 2025

AI Bias: The Hidden Flaws Shaping Our Future

Our own Michaela Coetsee dives deep into the pressing issue of bias in AI systems. While AI has the potential to revolutionise industries, its unchecked development can perpetuate harmful societal biases.

From data to algorithm design, and even human processes, understanding and mitigating bias is crucial for ensuring AI serves humanity equitably.

As we continue to build and refine AI, it's vital that fairness, diversity, and inclusivity remain at the forefront of development. Read more on how we can shape the future of AI responsibly.

AI Safety AI Robustness Language Models Trends and Insights AI Ethics

Learn Article

03 Feb 2025

Apple’s AI News Debacle: How Assurance-Driven Evaluation Could Have Prevented It

A few weeks ago, Apple News made headlines for all the wrong reasons. Its AI summarisation tool generated inaccurate—and sometimes offensive—summaries of news articles. While some errors were laughable, others seriously damaged trust in the platform.

AI Safety AI Robustness Language Models Trends and Insights

Learn Article

03 Feb 2025

Aye Aye AI Podcast

Our very own Chris Jefferson and Matt Sutton were guests on the latest episode of the Aye Aye AI podcast!


AI Safety AI Robustness Language Models Trends and Insights

Learn Article

11 Sep 2024

A Look at Advai’s Assurance Techniques as Listed on CDEI

In the absence of standardisation, it is up to present-day adopters of #ArtificialIntelligence systems to do their best to select the most appropriate assurance methods themselves.

Here's an article about a few of our approaches, with some introductory commentary on the UK Government's drive to promote transparency across the #AISafety sector.

AI Safety AI Robustness Language Models Trends and Insights AI Assurance Adversarial Attacks AI Governance AI Ethics AI Compliance AI Risk Case Study

Learn Article

16 Jul 2024

Authentic is Overrated: Why AI Benefits from Synthetic Data.

When assuring AI systems, we look at a number of things: the model, the people, the supply chain, the data, and so on. In this article, we zoom in on one small aspect of this you might not have come across: #SyntheticData

We explain how 'fake' data can improve model accuracy, enhance robustness to real-world conditions, and strengthen adversarial resilience, and why it might be critical for the next step forward in #ArtificialIntelligence

AI Safety AI Robustness Language Models Trends and Insights

Learn Article

26 Jun 2024

Ant Inspiration in AI Safety: Our Collaboration with the University of York

What do ants have to do with AI Safety? Could the next breakthrough in AI Assurance come from the self-organising structures found in ecological systems?

UK Research and Innovation funded a Knowledge Transfer Partnership between Advai and the University of York.

This led to the hire of Matthew Lutz, AI Safety Researcher and Behavioural Ecologist.

In this blog, we explore Matt's journey from architecture, through the study of Collective Intelligence in army ant colonies, to joining us as our 'KTP Research Associate in Safe Assured AI Systems'.

AI Safety AI Robustness Adversarial Attacks Language Models Trends and Insights

AI Orchestra
Learn Article

09 Jan 2024

Welcome to the Era of AI 2.0

The paradigm has shifted: AI 2.0 is the amalgamation of intelligent language agents capable of collaboration, whose behaviour is guided by natural language rather than code.

'AI 2.0' is marked distinctly by the orchestration of LLM-based agents: AI language models capable of managing, directing and modulating other AI. This is not merely an incremental step. It’s a leap in artificial intelligence that redefines what is possible for both business and government.

AI Robustness AI Safety Trends and Insights AI Assurance AI Risk Language Models

Learn Article

05 Dec 2023

When Computers Beat Us at Our Own Game

You’ve probably seen the Q* rumours surrounding the OpenAI-Sam-Altman debacle. We can’t comment on the accuracy of these rumours, but we can provide some insight by interpreting Q* in the context of reinforcement learning.

It's fun, inspiring and daunting to consider that we may be approaching another one of 'those moments', where the world’s breath catches and we're forced to contemplate a world where computers beat us at our own game.

Language Models AI Robustness Adversarial Attacks AI Safety Trends and Insights

Learn Article

23 Nov 2023

The Achilles Heel of Modern Language Models

Maths can scare many non-technical business managers away. However, a brief look at the maths is a great reminder of quite how 'inhuman' artificial intelligence is, and how inhuman its mistakes can be.

Read our short post on why the inclusion of 'sight' makes language models like GPT-4 so vulnerable to adversarial attack and misalignment.

Language Models AI Robustness Computer Vision Adversarial Attacks AI Safety Trends and Insights

Article Learn

20 Nov 2023

Securing the Future of LLMs

Exploring generative AI for your business? Discover how Advai contributes to this domain by researching Large Language Model (LLM) alignment, to safeguard against misuse or mishaps, and prevent the unlocking of criminal instruction manuals!

AI Safety Adversarial Attacks AI Robustness AI Assurance Language Models Trends and Insights AI Risk

Article Learn

18 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part III.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of LLMs.

This is the third in a series of three articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

AI Safety AI Robustness Adversarial Attacks AI Assurance AI Governance AI Ethics Language Models Trends and Insights

Article Learn

11 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part II.

This is the second part of a series of articles geared towards non-technical business leaders. We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.
"Language models are designed to serve as general reasoning and text engines, making sense of the information they've been trained on and providing meaningful responses. However, it's essential to remember that they should be treated as engines and not stores of knowledge."

AI Safety AI Robustness AI Assurance Adversarial Attacks AI Governance AI Ethics Language Models Trends and Insights AI Risk

Article Learn

11 Oct 2023

Assurance through Adversarial Attacks

This blog explores adversarial techniques to explain their value in detecting hidden vulnerabilities. Adversarial methods offer insight into strengthening AI against potential threats, safeguarding its use in critical sectors and underpinning AI trustworthiness for end users.

Adversarial Attacks AI Robustness AI Assurance AI Safety Language Models Trends and Insights

Article Learn

04 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part I.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of LLMs.

This is the first in a series of three articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

Language Models Trends and Insights AI Safety AI Robustness Adversarial Attacks AI Assurance AI Governance AI Ethics AI Compliance

Article General

03 Oct 2023

Superintelligence alignment and AI Safety

OpenAI recently unveiled their 'Introducing Superalignment' initiative 📚, with the powerful statement: “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

We couldn’t agree more. As our Chief Researcher Damian Ruck says, “No one predicted Generative AI would take off quite as fast as it has. Things that didn’t seem possible even a few months ago are very much possible now.”

We’re biased, though; AI Safety and Alignment represents everything we believe in and have been working on for the last few years. The goal may be to prevent bias, ensure security, or maintain privacy, or to avoid a totally different and unforeseen consequence altogether.

How do we meet the challenge of steering AI systems? With AI.

“There’s no point if your guardrail development isn’t happening at the same speed as your AI development”. 

AI Safety AI Robustness AI Assurance Adversarial Attacks AI Compliance AI Ethics Language Models Trends and Insights

Contact

Join the Team

Address

20-22 Wenlock Road
London N1 7GU

Social

LinkedIn

Twitter

© 2025 Advai Limited.

Cookie Policy | Privacy Policy | Terms and Conditions