
Journal


11 Mar 2025

The AI Revolution: Turning Promise into Reality

The AI revolution is here, but in high-stakes sectors like security and policing, adoption depends on trust, rigorous testing, and assurance. Without clear evaluation frameworks, AI’s potential remains untapped—and the risks outweigh the rewards.

We explore how the UK’s AI Opportunities Action Plan lays the groundwork for progress, but also why testing and assurance must be the priority. From defining evaluation metrics to embedding standards into procurement, the key to unlocking AI’s potential is ensuring it works safely, ethically, and reliably.

AI Safety, AI Robustness, Language Models, Trends and Insights, Adversarial Attacks, AI Ethics, Police and Security, AI Risk


11 Sep 2024

A Look at Advai’s Assurance Techniques as Listed on CDEI

In the absence of standardisation, it is up to present-day adopters of #ArtificialIntelligence systems to select the most appropriate assurance methods themselves.

Here's an article about a few of our approaches, with some introductory commentary on the UK Government's drive to promote transparency across the #AISafety sector.

AI Safety, AI Robustness, Language Models, Trends and Insights, AI Assurance, Adversarial Attacks, AI Governance, AI Ethics, AI Compliance, AI Risk, Case Study


22 Feb 2024

The Unwitting AI Ethicist

If you're curious about the types of ethical decisions AI engineers face, this article is for you. TL;DR: AI engineers should take on some ethical responsibilities; others should be left to society. Read on to find out more.

AI Robustness, AI Safety, Trends and Insights, AI Assurance, AI Risk, AI Ethics


09 Jan 2024

Welcome to the Era of AI 2.0

The paradigm has shifted: AI 2.0 is the amalgamation of intelligent language agents capable of collaboration, whose behaviour is guided by natural language rather than code.

'AI 2.0' is marked distinctly by the orchestration of LLM-based agents: AI language models capable of managing, directing and modulating other AI. This is not merely an incremental step. It's a leap in artificial intelligence that redefines what is possible for both business and government.

AI Robustness, AI Safety, Trends and Insights, AI Assurance, AI Risk, Language Models


12 Dec 2023

The AI Act-ually Happening

Some strengths, some weaknesses and three key implications for businesses seeking to adopt artificial intelligence, now that the EU has finalised the AI Act.

Let the regulatory-driven transformation commence.

AI Robustness, AI Safety, Trends and Insights, AI Regulation, AI Governance, AI Assurance, AI Risk


20 Nov 2023

Securing the Future of LLMs

Exploring generative AI for your business? Discover how Advai contributes to this domain by researching Large Language Model (LLM) alignment, to safeguard against misuse or mishaps, and prevent the unlocking of criminal instruction manuals!

AI Safety, Adversarial Attacks, AI Robustness, AI Assurance, Language Models, Trends and Insights, AI Risk


11 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part II.

This is the second part of a series of articles geared towards non-technical business leaders. We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.


"Language models are designed to serve as general reasoning and text engines, making sense of the information they've been trained on and providing meaningful responses. However, it's essential to remember that they should be treated as engines and not stores of knowledge."

AI Safety, AI Robustness, AI Assurance, Adversarial Attacks, AI Governance, AI Ethics, Language Models, Trends and Insights, AI Risk


15 Mar 2023

Assuring Computer Vision in the Security Industry

Advai assessed an AI's performance, security, and robustness in object detection, identifying imbalances in data and model vulnerabilities to adversarial attacks. Recommendations included training data augmentation, edge case handling, and securing the AI's physical container.

Computer Vision, AI Governance, AI Assurance, Adversarial Attacks, AI Robustness, Case Study, AI Risk


17 Dec 2020

Machine Learning: Automated Dev Ops and Threat Detection

Getting to grips with MLOps and ML threats.

AI Safety, AI Robustness, Adversarial Attacks, AI Assurance, AI Risk, Trends and Insights, AI Governance
