
Journal

Learn Article

11 Sep 2024

A Look at Advai’s Assurance Techniques as Listed on CDEI

In the absence of standardisation, it is up to present-day adopters of #ArtificialIntelligence systems to select the most appropriate assurance methods themselves.

Here's an article about a few of our approaches, with some introductory commentary about the UK Government's drive to promote transparency across the #AISafety sector.

AI Safety AI Robustness Language Models Trends and Insights AI Assurance Adversarial Attacks AI Governance AI Ethics AI Compliance AI Risk Case Study

Learn Article

13 Mar 2024

Fit for Duty Artificial Intelligence

How do we come to trust security and police personnel? Do we deploy fresh recruits straight into complex field operations?

No, we don't.

Organic intelligences are put through established cognitive exams, physical tests and medicals. They are then monitored and periodically re-evaluated.

We should also ensure AI systems are Fit for Duty before deploying them. Read the full article below.

AI Robustness AI Safety Trends and Insights AI Assurance Computer Vision Police and Security

Learn Article

22 Feb 2024

The Unwitting AI Ethicist

If you're curious about the types of ethical decisions AI engineers face, this article is for you. TL;DR: AI engineers should take on some ethical responsibilities; others should be left to society. Read on to find out more...

AI Robustness AI Safety Trends and Insights AI Assurance AI Risk AI Ethics

Learn Article

09 Jan 2024

Welcome to the Era of AI 2.0

The paradigm has shifted: AI 2.0 is the amalgamation of intelligent language agents capable of collaboration, whose behaviour is guided by natural language, rather than code. 

'AI 2.0' is marked distinctly by the orchestration of LLM-based agents: AI language models capable of managing, directing and modulating other AI. This is not merely an incremental step. It's a leap in artificial intelligence that redefines what is possible for both business and government.

AI Robustness AI Safety Trends and Insights AI Assurance AI Risk Language Models

Learn Article

12 Dec 2023

The AI Act-ually Happening

Some strengths, some weaknesses, and three key implications for businesses seeking to adopt artificial intelligence, now that the EU has finalised the AI Act.

Let the regulatory-driven transformation commence.

AI Robustness AI Safety Trends and Insights AI Regulation AI Governance AI Assurance AI Risk

Learn Article

20 Nov 2023

Securing the Future of LLMs

Exploring generative AI for your business? Discover how Advai contributes to this domain by researching Large Language Model (LLM) alignment, to safeguard against misuse or mishaps, and prevent the unlocking of criminal instruction manuals!

AI Safety Adversarial Attacks AI Robustness AI Assurance Language Models Trends and Insights AI Risk

News Article

19 Oct 2023

Press Release: Frontier AI Taskforce

Press release announcing our partnership with the UK government's Frontier AI Taskforce.

AI Safety AI Robustness Adversarial Attacks AI Assurance AI Governance AI Compliance Trends and Insights AI Regulation

Learn Article

18 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part III.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of LLMs.

This is the third and final article in a series geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

AI Safety AI Robustness Adversarial Attacks AI Assurance AI Governance AI Ethics Language Models Trends and Insights

Learn Article

11 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part II.

This is the second part of a series of articles geared towards non-technical business leaders. We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.


"Language models are designed to serve as general reasoning and text engines, making sense of the information they've been trained on and providing meaningful responses. However, it's essential to remember that they should be treated as engines and not stores of knowledge."

AI Safety AI Robustness AI Assurance Adversarial Attacks AI Governance AI Ethics Language Models Trends and Insights AI Risk

Learn Article

11 Oct 2023

Assurance through Adversarial Attacks

This blog explores adversarial techniques to explain their value in detecting hidden vulnerabilities. Adversarial methods offer insight into strengthening AI against potential threats, safeguarding its use in critical sectors and underpinning AI trustworthiness for end users.

Adversarial Attacks AI Robustness AI Assurance AI Safety Language Models Trends and Insights

Learn Article

04 Oct 2023

In-between memory and thought: How to wield Large Language Models. Part I.

With so much attention on Large Language Models (LLMs), many organisations are wondering how to take advantage of LLMs.

This is the first in a series of three articles geared towards non-technical business leaders.

We aim to shed light on some of the inner workings of LLMs and point out a few interesting quirks along the way.

Language Models Trends and Insights AI Safety AI Robustness Adversarial Attacks AI Assurance AI Governance AI Ethics AI Compliance

General Article

03 Oct 2023

Superintelligence alignment and AI Safety

OpenAI recently unveiled their 'Introducing Superalignment' initiative, with the powerful statement: “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

We couldn’t agree more. As our Chief Researcher, Damian Ruck, says: “No one predicted Generative AI would take off quite as fast as it has. Things that didn’t seem possible even a few months ago are very much possible now.”

We’re biased, though; AI Safety and Alignment represent everything we believe in and have been working on for the last few years. The benefit may be preventing bias, ensuring security or maintaining privacy; or it may be avoiding a totally different and unforeseen consequence altogether.

How do we meet the challenge of steering AI systems? With AI.

“There’s no point if your guardrail development isn’t happening at the same speed as your AI development.”

AI Safety AI Robustness AI Assurance Adversarial Attacks AI Compliance AI Ethics Language Models Trends and Insights

News Article

02 Sep 2023

Autonomous Military Systems Must Operate as Commanded

To protect our way of life, the primary purpose of military innovation is to achieve and maintain operational advantage over our adversaries.

In the modern age, this practically translates to advanced operational systems and hyperintelligent software to manage these systems in an increasingly autonomous way.

But we must be able to trust these systems...

Defence Trends and Insights AI Robustness Adversarial Attacks AI Assurance AI Governance AI Compliance AI Ethics Computer Vision

Learn Article

31 Aug 2023

Risk Framework for AI Systems

In 2023, AI has become a pivotal business tool, posing both opportunities and risks. Understanding AI, its regulatory landscape, and integrating it into risk management frameworks are essential. This involves staying informed about global regulations, recognising AI-specific threats, and adopting a structured approach to risk management. Stress testing AI systems is crucial for assessing performance and reliability. Businesses must continually adapt, leveraging risk assessments and monitoring to safely harness AI's potential.

AI Safety AI Robustness Adversarial Attacks AI Governance AI Assurance Trends and Insights

General Article

07 Jul 2023

Biased Age Estimation Algorithms

Biased age estimation is a great example of algorithmic #discrimination. Such #AI algorithms are therefore unfit for use. Right?

Well, their use is threatening to happen anyway. With multiple US federal bills and the UK’s Online Safety Bill looking to legislate online age verification, improving the robustness of these systems is becoming increasingly urgent.

AI Governance AI Assurance AI Regulation AI Compliance AI Ethics Trends and Insights AI Safety AI Robustness

Learn Article

15 Mar 2023

Assuring Computer Vision in the Security Industry

Advai assessed an AI's performance, security, and robustness in object detection, identifying imbalances in data and model vulnerabilities to adversarial attacks. Recommendations included training data augmentation, edge case handling, and securing the AI's physical container.

Computer Vision AI Governance AI Assurance Adversarial Attacks AI Robustness Case Study AI Risk

Learn Article

25 Feb 2023

What is Responsible AI?

Welcome to the 'What is...?' series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of Responsible AI. What is it? Where is it used? Why is it important?

AI Safety AI Robustness AI Assurance AI Ethics

Learn Article

18 Feb 2023

What is Robust AI?

Welcome to the 'What is...?' series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of Robust AI. What is it? Where is it used? Why is it important?

AI Robustness AI Assurance Trends and Insights

General Article

05 May 2021

AI: Unknown Decision Space is holding us back

Decision Space sits at the core of AI and helps explain issues such as bias and why AI performs poorly in the wild compared to the lab.  It also helps shed light on recent trends towards slower adoption of AI.  Here, we run through the basics.

AI Safety AI Robustness AI Assurance AI Governance Trends and Insights Adversarial Attacks

General Article

21 Apr 2021

Beyond Jeopardy: How Can We Trust Medical AI?

In 2011, just two days after Watson beat two human champions at Jeopardy!, IBM announced that their brilliant Artificial Intelligence (AI) would be turning its considerable brainpower towards transforming medicine. Stand aside, Sherlock: Dr. Watson is on the case.

AI Safety AI Assurance AI Ethics Trends and Insights

Learn Article

17 Dec 2020

Machine Learning: Automated DevOps and Threat Detection

Getting to grips with MLOps and ML threats.

AI Safety AI Robustness Adversarial Attacks AI Assurance AI Risk Trends and Insights AI Governance

© 2025 Advai Limited.