What is Responsible AI?

Feb 25, 2023

Responsible AI is the practice and process an organisation uses to develop and deploy AI in a measured way, accounting for the ethical, security, legal and cultural challenges that may result from the failure of AI. For a definition of AI, please see the “What is … AI?” post.

Responsible AI covers the application of a governance framework to each stage of the ML lifecycle. Key concerns it aims to address include:

  1. Bias

  2. Fairness

  3. Explainability

  4. Interpretability

  5. Auditability

  6. Security
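To make concerns like bias and fairness actionable, a governance framework typically pairs them with quantitative checks. As an illustrative sketch (not from the post), one common measure is the demographic parity difference: the gap in positive-prediction rates between two groups. The function name, data, and threshold interpretation below are assumptions for illustration.

```python
def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rate between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, aligned with predictions
    """
    def positive_rate(group):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        return sum(preds) / len(preds)

    return abs(positive_rate(group_a) - positive_rate(group_b))


# Hypothetical example: a model approves 3/4 applicants in group "a"
# but only 1/4 in group "b".
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups, "a", "b"))  # 0.5
```

A large gap like this would flag the model for review at the relevant lifecycle stage; what counts as "large" is a policy decision the governance framework has to set.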