Responsible AI is the practice and process an organisation uses to develop and deploy AI in a measured way, accounting for the ethical, security, legal and cultural challenges that can arise when AI fails. For a definition of AI, please see the “What is … AI?” post.
Responsible AI covers applying a governance framework to each stage of the ML lifecycle. Key concerns it aims to address include:
Bias
Fairness
Explainability
Interpretability
Auditability
Security
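To make one of these concerns concrete, fairness is often checked with simple quantitative metrics. The sketch below computes the demographic parity difference, the largest gap in positive-prediction rate between groups; it is an illustrative example only, and the group labels and sample data are assumptions, not taken from any particular governance framework.

```python
# Illustrative sketch of one fairness metric: demographic parity difference.
# The group labels and sample predictions below are made up for the example.

def demographic_parity_difference(groups, predictions):
    """Largest gap in positive-prediction rate between any two groups."""
    counts = {}  # group -> (total, positives)
    for g, p in zip(groups, predictions):
        total, positives = counts.get(g, (0, 0))
        counts[g] = (total + 1, positives + (1 if p == 1 else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

groups = ["a", "a", "b", "b", "b", "a"]
preds  = [1,   0,   1,   1,   1,   1]
gap = demographic_parity_difference(groups, preds)
# group a is predicted positive 2/3 of the time, group b 3/3,
# so the gap is 1/3 — a value of 0 would indicate parity.
```

A governance process would typically set a threshold on such a metric and require review or remediation when a model exceeds it.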
