Auditing Model Bias

Machine learning models used for automated decision-making should not perpetuate the past and present biases of our society. We examine the data-analysis, training, and evaluation steps of your projects to identify pitfalls that could lead to hidden biases in your models.

The Challenge

AI already assists with, and sometimes makes, life-changing business and government decisions in tasks such as loan applications and recruiting. As the models behind these systems grow more complex, the rationale for their predictions becomes harder to trace, hiding potential bias beneath an increasing number of calculations.

The Solution

There is no off-the-shelf solution to bias. For this reason, we separately analyse every step of a machine learning pipeline, from data analysis to training and evaluation, and devise an audit approach tailored to your project. It is important to identify bias as early as possible, before the model is trained, as prejudice and confirmation bias can lead you to base your models on deficient data or to evaluate them with bias-reinforcing metrics.

As machine learning models cannot distinguish between meaningful trends in the data and unethical or unintended biases, data scientists need to analyse learned strategies and test whether protected variables such as gender and nationality are determining predictions. To this end, we equip your pipeline with cutting-edge techniques such as tests for counterfactual fairness.
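As a minimal sketch of what a counterfactual test looks like in practice: train a model, flip only the protected attribute for each individual, and measure how often the prediction changes. The synthetic loan data, feature layout, and threshold below are illustrative assumptions, not your pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic loan data: column 0 is a protected attribute (e.g. gender,
# encoded 0/1); columns 1-2 stand in for income and credit history.
X = rng.normal(size=(1000, 3))
X[:, 0] = rng.integers(0, 2, size=1000)
# The outcome deliberately leaks the protected attribute, so the
# fitted model should fail a counterfactual-fairness check.
y = ((X[:, 1] + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Counterfactual test: flip only the protected attribute, hold
# everything else fixed, and compare the two sets of predictions.
X_cf = X.copy()
X_cf[:, 0] = 1 - X_cf[:, 0]
flip_rate = np.mean(model.predict(X) != model.predict(X_cf))
print(f"Predictions changed for {flip_rate:.1%} of applicants")
```

A flip rate near zero is evidence that the protected attribute is not driving predictions; here it will be clearly non-zero, because the synthetic outcome depends on that attribute by construction.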

Fortunately, the identification of bias does not mean that you have to build a new model from the ground up. We conclude the investigation with a report that describes the types of biases recognised and recommends appropriate steps to address them.



A systematic analysis of the quality of the data used to train and test the machine learning model. We build an interactive dashboard where you can examine features that require your attention.


Augmentation of your existing machine learning codebase with scripts that test whether your learned models exhibit bias.


An assessment report that analyses biases identified in the pipeline and recommends steps to address them.
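To illustrate the kind of test script involved, the sketch below computes a disparate-impact ratio (the "four-fifths rule" heuristic) between two groups of applicants. The function name and the hypothetical predictions are our own illustrative assumptions, not part of any specific toolkit.

```python
import numpy as np

def disparate_impact_ratio(y_pred, protected):
    """Ratio of positive-prediction rates between two groups.

    `protected` is a 0/1 array marking group membership; values of the
    ratio below roughly 0.8 are commonly flagged (the four-fifths rule).
    """
    rate_a = y_pred[protected == 0].mean()
    rate_b = y_pred[protected == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical binary predictions for ten applicants in two groups:
# group 0 is approved at a rate of 0.8, group 1 at a rate of 0.2.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
protected = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(disparate_impact_ratio(y_pred, protected))  # → 0.25
```

A ratio of 0.25 is well below the 0.8 heuristic threshold, so a script like this would flag the model for further investigation.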

The Outcome

A systematic and transparent approach to identifying and mitigating bias.


Start a conversation

Take the first step by speaking with one of our data experts today.