What can we do within our control to make a difference?

Systemic inequality is as present as ever. Whether in the US or around the world, some of society’s underlying biases are more visible now and feature prominently in the news.

Other biases are still not as visible. Whether in policing and sentencing, recruitment, or financial products, people’s lives are affected by automated decisions on a daily basis. The algorithms that drive these decisions are often biased against certain demographic groups. The data that feeds these algorithms reflects society’s past biased decisions, and it can easily teach the algorithms the biases it encodes. If you are a woman, you may get a lower credit limit than your husband even if you have the same financial situation; if you are African-American, your predicted risk of re-offending may be higher than a white American’s, even if you have committed fewer and less serious crimes.
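
To make the mechanism concrete, here is a minimal sketch, using entirely synthetic data and hypothetical group labels, of how a model trained on historically biased approval decisions can reproduce that bias for two applicants with identical finances. It is an illustration of the general pattern, not a description of any real lender’s system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: both groups have identical income distributions.
group = rng.integers(0, 2, size=n)      # 0 = group A, 1 = group B (hypothetical)
income = rng.normal(0.0, 1.0, size=n)   # standardised income, same for both groups

# Historical approval labels encode past bias: group B was approved
# less often at the same income level.
logit = income - 1.5 * group
approved = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical finances get different approval scores,
# purely because of the group membership learned from the biased labels.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

The point is that nothing in the model is explicitly "racist" or "sexist"; it simply fits the historical record, and the historical record is biased.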

Given the current developments in the US, the question many of us ask ourselves, beyond articulating the problem, is what we can do and what actions we can take to drive change. Algorithmic bias is just one kind of bias. It might not be as visible, but it is pervasive and deeply hurtful. For the past year and a half, at ETIQ, we have been building solutions to help companies and institutions tackle algorithmic bias across different aspects of their activities, primarily finance and recruitment. We have been working hard to take positive action on issues of race and gender inequity, leading from a place of empathy and economic empowerment, and creating change through ACTION, not just words.

We believe that if technology such as machine learning and AI is used correctly, it can provide a fairer process and more equal opportunities for everyone. Algorithms can ultimately help us humans be more objective: they can mitigate errors in human judgement and lead to more impartial decisions. But, like humans, they need to be checked thoroughly for errors of their own.
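
As one small illustration of what such a check can look like, here is a sketch of demographic parity, one common fairness metric: the gap in positive-outcome rates between two groups. The names and numbers are illustrative, not ETIQ’s actual method, and in practice a single metric is only a starting point for investigation:

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups.

    A value near 0 suggests the model treats the groups similarly on
    this one metric; a large gap is a signal to investigate further.
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative usage with hypothetical model outputs:
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = approved
grp   = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute
print(f"Demographic parity gap: {demographic_parity_gap(preds, grp):.2f}")
# Group A is approved 75% of the time, group B 25%: a gap of 0.50.
```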