Bias in AI is a diversity problem. Here's how to solve it

Data scientists ‘train’ AI programs by feeding them huge sets of data. The program can then make judgements and predictions, and identify patterns, based on that data. The controversy about using AI to make big decisions, such as hiring or criminal justice, is that humans, who are inherently biased, create the algorithms that feed AI. Or the data being processed may itself reflect and amplify biases that play out in the real world. AI, therefore, can also be biased.

Take Amazon’s resume screening tool as one example. The AI tool was fed the most successful applicant resumes from the past decade. This seems valid, but over those ten years white male applicants had an unfair advantage because of recruiters’ biases. The AI tool reflected this: it selected resumes that were predominantly white and male, penalised resumes containing the word ‘women’s’, and downgraded applicants who went to all-women’s colleges. It perpetuated inequity.

With AI on track to be a tool used to make important decisions that shape our lives, our workplaces and our society, we need to get it right, and ensure it creates a level playing field rather than a technology-enabled perpetuation of existing human biases.

First, we must conduct regular AI audits, testing AI often to ensure the data and factors used in algorithmic decision-making are equitable. For example, COMPAS, a tool used to predict recidivism in Broward County, Florida, incorrectly labelled African-American defendants as “high-risk” at nearly twice the rate it mislabelled white defendants. An audit should have caught this. Or look to the resume-screening example above: if you’re trying to diversify your workforce, feeding your AI data from previous successful candidates, who have historically been white and male, won’t solve the problem. You need a different data set. That starts with ensuring the creators of AI are themselves diverse.
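An audit of the kind described above can begin with a simple statistical check. The sketch below, in plain Python with made-up illustrative records (not real COMPAS data), compares false-positive rates across demographic groups: the fraction of truly low-risk people each group had mislabelled “high-risk”, which is exactly the disparity reported in the COMPAS case.

```python
def false_positive_rate(records):
    """Fraction of people who did not reoffend but were labelled high-risk."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    false_positives = [r for r in negatives if r["predicted_high_risk"]]
    return len(false_positives) / len(negatives)

def audit_by_group(records, group_key="group"):
    """Return the false-positive rate for each demographic group."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: false_positive_rate(rs) for g, rs in groups.items()}

# Illustrative records only: each holds a group label, the model's
# prediction, and the actual outcome.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

rates = audit_by_group(records)
# Group A: 2 of 4 low-risk people mislabelled -> 0.5
# Group B: 1 of 4 -> 0.25, a 2x disparity of the kind an audit should flag
```

A real audit would of course use far more data and more than one fairness metric, but even a check this small would have surfaced the two-to-one mislabelling gap described above.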
These AI programs are affected not only by the biases found in the data but by the biases of their designers too. A study by New York University’s AI Now Institute found that 80% of AI professors are men. It also found that only 2.5% of Google’s full-time workers are Black and 3.6% are Latinx. Other recent studies found that only 18% of authors at leading AI conferences are women. This raises the question of who gets a seat at the table when designing these systems. Machines mirror their makers; it is our job to ensure the reflection is as fair as possible. That’s why equitable representation and diverse teams creating AI algorithms are key.

Ultimately, we need to combine ‘the human factor’ and technology to make the best decisions. AI alone will not create a more equitable world; we must mitigate bias both in humans and in the tech we create. AI needs bias management, just as companies and hiring managers do. AI has the potential to be less biased than humans (who are subjective), but it also has the potential to make these problems worse, and can even advance humanity’s darkest impulses. Just look at how China has used AI to track and control a Muslim minority.

The beauty of AI is that it can process volumes of data humans never could, allowing us to find patterns and pinpoint solutions that might otherwise be impossible. The beauty of humans is that, despite our inherent biases, we have morals, values, empathy and hopes for better workplaces and a better world. When AI processes the data, and humans then use that information as the final decision-makers on the issues that shape our lives, that is when humans and technology come together to create real, positive change.
