
Towards a more ethical and equitable AI industry

Contemplate for a moment the ramifications of a facial recognition system that can’t recognise you, or identifies you as someone else, because of your skin colour. Or of an algorithm that makes decisions about your healthcare, education, or job prospects, or even predicts the likelihood that you might commit a crime based on the past activities of people who live in your postcode.

Regulatory and law enforcement agencies increasingly rely on artificial intelligence (AI) systems to automate repetitive work and help with making difficult decisions.

The more we as a society grow to rely on AI – and especially if we allow it to make decisions that affect people’s lives – the more we must be vigilant that the AI systems we create don’t amplify stereotypes or perpetuate racism, sexism, and other biases.

[Image: silhouette of a head with glowing lines connected in nodes]
The more we rely on AI systems, the more vital it is to consider how they affect everyone. Image: AI generated

AI – male, pale, and stale?

I recently watched this Brookings Institution Center for Technology Innovation panel discussion, Black women in AI: Building a more inclusive and equitable future. Bravo! A poignant and articulate debate about AI equity.

The panel covered a range of topics including inclusive AI design, the impact on vulnerable populations, and the need for AI to include leaders who understand the lived experiences of affected consumers. It was hosted by Nicol Turner Lee, a Brookings Senior Fellow and the Director of the Center for Technology Innovation.

The difference between this panel and most panels about AI should be obvious from the picture below. It’s well known that the IT industry is predominantly male, white, and young to middle-aged – and the world of AI is even more so. As a result, AI developers may not recognise the biases in their data or in their models’ results, because it may not even occur to them to consider perspectives beyond their own socio-economic experience. It was a refreshing change to see four Black women discussing AI.

[Image: at the top, a panel of Black women discussing AI; below, a selection of AI panels comprised almost entirely of white men]
Spot the difference: the Brookings panel (top) compared with a sample of AI panels from YouTube (below).

A broader AI ecosystem

This discussion highlighted the pressing need for us to develop a sustainable, non-biased AI ecosystem that will transform and enrich our lives while protecting us from harm.

To do this, we need more diverse voices to foster dialogue that pushes the envelope on ethical vigilance in an inclusive AI future.

We must not let the high-stakes decisions be made exclusively by technologists, particularly if they tend to have the same demographic profile.

We need social and racial diversity and multidisciplinary perspectives from philosophers, psychologists, sociologists, lawyers, social workers, health care professionals, policy makers, and others. These voices need to come from global communities, outside Silicon Valley, because AI will impact all of us.

We know we need more fairness in predictions powered by AI. This means improved data quality from more reliable and representative data sets. However, we need to obtain inclusive training data without increasing surveillance trauma. Is enhanced anonymisation technology part of the solution?
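
What might a representativeness check look like in practice? Here is a minimal, hypothetical sketch – the groups, counts, benchmark shares, and tolerance are all illustrative assumptions, not real figures – that compares the demographic mix of a training set against benchmark population shares and flags under-represented groups:

```python
# Illustrative sketch: flag demographic groups that are under-represented
# in a training set relative to benchmark (e.g. census-style) proportions.
# All figures below are hypothetical.
from collections import Counter

def representation_gaps(samples, benchmark, tolerance=0.05):
    """Return groups whose share of `samples` falls more than
    `tolerance` below their benchmark population share."""
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmark.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical training-set group labels and population benchmarks.
training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
benchmark_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, (obs, exp) in representation_gaps(training_groups, benchmark_shares).items():
    print(f"Group {group}: {obs:.0%} of training data vs {exp:.0%} of population")
```

A check like this only surfaces the gap; closing it responsibly, without intrusive data collection, is the harder problem the panel raised.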

Building diversity into design

We need robust impact analysis and a diversity mindset at the design stage of AI algorithms and applications.

Bias evaluation should not be an afterthought at the end of the innovation lifecycle. Enlightened software engineers understand privacy by design and security by design. Now it’s time to double down on ethics by design as both a mindset and a methodology.

Panellist Mutale Nkonde, CEO of AI for the People, proposed a harm reduction approach for assessing AI products that would include:

  • Deciding what we want the algorithm to do
  • Deciding what our red lines are (e.g., it should not be used as a weapon, it should not be allowed to make decisions without human review)
  • Rigorously testing and evaluating to ensure the algorithm is not discriminatory or otherwise problematic (one such test is sketched after this list)
  • Finally, when we see the results, deciding whether this is something we actually want.
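
To make the testing step concrete, here is one minimal, hypothetical example of such a test: a demographic parity check that compares favourable-decision rates across groups. The decisions, group labels, and the 0.8 threshold (borrowed from the common “four-fifths” rule of thumb) are illustrative assumptions; real evaluations combine several complementary fairness metrics.

```python
# Illustrative demographic parity check: compare the rate of favourable
# decisions across groups. All decisions and group labels are hypothetical.

def parity_ratio(decisions, groups, favourable=1):
    """Ratio of the lowest group's favourable-decision rate to the
    highest group's rate (1.0 = perfect parity)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(1 for d in outcomes if d == favourable) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model decisions (1 = favourable) for two groups.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio, rates = parity_ratio(decisions, groups)
print(f"Rates by group: {rates}, parity ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Red line crossed: decision rates diverge too far between groups")
```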

EDT applies AI to a range of uses, such as machine learning technology that predicts which items in a data set are responsive and which are not. Our data scientists are acutely aware of our responsibility to consider ethical risks and take these issues into account when designing and developing our AI solutions. It’s worth noting that in this case, while our AI models make predictions, a human being still makes the final decisions.
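
To illustrate that human-in-the-loop principle – this is a generic sketch, not a description of EDT’s actual system, and the threshold and field names are hypothetical – predictions can be treated as suggestions only, with every item signed off by a person and low-confidence items prioritised for review:

```python
# Illustrative human-in-the-loop routing: the model predicts, but a person
# makes the final call, and low-confidence items are escalated first.
# The threshold and field names are hypothetical.
AUTO_SUGGEST_THRESHOLD = 0.90

def route_item(item_id, label, confidence):
    """Return the model's suggestion plus flags forcing human review."""
    return {
        "item": item_id,
        "suggested_label": label,   # a suggestion only; never final
        "confidence": confidence,
        "human_review": True,       # a person always signs off
        "priority_review": confidence < AUTO_SUGGEST_THRESHOLD,
    }

# Hypothetical predictions from a responsiveness classifier.
predictions = [("doc-001", "responsive", 0.97),
               ("doc-002", "not responsive", 0.62)]

for item_id, label, conf in predictions:
    print(route_item(item_id, label, conf))
```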

AI equity is not about broader access to AI products that only serve to entrench bias and systemic harms. It’s about awareness and discipline up front, at the embryonic stage of product creation.

There are important socio-economic decisions to be made. We need tenacity and creativity if we are to get on the right side of AI history. And this is particularly critical in the realm of criminal justice, where lives and liberty are at stake.