Can we trust the algorithms that increasingly run our lives? What happens when those algorithms are used incorrectly, and how much damage can they cause?

Cathy O’Neil’s Weapons of Math Destruction (2016) investigates these questions and issues a warning about the misuse of AI. O’Neil argues that the algorithms we trust to make decision-making fair and efficient can instead encode discrimination, amplify inequality, and damage lives at an industrial scale. Models, the core building blocks of machine learning and AI, are in her words “opinions embedded in mathematics”.

In Australia, we saw a nationwide example of poorly implemented algorithms. In the Robodebt scandal, Centrelink incomes were incorrectly calculated, raising debts against people who had followed the rules and causing several deaths and a huge amount of anguish, all due to a trivially basic algorithmic error. The algorithm averaged annual income evenly across the year rather than using actual fortnightly earnings, and this misunderstanding of the law resulted in many false debt claims.
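To make the failure mode concrete, here is a minimal sketch in Python. The payment rates, thresholds, and earnings below are invented for illustration and do not reflect the actual Centrelink rules; the point is only how smearing an annual total across 26 fortnights can raise a “debt” against someone who reported their income correctly.

```python
# A minimal sketch (with made-up figures) of how annual income averaging can
# manufacture a debt that never existed. A person works only half the year,
# earning $2,000 per fortnight, then earns $0 while receiving a hypothetical
# $600/fortnight payment that tapers off above an income threshold.

FORTNIGHTS = 26
actual_income = [2000] * 13 + [0] * 13          # what the person really earned
annual_total = sum(actual_income)               # what an annual tax record shows

# Hypothetical means test: full payment up to $300/fortnight of income,
# then reduced by 50 cents per dollar earned above that.
def payment(fortnightly_income, base=600, free_area=300, taper=0.5):
    return max(0, base - taper * max(0, fortnightly_income - free_area))

# Entitlement under the rules: assessed on actual fortnightly income.
correct = sum(payment(x) for x in actual_income)

# Averaging-style assessment: smear the annual total evenly across the year.
averaged = annual_total / FORTNIGHTS
flawed = sum(payment(averaged) for _ in range(FORTNIGHTS))

print(f"Correct entitlement:    ${correct:,.2f}")
print(f"Averaged 'entitlement': ${flawed:,.2f}")
print(f"Phantom debt raised:    ${correct - flawed:,.2f}")
```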

A core theme running throughout the book is that algorithms are not inherently “unbiased, fair, and objective”. Because algorithms are trained on real-world data, they can, if care is not taken, simply amplify problems rather than solve them. While the examples given are US-based, they apply in any jurisdiction. Australia has strong anti-discrimination rules, and blindly applying AI to “solve” a problem may lead to legal trouble down the line if those algorithms turn out to be unfair and no care was taken to investigate them.

Examples of harm

A “Weapon of Math Destruction” is defined in the book through three characteristics:

  1. It is opaque. Those affected by the algorithm (or AI) cannot easily understand or challenge the calculation.
  2. It operates at scale, affecting a large number of people.
  3. It causes profound damage.

An example from the book focuses on loan applications. Applicants from lower socioeconomic backgrounds were either denied loans or granted them at massively increased interest rates and fees. On closer inspection, the model had codified existing racial prejudice, leading to disproportionate and unfair targeting of people of colour. The algorithm identified patterns in the past, codified them, and gave a false sense of objectivity.

This was identified as a “Weapon of Math Destruction”. First, it was opaque. Loan applicants were simply told “computer says no”, with no clear reasoning behind how the computer made that decision; it is possible the bank did not understand the decisions either. It operated at scale, across the company’s whole customer base. And it caused profound damage, either denying customers a loan or increasing their payments through no fault of their own.

Other examples include: rejecting job applicants because of their names (a strong racial indicator), based on past decisions; credit scores based on postcode, another proxy for race; and teacher evaluation based on faulty logic. In the final example, teachers were evaluated on how much their class improved each year and faced termination if they ranked in the bottom few per cent. The problem? After one teacher scored 6 out of 100 one year and 96 out of 100 the next, an investigation showed that an evaluation based on just 25-30 students is nowhere near statistically sound, producing huge variance in scores despite no change in teaching methods. Every classroom is different, and ignoring basic statistics here can lead to people unfairly losing their jobs.
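The statistical problem is easy to demonstrate. The simulation below is a sketch with invented numbers (not data from the book): every teacher has exactly the same true effect, yet with classes of only 25-30 students the measured percentile ranking swings wildly from year to year on noise alone.

```python
import random

# Simulation with invented numbers: every teacher has the same true effect
# (zero), and each student's measured "improvement" is pure noise. With classes
# of only 25-30 students, the class average still varies a lot year to year,
# so a percentile ranking built on it is largely luck.
random.seed(1)

NUM_TEACHERS = 100
CLASS_SIZE = 28
NOISE_SD = 15          # spread of individual students' score changes

def measured_class_gain():
    """Average score change for one class when the true teacher effect is zero."""
    return sum(random.gauss(0, NOISE_SD) for _ in range(CLASS_SIZE)) / CLASS_SIZE

def percentile_of_first_teacher():
    """Rank teacher 0 against 99 statistically identical peers for one year."""
    gains = [measured_class_gain() for _ in range(NUM_TEACHERS)]
    return sum(g < gains[0] for g in gains)   # 0 = bottom, 99 = top

year1 = percentile_of_first_teacher()
year2 = percentile_of_first_teacher()
print(f"Same teacher, same teaching: percentile {year1} one year, {year2} the next")
```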

O’Neil builds a detailed and persuasive case for her claims about the abuse of algorithms, particularly their use to launder subjective opinion behind a veneer of algorithmic objectivity. Those algorithms need data, and without extreme care they will simply codify whatever that data contains, including any poor decisions, biases, or discrimination within it.

What can you do in your organisation?

The key to reducing harm in your organisation is to dive deeper. There is no single evaluation metric that will answer questions like “Is this fair?” for you. However, the same questions that let you evaluate fairness in algorithms also support proper evaluation more generally. These include looking beyond basic metrics: accuracy alone is easily gamed, and more detailed evaluation criteria exist. These include the F1 score for overall and per-class analysis; cohort analysis to reduce the risk of discrimination; and explainability techniques that help identify which attributes led to a decision, rather than blindly accepting the algorithm’s answer.
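As a rough illustration, the sketch below uses hypothetical loan-decision data (the cohort labels, counts, and outcomes are invented) to show how a model can look impressively accurate overall while performing terribly for one cohort, which per-class F1 and cohort analysis expose immediately.

```python
# A sketch (pure Python, hypothetical data) showing why accuracy alone hides
# problems that F1 and per-cohort analysis expose. Here a loan model looks
# "88% accurate" overall while approving almost nobody in cohort B.

def f1(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# label = should the loan have been approved, pred = model's decision,
# cohort = a grouping you care about (e.g. suburb or demographic group)
records = [
    # cohort A: model mostly gets it right
    *[{"cohort": "A", "label": 1, "pred": 1}] * 40,
    *[{"cohort": "A", "label": 0, "pred": 0}] * 40,
    *[{"cohort": "A", "label": 1, "pred": 0}] * 5,
    # cohort B: qualified applicants are routinely refused
    *[{"cohort": "B", "label": 1, "pred": 0}] * 7,
    *[{"cohort": "B", "label": 0, "pred": 0}] * 8,
]

labels = [r["label"] for r in records]
preds = [r["pred"] for r in records]
accuracy = sum(l == p for l, p in zip(labels, preds)) / len(records)
print(f"Overall accuracy: {accuracy:.0%}, overall F1: {f1(labels, preds):.2f}")

for cohort in ("A", "B"):
    grp = [r for r in records if r["cohort"] == cohort]
    g_f1 = f1([r["label"] for r in grp], [r["pred"] for r in grp])
    print(f"Cohort {cohort}: F1 for 'approve' = {g_f1:.2f}")
```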

Another key insight from the book is to ensure that the system can be evaluated on an ongoing basis and updated to learn from new outcomes. Ensure there is a process for challenging decisions, and that those decisions are well understood.

Many AI tools on the market are “black boxes” where customers (and sometimes even the supplier) cannot see or understand why decisions are being made. O’Neil makes a strong argument that this simply is not good enough. Whether this exposes your organisation to anti-discrimination law is untested, but failing to take steps to ensure your business's decisions are fair creates new risk, whether through directly falling foul of the law or, as O’Neil suggests, through asking “what will the newspapers print if this algorithm has not worked?” Reputational risk from algorithmic harm now extends to media coverage and Royal Commissions, as the Robodebt scandal showed.

Australia is introducing further regulations on AI and its ethical use. The 8 AI Ethics Principles are a great example to follow. They are not law, but as the technology advances we are seeing signals of further regulatory reform, which could affect businesses directly and perhaps severely if they deploy algorithms they do not understand and those algorithms make decisions the business cannot justify under scrutiny.

What we are doing at BRAIN

At BRAIN, we have developed extensive support to help businesses properly evaluate AI tools. We support local businesses in using AI to deliver highly effective results, while ensuring those tools stand up to scrutiny and are robust, reliable, and transparent.

For organisations (including commercial, community, and government) wishing to discuss further, we encourage you to sign up with BRAIN for ongoing news and opinion, events, and workshops. We are also available to assist businesses on their AI journey, ensuring effective impact from AI.
