September 2022 – Bias in AI Delivering Public Services

02/09/2022

The Public Law Project (PLP) has called for greater transparency around the increased use of algorithms to stop Universal Credit (UC) claims when fraud is suspected.

The access to justice charity has issued the call in response to details of a DWP trial of a risk model designed to detect fraud in UC advance claims.

At the same time, the Equality and Human Rights Commission (EHRC) has announced that it plans to work with local authorities to monitor how they are using artificial intelligence (AI) to deliver essential services such as benefits payments.

Artificial intelligence, machine learning and automated decision-making are terms that refer to a wide range of technologies used across the private and public sectors. In the public sector this could include using programmes to help allocate benefits or to estimate the risk of an individual committing fraud.

In response to evidence that bias built into algorithms used by public bodies may be causing discriminatory outcomes, the EHRC has published new guidance to help organisations avoid breaches of equality law, including the public sector equality duty. The guidance gives practical examples of how AI systems may discriminate against people with protected characteristics such as sex and race.

While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.
