The challenges of AI in the public sector

Artificial Intelligence (AI) is a growing topic of interest within the public sector. Government entities are increasingly keen to use the capabilities that AI brings to improve efficiency and deliver policy in volatile environments. At the forefront of these reforms are the healthcare sector and law enforcement organisations, where enhancing operational speed and reducing human error are vital. Despite this, collaboration between public, private and non-profit entities can complicate AI delivery as opposing values and management strategies overlap.

Healthcare

The NHS is primarily interested in digital technologies that empower patients to participate actively in their own care. Its strategy emphasises three key areas to guide their adoption:
• Utilising new tools to interpret patient data and deliver personalised self-management and self-care treatment strategies;
• Adopting technology that frees up more time for care and strengthens the patient-clinician relationship;
• Grounding treatment in robust research evidence, within an ethical governance framework that patients, the public and staff can trust.

Organisations may face significant barriers to AI adoption. Dedicated resources are required to develop machine-learning tools and to train the workforce in their use. Perhaps the biggest concern affecting AI implementation is the protection of sensitive health data. Bespoke applications will be needed to handle complex patient data, alongside approval mechanisms to properly authorise its use by other healthcare providers, patients and regulators.
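As an illustration of the kind of approval mechanism this implies, the sketch below shows how a data-access layer might pseudonymise patient identifiers and honour an opt-out register before a record is released for research. This is a minimal sketch under assumed conventions: the key handling, the opt-out register and the field names are hypothetical, not an NHS specification.

```python
import hashlib
import hmac

# Hypothetical signing key held by the data controller; in practice this
# would live in a key-management service, never in source code.
PSEUDONYM_KEY = b"replace-with-managed-secret"

# Hypothetical opt-out register: identifiers of patients who have declined
# to share their data for research (mirroring the national opt-out idea).
OPT_OUT_REGISTER = {"9434765919"}

def pseudonymise(nhs_number: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(PSEUDONYM_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

def release_for_research(record: dict) -> dict | None:
    """Return a pseudonymised copy of a record, or None if the patient opted out."""
    if record["nhs_number"] in OPT_OUT_REGISTER:
        return None  # honour the opt-out before any data leaves the system
    return {
        "patient_token": pseudonymise(record["nhs_number"]),
        "diagnosis_code": record["diagnosis_code"],
        "year_of_birth": record["year_of_birth"],  # coarsened, not a full date of birth
    }
```

Using a keyed pseudonym rather than a plain hash prevents identifiers being recovered by brute force, and checking the opt-out register at the point of release keeps the consent decision at a single, auditable location.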

Confidence in this area was shaken last year when the NHS experienced a data breach involving the medical data of 150,000 patients. The software developer TPP was blamed for a coding error found within its SystmOne application¹. Whilst a national data opt-out programme was introduced to let patients stop their data being used in research, such incidents have had a detrimental effect on patient trust. The NHS must prioritise rebuilding patient confidence in AI, through the adoption of sound data-handling and security principles, to ensure a positive transition to machine-learning tools.

Another issue is the impact that AI-driven diagnostic and treatment software has on the doctor-patient relationship. Growing reliance on systems that offer advice directly to patients has led to fears that trust in clinicians may be diminishing. It has also raised the question of whether practitioners should inform patients about the technical design behind these applications. Without clarity over how treatment recommendations are made, patients are left to interpret automated results without the benefit of a consultation. Combined, these factors pose ethical challenges over the accuracy of AI that leave patients less confident about their treatment options.

Law Enforcement

AI has also influenced law enforcement as predictive policing continues to mature². The term is credited to William Bratton, the former Los Angeles police chief and a strong advocate of data-driven policing³. Police forces have applied analytical tools to forecast when and where crimes will take place, in an attempt to optimise scarce resources. These include strategies such as the following (a simplified sketch of the first appears after this list):
• Predictive crime mapping to target efforts based on crime type, location, expected date and time;
• Forecasted risk assessments to identify priority individuals at risk of reoffending or engaging in serious crime.
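The sketch below makes the first of these strategies concrete: a toy hotspot model that counts historical incidents per grid cell and four-hour window, then surfaces the highest-scoring combinations. The data, grid labels and scoring rule are invented for illustration; operational systems use far richer models, but the underlying logic of extrapolating from recorded incidents is the same, which is also why any bias in those records propagates directly into the forecasts.

```python
from collections import Counter

# Hypothetical historical incidents: (grid_cell, hour_of_day, crime_type).
incidents = [
    ("A3", 22, "theft"), ("A3", 23, "theft"), ("A3", 21, "theft"),
    ("A3", 22, "violence"), ("B1", 14, "theft"), ("C2", 2, "burglary"),
]

def hotspot_scores(incidents, crime_type=None):
    """Count past incidents per (cell, 4-hour window); higher count = 'hotter'."""
    counts = Counter()
    for cell, hour, ctype in incidents:
        if crime_type and ctype != crime_type:
            continue
        counts[(cell, hour // 4)] += 1  # bucket the day into six 4-hour windows
    return counts

# The 'forecast' is simply the cells and windows with the most past incidents.
for (cell, window), n in hotspot_scores(incidents, "theft").most_common(3):
    print(f"cell {cell}, hours {window * 4:02d}-{window * 4 + 4:02d}: {n} incidents")
```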

Concerns have been raised over the use of algorithms for criminal justice purposes, as there is potential for unintended or indirect consequences. One example is the stratification of data such as age, race, postcode or socio-economic group, which has led to cases of discrimination. An analysis of over 7,000 arrestees by the investigative journalism organisation ProPublica argued that an algorithmic risk-scoring tool used in offender management was systematically biased against black defendants. Such outcomes can be caused by algorithms trained on data sets that were incorrectly recorded or influenced by their owners' cultural biases. Avoiding these inaccuracies requires a framework that sets out data-gathering and verification policies.
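The mechanism behind the ProPublica finding can be reproduced with a handful of invented records: a risk flag can look reasonable in aggregate while producing very different false positive rates across groups. The figures below are purely illustrative and bear no relation to the actual COMPAS data.

```python
# Invented outcomes: (group, flagged_high_risk, actually_reoffended).
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", False, False), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were still flagged high risk."""
    flags = [flagged for g, flagged, reoffended in records
             if g == group and not reoffended]
    return sum(flags) / len(flags)

for group in ("group_a", "group_b"):
    print(group, round(false_positive_rate(records, group), 2))
# Two of three non-reoffenders in group_a are flagged (0.67) against none
# in group_b (0.0): disparity hiding inside an apparently sensible score.
```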

Insufficient data may also lead to discrimination, as prediction accuracy is only as strong as the data available. Larger data sets exist for the most commonly recorded crimes, such as theft and violence, making forecasting more accurate. Concealed offences such as sexual assault, fraud and cybercrime are harder to predict because collecting the data is far more complex and resource intensive. This poses a challenge for predictive policing tools, as algorithms must decide when underrepresented crimes are likely to occur from a much smaller evidence base.
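The link between data volume and forecast confidence can be quantified directly: the standard error of an estimated incident rate shrinks with the square root of the number of observations, so an underreported offence carries a far wider uncertainty band than a commonly recorded one. A small worked sketch with invented counts:

```python
import math

def rate_with_error(incidents: int, observations: int) -> tuple[float, float]:
    """Estimated incident rate and its binomial standard error."""
    p = incidents / observations
    return p, math.sqrt(p * (1 - p) / observations)

# Same underlying 5% rate, but the sparse crime type is far less certain.
for label, k, n in [("theft (well recorded)", 500, 10_000),
                    ("fraud (underreported)", 5, 100)]:
    p, se = rate_with_error(k, n)
    print(f"{label}: rate = {p:.3f} +/- {1.96 * se:.3f} (95% interval)")
```

With identical underlying rates, the well-recorded crime is pinned down to within half a percentage point while the sparse one spans more than four, so any threshold-based decision is far more error-prone for the underreported offence.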

Establishing accountability is a further problem, as there is uncertainty over who is responsible when algorithms make mistakes. False predictions can create legal challenges for law enforcement if individuals are wrongly detained or accused. Whilst officers may act on the guidance of analytical tools, misconfiguration or faults can be seen as the responsibility of designers, manufacturers and operators. The original developer of an algorithmic tool may not be involved in its subsequent implementation, leaving installation to those with only limited exposure to its workings. Organisations need a policy framework that defines the responsibilities and procedures to be followed when faults are discovered in AI tools.

Our Recommendation

Regency recommends that organisations incorporate policies and procedures for handling the data associated with AI tools. We can provide consultancy support to help your organisation establish:
• Security and data handling principles to preserve information continuity;
• Data collection and verification practices to reduce the occurrence of bias and inaccuracies;
• An organisational framework to outline individual responsibilities and points of contact.

For more information, please contact enquiries@regencyitc.co.uk

References:
¹ BBC, ‘NHS data breach affects 150,000 patients in England’, 2 July 2018.
² Andrew G. Ferguson, ‘Policing Predictive Policing’, Washington University Law Review, 94, no. 5, 2017.
³ Janet Chan and Lyria Bennett Moses, ‘Can “Big Data” Analytics Predict Policing Practice?’, in Stacey Hannem, Carrie B. Sanders, Christopher J. Schneider, Aaron Doyle and Tony Christensen (eds), Security and Risk Technologies in Criminal Justice: Critical Perspectives (Toronto: Canadian Scholars, 2019).