Clue contributed to a panel on data ethics in policing at this year’s Police ICT Conference. The panel discussed ways of embedding data ethics into policing, as well as the possible benefits of a more national approach to embedding good practice.

Here we summarise our key takeaways from this debate.

Clue and data ethics

Our customers investigate everything from organised crime, counter-extremism, human trafficking and financial crime to child protection – all areas where, if investigators get it wrong, if the investigative process isn’t followed properly, or if the systems they use do not meet the right legal and ethical standards, the risk of harm to victims is high.

The challenge for all our users right now is the sheer volume of data they are dealing with. Increasingly, they need the software to do more for them; they want us to automate as much as possible.

We are very conscious that we have a role to play here: in giving our user community the tools they need, we want to do so in a way that supports the ethical use of data.

Working hand in hand with the user community

We work hand in hand with our user community, encouraging them to share their feature ideas with us; they very much help us to shape the product roadmap.

When we look across all the feature suggestions and ideas from our community, the most common theme is: ‘We need the software to do more for us, automate as much as possible. We have so much data, we need to be alerted to connections in it. Give me a clue: out of all this data I have, where should I start looking?’

[Image: Clue roadmap word cloud]

AI to automate time-consuming activities

To give a specific example of what this means in our world, consider for a minute that 95% of the data coming into a case management system is unstructured – emails, documents, notes and so on. We can now easily plug in cognitive services to help the investigator process that data much faster. For example, we can automate the recognition of entities within that unstructured text (people, locations, organisations, events). Not only does this speed up indexing, traditionally a very manual process, but it also allows us to very quickly cross-check those entities against a watchlist or against data already known to the investigator.
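
To illustrate the kind of pipeline involved – a minimal sketch only, not a description of our product – the example below uses the open-source spaCy library to extract entities from a free-text note and flag any that appear on a watchlist. The model name is spaCy’s standard small English model; the watchlist contents, case note and output format are all hypothetical.

```python
# Illustrative sketch: entity extraction from unstructured text,
# cross-checked against a watchlist. Uses the open-source spaCy library.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model with built-in NER

# Hypothetical watchlist of entities already known to the investigation.
watchlist = {"john smith", "acme logistics"}

# Hypothetical unstructured case note.
note = ("Witness saw John Smith leaving the Acme Logistics depot "
        "in Manchester on Tuesday evening.")

doc = nlp(note)
for ent in doc.ents:
    # Keep the entity types mentioned above: people, locations,
    # organisations and events.
    if ent.label_ in {"PERSON", "GPE", "LOC", "ORG", "EVENT"}:
        matched = ent.text.lower() in watchlist
        # Surface a suggestion only; the investigator decides what to do.
        flag = "POSSIBLE WATCHLIST MATCH - please review" if matched else ""
        print(f"{ent.label_:8} {ent.text:20} {flag}")
```

Note that the code only raises a suggestion when an extracted entity matches the watchlist; it takes no action on the investigator’s behalf, which is the point developed in the next section.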

Transparency and accountability

But we need to make sure that in providing these tools, the way they work is transparent to the user – it should be very clear what is being automated and how. And we should be assisting the investigator, nothing more. So here all we are doing is bringing the potential match to the attention of the user, suggesting they might want to check it. It is for them to decide what to do with that suggestion.

And as much as our users need us to automate as much as possible, they are equally aware of the need to be accountable: recurring themes and keywords from our user community forum are data protection, validation, redaction, risk and audit.

Our role is to make sure that the technology we provide is transparent for users: if we are introducing AI to automate a process, it must be clear to users how it works, what is being automated and what isn’t, and where the human decision-making process must kick in.
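
One way to make that concrete – again a purely illustrative sketch, with all field names assumed for the example – is to record, for every machine-generated suggestion, which automated step produced it, what evidence it was based on, and the human decision eventually taken, so the boundary between automation and human judgement stays visible and auditable.

```python
# Illustrative sketch of an audit record for a machine-generated suggestion.
# All field names and values are hypothetical; the point is that every
# automated step and every human decision is captured for later review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SuggestionAudit:
    suggestion: str        # what the system is proposing to the investigator
    automated_step: str    # which automated process produced the suggestion
    source_document: str   # the unstructured item the entity came from
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    human_decision: str = "pending"  # investigator decides: accepted/rejected
    decided_by: str = ""

record = SuggestionAudit(
    suggestion="Entity 'John Smith' may match a watchlist entry",
    automated_step="entity recognition + watchlist cross-check",
    source_document="email-2024-0417.eml",
)

# Later, the investigator records their decision explicitly.
record.human_decision = "accepted"
record.decided_by = "DC Jones"
print(record)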

Next steps

We must continue the dialogue with our customers and the wider industry, building on the importance of transparency and human decision-making whilst empowering the investigator to do more with their data. We see huge opportunities to continue automating non-controversial, manual and time-consuming activities with technology, so whilst the discussion around ethics is essential, we must not lose sight of the quick wins that are available now.