Reducing harm to investigators

At Hubstream, we work with investigators across the globe tackling a wide variety of crimes.  For some of those investigators, reviewing images and videos of horrific criminal acts and abuse is a daily part of the job.  Motivated by the stories from the front lines about losing great investigators to long-term stress, we have been working on a new feature set to help protect investigators while they do this critical work.

We have learned a lot about what it takes to protect investigators in these scenarios since we started this work, and this article shares some of the lessons we learned along the way.  Thanks to everyone who has given us advice and support while developing this feature set - we hope this will help!

Media review is a task, not a job title

There are lots of people whose job titles mean they should never be exposed to harmful media even though they have access to the system (IT Helpdesk Technician, for example), so it's easy to build a role-specific block that hides media from those users.  But most investigators who work with this material have periods during their work day when they have to see the media, and other times when they are doing investigative tasks that don't actually need it.  Allowing investigators to choose whether they are in "I need to see media" mode or "I'm not working on that right now" mode gives them control over their own exposure.
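
To make the idea concrete, here is a minimal sketch, assuming a TypeScript front end; the type and function names (MediaVisibilityMode, shouldRenderMedia, the role strings) are illustrative assumptions, not actual Hubstream APIs:

```typescript
// A minimal sketch: a per-user mode that gates whether media is rendered at all,
// sitting alongside a hard role-based block. All names here are illustrative.

type MediaVisibilityMode = "review" | "no-media";

interface UserSession {
  userId: string;
  role: string;
  mediaMode: MediaVisibilityMode;
}

// Roles that should never see media, regardless of the chosen mode.
const MEDIA_BLOCKED_ROLES = new Set(["it-helpdesk"]);

function shouldRenderMedia(session: UserSession): boolean {
  if (MEDIA_BLOCKED_ROLES.has(session.role)) {
    return false; // hard block by role, e.g. IT Helpdesk Technician
  }
  // Otherwise the investigator decides: only render media in "review" mode.
  return session.mediaMode === "review";
}

// An investigator switching out of review mode hides all media in the UI.
const session: UserSession = { userId: "inv-042", role: "investigator", mediaMode: "no-media" };
console.log(shouldRenderMedia(session)); // false
```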

It's a cumulative effect

Investigators who are reviewing media can view many thousands of images and videos per day, and very often those files have already been seen multiple times and categorized by other investigators as legal or illegal.  The workflow can be improved with "smart blocking" that takes the categorization status of media into account: media already categorized as illegal has, by definition, already been reviewed, so it can be blocked by default using either a complete black-out or other filters that have been shown to reduce impact, such as greyscale or blur.  Investigators still need a way to reveal all or part of an image that is already categorized.
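
As a rough illustration of what "smart blocking" could look like in code, here is a sketch where the default display filter is chosen from the categorization status; the enum values and defaults are our assumptions, not Hubstream's actual schema:

```typescript
// A minimal sketch of "smart blocking": pick a default display filter from a media
// item's categorization status. Values and defaults are assumptions for illustration.

type Categorization = "uncategorized" | "legal" | "illegal";
type DisplayFilter = "none" | "blur" | "greyscale" | "blackout";

interface MediaItem {
  id: string;
  categorization: Categorization;
}

// Illegal media has already been reviewed and categorized, so it is blocked by default;
// the investigator can still reveal all or part of it with an explicit action.
function defaultFilter(item: MediaItem, blockedStyle: DisplayFilter = "blackout"): DisplayFilter {
  switch (item.categorization) {
    case "illegal":
      return blockedStyle; // complete black-out, or greyscale/blur per preference
    case "legal":
      return "none";
    default:
      return "none"; // uncategorized media still needs a first review
  }
}

console.log(defaultFilter({ id: "m-1", categorization: "illegal" }, "greyscale")); // "greyscale"
```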

Some media needs special handling

Even for the most highly trained investigators who continuously review difficult content, some images and videos are especially damaging to view.  Studies have shown that warning investigators before they see these images can reduce the impact.  This applies even when they are deep in the media review task: the warning lets them know it's time to pause before viewing.  In Hubstream Intelligence, we support this with a "Harmful" flag on these media, which shows a biohazard thumbnail to warn users before they click.
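
A sketch of how such a flag might drive the thumbnail display is below; the field names and the placeholder asset path are assumptions for illustration, not the actual Hubstream schema:

```typescript
// A minimal sketch: a "Harmful" flag swaps the real thumbnail for a biohazard warning
// image so the investigator has to click through deliberately. Names are illustrative.

interface MediaRecord {
  id: string;
  thumbnailUrl: string;
  isHarmful: boolean; // set by an investigator who has already reviewed the item
}

interface ThumbnailView {
  imageUrl: string;
  requiresClickThrough: boolean; // forces an explicit confirmation before full view
}

const BIOHAZARD_PLACEHOLDER = "/assets/biohazard-warning.svg"; // assumed asset path

function thumbnailFor(record: MediaRecord): ThumbnailView {
  if (record.isHarmful) {
    // Never render the real thumbnail for flagged media; warn first.
    return { imageUrl: BIOHAZARD_PLACEHOLDER, requiresClickThrough: true };
  }
  return { imageUrl: record.thumbnailUrl, requiresClickThrough: false };
}
```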

Sometimes it's not about the media

Given the harmful nature of the media involved, it seemed pretty clear to us at the beginning that investigators would mostly attribute their stress to the media they are reviewing.  In practice, investigators let us know that it's often the workload and backlog that contribute more stress.  That is, reviewing harmful media is more difficult for them when they are so far behind that they know they are not getting to victims quickly enough.  So, sometimes working on areas like automation and workflow can alleviate stress as much as media-specific features do.

Measuring success

And finally, any feature designed to reduce harm might work - or it might not.  We added telemetry to our application to track how much exposure investigators have and to count usage of the harm reduction features.  We have started working with an awesome research group to correlate this information with surveys of investigator well-being before and after deploying these features, to see how well they are working - hopefully we will have more to report soon!
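
For context, this kind of telemetry is essentially event counting. Here is a rough sketch of the events involved; the event shapes, names, and the emit() transport are our assumptions, not Hubstream's actual telemetry pipeline:

```typescript
// A minimal sketch of the telemetry described: record exposure events and
// harm-reduction feature usage per investigator. All names are illustrative.

type HarmReductionFeature = "no-media-mode" | "smart-block" | "harmful-flag-warning";

interface ExposureEvent {
  kind: "exposure";
  userId: string;
  mediaId: string;
  filterApplied: "none" | "blur" | "greyscale" | "blackout";
  timestamp: string; // ISO 8601
}

interface FeatureUsageEvent {
  kind: "feature-usage";
  userId: string;
  feature: HarmReductionFeature;
  timestamp: string;
}

// In a real deployment this would batch events to a telemetry backend;
// logging keeps the sketch self-contained.
function emit(event: ExposureEvent | FeatureUsageEvent): void {
  console.log(JSON.stringify(event));
}

emit({
  kind: "exposure",
  userId: "inv-042",
  mediaId: "media-98765",
  filterApplied: "blackout",
  timestamp: new Date().toISOString(),
});
```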

Joe Milan