Science, technology, and the criminal justice system

In an earlier post I commented on a recently published study about EEG-based facial image reconstruction, and I made two observations about the study itself and the CBC’s coverage of it.

(1) Although the images of faces viewed by participants in that study were clearly images of different people, the reconstructed images (based on EEG data recorded while participants viewed the originals) all looked, to me, like the same person. That made me wonder how useful this technology really is as an example of brain-based mind reading, and how useful it could be in forensic contexts.
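(For what it’s worth, my “they all look the same” impression could in principle be checked rather than just eyeballed. Below is a minimal sketch of the kind of comparison I have in mind, where `originals` and `reconstructions` are hypothetical lists of face images: if the reconstructions really are near-duplicates, their average pairwise similarity should be much higher than that of the original photos.)

```python
import numpy as np

def mean_pairwise_similarity(images):
    """Average cosine similarity between every pair of images.

    `images` is a list of equal-sized grayscale arrays; a value near 1.0
    means the set is nearly homogeneous, i.e. "they all look the same".
    """
    flat = [img.ravel().astype(float) for img in images]
    flat = [v / np.linalg.norm(v) for v in flat]
    sims = [np.dot(a, b) for i, a in enumerate(flat) for b in flat[i + 1:]]
    return float(np.mean(sims))

# Hypothetical usage: `originals` and `reconstructions` would be the
# study's stimulus photos and its EEG-based outputs, respectively.
# If my impression is right, we'd expect something like:
#   mean_pairwise_similarity(reconstructions) >> mean_pairwise_similarity(originals)
```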

(2) In my view the media report misrepresented modest scientific advances by packaging them as a clickbait story entitled “The mind-blowing future of mind reading (which may be closer than you think)”. If the reconstructed images all look the same when the original images are so clearly different, then how is that mind-blowing? Mind you, the problem isn’t with the science but with the way the media reported on it. Such reporting creates hype and promises far more than the current science and technology can deliver, and that’s socially irresponsible: not only might it lead policy makers to seek out solutions before those solutions are actually ready, it also teaches the public how to misunderstand science, which is potentially even more dangerous.

The purpose of this short post is to offer two more examples of attempts to create high-tech tools for forensic use that, in one way or another, demonstrate problems similar to those I discussed in the EEG-based facial image reconstruction post.

The first example (care of Ellen, one of my neurolaw students) concerns the use of DNA data to reconstruct images of suspects. Think of it this way: the reason you and I look different is probably, in good measure, that we have different DNA. So if we knew enough about how different genes code for phenotypic characteristics like skin colour, eye colour, hair colour, dimples, height, weight, sex, and so on, then perhaps we could get some clues about what the person whose DNA we collected might look like. For details, see this and this story. In some ways, the problems here seem less pronounced than in the EEG-based facial image reconstructions: the reconstructed images in the NYT story don’t all look like the same white dude, for instance. However, unless we start assuming that all white dudes look alike, that all black women look alike, and so on, there are still going to be some pretty important limitations to this technology too. (To be fair, the authors of these articles were much more open about the limitations of these DNA-based technologies, for instance by pointing out that very few people managed to correctly identify the person from looking at the DNA-based mug shots.)
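To make the underlying idea concrete, here’s a toy sketch of how genotype-to-phenotype prediction works in principle: a handful of genetic variants (SNPs), each carried 0, 1, or 2 times, get combined into a score that predicts one visible trait. The SNP names and weights below are entirely hypothetical placeholders, not the values used by any real forensic tool; real systems estimate their parameters from large datasets over specific, validated markers.

```python
import math

# HYPOTHETICAL weights for illustration only: positive weights make the
# trait more likely, negative weights less likely.
HYPOTHETICAL_WEIGHTS = {
    "snp_A": 1.8,
    "snp_B": 0.6,
    "snp_C": -0.9,
}
INTERCEPT = -1.2

def predict_trait_probability(genotype):
    """genotype maps SNP name -> allele count (0, 1, or 2)."""
    score = INTERCEPT + sum(
        HYPOTHETICAL_WEIGHTS[snp] * genotype.get(snp, 0)
        for snp in HYPOTHETICAL_WEIGHTS
    )
    return 1 / (1 + math.exp(-score))  # logistic link: score -> probability

print(predict_trait_probability({"snp_A": 2, "snp_B": 1, "snp_C": 0}))
```

Even in this idealised form, the output is a probability over a broad trait category. A full “DNA mug shot” stacks many such predictions (eye, hair, and skin colour, face shape) and renders a composite face, which is exactly why the results capture broad categories far better than individual identity.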

The second example comes from this story that I recently read in The Atlantic, entitled “A Popular Algorithm Is No Better at Predicting Crimes Than Random People”, which reports on this study. The two main messages I took away are that (1) the computer program is no more accurate at predicting recidivism than a human making the same prediction, and (2) because the program was developed by a private company, the algorithm it uses to make its predictions is not public.

After reading that story I kept wondering who convinced whom to purchase this technology. Were palms greased? I’ve no idea. But I suspect it’s just as likely that technology is shiny, and it’s easy to succumb to the lure of thinking that if it’s a technological solution then it will be more accurate, more advanced, more thorough, less prone to error, and <<insert here whatever other things you personally find alluring about technology>>. And then I wondered why we’re still using it. It’s no more accurate than humans at predicting recidivism, there’s no way to scrutinise its decisions since the algorithm is secret, and I wouldn’t be surprised if hearing “Oh, but a computer predicted that this offender is likely to recidivate” carries way more weight than the same prediction reported as coming from a person, even though both are equally poor at making such predictions.
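If I remember the study correctly, it also found that a simple, fully transparent classifier using just two features (age and number of prior convictions) matched the commercial program’s accuracy. Here’s a sketch of what such a baseline looks like; the data below is synthetic, generated purely so the example runs, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: age, number of prior convictions, and a binary
# reoffence label. The real study used actual defendant records; these
# numbers are randomly generated just to make the sketch runnable.
rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 70, n)
priors = rng.poisson(2.0, n)
# Made-up relationship: younger defendants with more priors reoffend more.
p = 1 / (1 + np.exp(-(-1.0 + 0.25 * priors - 0.03 * (age - 40))))
reoffended = rng.random(n) < p

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(X, reoffended, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("two-feature accuracy:", model.score(X_test, y_test))
# Unlike a proprietary black box, this model is fully inspectable:
# the two coefficients below are the entire "algorithm".
print("coefficients (age, priors):", model.coef_[0], "intercept:", model.intercept_[0])
```

The point of the contrast is the last line: with a model this simple, the entire decision rule fits in a couple of numbers, so a defendant, a lawyer, or a court can actually scrutinise it, which is precisely what a trade-secret algorithm prevents.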

I’m decidedly in favour of using good science and reliable technology within the criminal justice system. However, scientific breakthroughs are often a long way off from what we need at the police station or in a courtroom to improve the criminal justice system. And irresponsible media reporting, the kind that creates hype and conveys the impression that high-tech is obviously better than low-tech, is only likely to harm both the public’s understanding of science and the criminal justice system itself.