Springer Nature says it won't publish a paper claiming that a facial recognition system can predict whether someone is likely to be a criminal. The research will not appear in a forthcoming book after hundreds of AI researchers sent the publisher an open letter calling the technique racist and asking it not to publish the work.
- The Harrisburg University study claims that the system can predict "whether someone is likely going to be a criminal" with 80% accuracy and no racial bias.
- The open letter challenges the paper's scientific claims and argues that machine-learning crime-prediction technologies of this kind are racist. It was originally drafted by five researchers at MIT, the AI Now Institute, Rensselaer Polytechnic Institute, and McGill University.
- The signatories have called on all academic publishers to stop publishing similar studies that claim AI algorithms can predict a person's criminality.
- Springer said the paper was rejected after a "thorough peer review process." A co-author of the paper has declined to comment.
- In 2017, researchers from Google and Princeton refuted similar research from Shanghai Jiao Tong University that claimed an algorithm could predict criminality based on facial features.
- A Reddit post discussing the topic now has more than 350 comments.