Chinese tech companies Huawei and Megvii tested AI-based camera software to identify Uighur Muslims, according to research organization IPVM. The tests, which date back to 2018, included "Uighur alarms," which sent automated alerts to the Chinese government when a facial recognition system identified members of the oppressed minority group.
- In January 2018, Huawei tested Megvii's Face++ facial recognition on its camera network. The technology was given a passing grade for its ability to scan people's faces in crowds and accurately estimate their age, gender, and ethnicity.
- "Uighur alarms," one feature of the system, have now come under scrutiny. Last year, the New York Times identified Chinese facial recognition companies, including Megvii, that were developing algorithms to identify Uighur Muslims. This is the first time Huawei has been linked to such efforts.
- Huawei confirmed the tests were real but said the technology has "not seen real-world application." The company "only supplies general-purpose products" for such testing and does not provide custom algorithms, a spokesman said.
- Human Rights Watch researcher Maya Wang says China's use of AI-enabled surveillance to monitor the public and suppress perceived threats is growing. Such systems "lend themselves quite well to countries that want to criminalize minorities," she said.
- Megvii was among eight AI companies added to a U.S. blacklist last year. The U.S. accused the startups of contributing to "repression, mass arbitrary detention, and high-technology surveillance" against Uighurs in China's western Xinjiang region.
- In the U.S., concerns about false positives, bias, and mass surveillance have prompted cities like San Francisco to ban the use of facial recognition by city agencies. Amazon, IBM, and Microsoft have all announced various moves this year to rein in facial-recognition technology.