Inside AI - November 27th, 2019

Special Edition: Privacy and AI

Dear readers:

Due to the Thanksgiving holiday, we will be going dark on Thursday and Friday and releasing a special edition today about the challenges of privacy in artificial intelligence. These include topics like the use of facial recognition everywhere from airports to Facebook, and the (often undisclosed) gathering and storage of people's data to train AI systems, such as Google's recent deal with Ascension to harvest the health records of millions of patients.

In the meantime, feel free to check out our recently created Inside AI List on Twitter, which has all the latest news from top minds in the field. You can also follow me on Twitter, where I'll be posting about AI-related news all throughout the holiday weekend (and all the weeks thereafter).

Thanks for being a subscriber,


1. Newly leaked documents from China's government reportedly describe how police are using AI technology to single out and arrest Muslims and other minorities in Xinjiang. The International Consortium of Investigative Journalists (ICIJ) obtained the documents, which show how "Chinese police are guided by a massive data collection and analysis system that uses artificial intelligence to select entire categories of Xinjiang residents for detention," according to the agency. ICIJ reported that the government amasses people's personal data through "warrantless manual searches, facial recognition cameras, and other means" and flags them for something as innocuous as using certain smartphone apps. The documents direct police to specifically arrest Uighurs - a minority Turkic ethnic group - who hold foreign citizenship and to track Xinjiang Uighurs living abroad, ICIJ reported. - ASSOCIATED PRESS

2. International Delta Air Lines travelers who check in at Seattle's Sea-Tac Airport will soon have the option of using facial recognition software, according to The Seattle Times. The airline rolled out the optional system in Atlanta last year and plans to bring it to Sea-Tac by the end of this year. Passengers who opt in (it won't be mandatory) can bypass having to show their passports. The software photographs a passenger's face and automatically matches it against a visa or passport photo on file with U.S. Customs and Border Protection. The service could one day expand to every international boarding point in the airport, and possibly to domestic flights as well. Next month, a five-member commission is scheduled to vote on principles governing how the technology is used at Sea-Tac. - THE SEATTLE TIMES

3. Gege Gatt, CEO of the AI company EBO, says AI has no place in the U.K.'s National Health Service (NHS) "if privacy isn't guaranteed." The question of risks to patients' personal data was raised at a recent roundtable led by the publication Verdict Medical Devices, held after UK Prime Minister Boris Johnson pledged to invest $350 million in an NHS AI lab. In his response, Gatt pointed out that in 2017, the Information Commissioner's Office ruled that an NHS trust didn't do enough to protect patient data in its agreement with DeepMind. Later, after the General Data Protection Regulation (GDPR) took effect, "data security has been prioritized and is also recognized as part of the ethical framework required to build the necessary trust by patients," Gatt notes. Similarly, Human+ automation consultant Oliver Cook says it's to be expected that we would surrender "a certain amount of data" in exchange for more efficient, higher-level services, "if the correct safeguards are in place." - VERDICT MEDICAL DEVICES

4. Facebook quietly developed a facial recognition app several years ago that it used to identify coworkers and their friends, according to a new report from Business Insider. The social media giant says the app - which used a person's photo to obtain their name and profile picture - was created in 2015 or 2016 and has since been discontinued. It only worked on employees who had facial recognition enabled and was designed as a "way to learn about new technologies," a Facebook spokesperson said. Meanwhile, the social network remains embroiled in a class-action lawsuit claiming it collected facial recognition data without users' permission, an apparent violation of the Illinois Biometric Information Privacy Act. - ENGADGET

5. In a new op-ed published in The Los Angeles Times, Amos Toh, a senior researcher on AI at Human Rights Watch, covers the challenges and concerns surrounding AI-enabled monitoring and behavior and emotion recognition. Uses of the technology range from flagging suspects in criminal investigations to tracking and profiling minorities, he notes. But concerns abound that the technology could single out marginalized populations or incorrectly infer people's emotions and intentions, among other misuses. Unless new regulations are put into place, "things are about to get a lot scarier," Toh writes, adding that "transparency is a prerequisite both for protecting individual rights and for assessing whether government practices are lawful, necessary and proportionate." - LA TIMES

6. HireVue has defended its use of AI to analyze applicants for jobs, saying it doesn't use facial recognition or scanning technology. In a statement sent to "Here & Now," the company said its AI-enabled video interviews aren't meant to replace humans in the hiring process, but are rather designed to enhance initial screenings of applicants. According to HireVue, bias is removed from the system by "constantly testing our algorithms and reviewing our approach with input from external data science researchers, social scientists, and educators." - WBUR

7. AI expert Dawn Song, a professor at the University of California, Berkeley, is working on giving people better control of their data. Her new company, Oasis Labs, is developing a platform that would help people control their data and review how it's being used online, according to The New York Times. Song, a world expert on computer security and trustworthy AI, says that "it's particularly important to develop technologies that can utilize data in a privacy-preserving way." You may recognize Song as the lead researcher of a team that investigated how easy it is to fool computer-vision systems - in this case, by placing stickers on stop signs, convincing the AI they were actually 40-mile-per-hour speed limit signs. - NYTIMES

Written and curated by Beth Duckett in Orange County. Beth is a former reporter for The Arizona Republic who has written for USA Today, Get Out magazine and other publications. Follow her tweets about breaking news and other topics in southern California here.

Editor: Kim Lyons (Pittsburgh-based journalist and managing editor at Inside).

Copyright © 2020, All rights reserved.

Our mailing address is:
767 Bryant St. #203
San Francisco, CA 94107

Did someone forward this email to you? Head over to get your very own free subscription!
