
Inside AI (Jan 21st, 2020)

1. Cybersecurity expert Joseph Steinberg shared tips on how to fool facial recognition systems after The New York Times published an exposé on Clearview AI. The startup developed a tool that scrapes people's images from the web and social media sites to compile a facial-recognition database. In the past year alone, more than 600 law enforcement agencies have reportedly used the database of over 3 billion images; the agencies can allegedly upload a photo of a person of interest, and the system returns matches from the database along with links to the sites where the images appeared. In Saturday's report, The Times wrote that Clearview AI's system lacks proven accuracy tests and could "end privacy as we know it." Woodrow Hartzog, a professor of law and computer science at Northeastern University, has suggested banning facial recognition tech because of the potential for surveillance abuse. In the meantime, Steinberg suggests that people can skirt such systems by styling their hair and makeup wisely, preventing cameras from seeing certain facial features, and keeping their heads down as they walk, among other tips. - THE VERGE

2. IBM released new policy proposals for removing bias in AI systems ahead of Wednesday's AI panel at the World Economic Forum in Davos, Switzerland. IBM Chief Executive Officer Ginni Rometty will lead the panel on the sidelines of the annual forum, which runs through Friday. Ahead of the panel, IBM is calling for new rules that would seek to eliminate biases in AI, which typically stem from data skewed against people of color, women, and older and disabled individuals. IBM's proposals ask companies and governments to work together to ensure, for example, that Black people receive fair access to housing even when the underlying algorithms are skewed by discrimination, Fortune reports. Since mid-2019, IBM has been working on AI regulations with the Trump administration, which recently issued guidelines on how federal agencies should use the technology. - BLOOMBERG via FORTUNE

3. Snyk said its latest $150 million funding round has pushed it into unicorn territory. The London startup developed an AI-based cybersecurity platform that protects open-source code by helping developers locate vulnerabilities. Snyk co-founder and president Guy Podjarny says the team uses machine learning to continually "evolve our ability to determine if a source code comment, forum post, or social chatter discusses a vulnerability, and funnels that data to our analysts to verify and place into our vulnerability database." New York-based private equity firm Stripes led the round, with participation from Salesforce Ventures, Tiger Global, Coatue, BoldStart, Amity, and Trend Forward. The startup said the investment brought its valuation to over $1 billion, but did not provide an exact number. - VENTUREBEAT
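
Snyk hasn't published the details of that pipeline, but what Podjarny describes is essentially a text-classification triage step: score a comment, forum post, or tweet for the likelihood that it discusses a vulnerability, then hand high-scoring hits to human analysts. A minimal, hypothetical sketch of such a step in Python with scikit-learn might look like the following; the example snippets, labels, and threshold are invented for illustration and are not Snyk's actual model or data.

```python
# Hypothetical "does this text discuss a vulnerability?" triage classifier.
# NOT Snyk's pipeline; the training texts and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "heads up: passing unsanitized input here allows remote code execution",
    "CVE pending, the parser overflows on crafted payloads over 64KB",
    "refactored the build script to use yarn workspaces",
    "great talk on microservice observability at the meetup last night",
]
labels = [1, 1, 0, 0]  # 1 = likely discusses a vulnerability, 0 = unrelated

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score new posts; high-probability hits would be routed to human analysts
# for verification before anything enters a vulnerability database.
candidates = ["this endpoint leaks session tokens in the error message"]
print(model.predict_proba(candidates)[:, 1])
```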

4. In a new article for Wired, journalist Clive Thompson goes into detail about the rise and benefits of edge AI. The author of "Smarter Than You Think: How Technology Is Changing Our Minds for the Better" writes that the dramatic improvement of AI software and hardware in the last decade has created "a new breed of neural net" that runs on low-power microprocessors and doesn't need the cloud. As a result, people's privacy is better protected, and the systems themselves are more energy-efficient, Thompson says. He cites the example of Picovoice, an edge AI firm whose voice recognition software can be embedded in household machines like coffee makers, letting people issue a limited set of commands (like telling it to brew java) cheaply and with almost no lag, since edge processing is very fast. While such a device can't banter like Alexa, that hardly matters because "you're not going to have a meaningful conversation with your coffee maker," Picovoice founder Alireza Kenarsari-Anhari says. - WIRED
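
Picovoice's stack is proprietary, but the general edge pattern Thompson describes - a small, quantized model evaluated locally with no cloud round trip - can be sketched with an off-the-shelf runtime. The snippet below is a generic, hypothetical illustration using TensorFlow Lite; the model file name and command list are placeholders and have nothing to do with Picovoice's SDK.

```python
# Generic sketch of on-device ("edge") inference: a small quantized model is
# loaded and run locally, with no network call. "kws_int8.tflite" and the
# command labels are placeholders, not a real product's model.
import numpy as np
import tflite_runtime.interpreter as tflite

COMMANDS = ["brew", "stop", "_silence_"]  # illustrative keyword set

interpreter = tflite.Interpreter(model_path="kws_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(audio_features: np.ndarray) -> str:
    """Run one local inference pass on precomputed audio features."""
    interpreter.set_tensor(
        input_details[0]["index"],
        audio_features.astype(input_details[0]["dtype"]),
    )
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return COMMANDS[int(np.argmax(scores))]
```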

5. Facebook's AI translator made an offensive error with Chinese President Xi Jinping's name. The company is apologizing for a "technical error" that caused President Xi's name to be translated from Burmese into English as "Mr. S**thole." The error surfaced just as Xi was visiting Burma and multiple Burmese-language posts about his visit were circulating on the platform. In one post on the official page of State Counselor Aung San Suu Kyi, a headline read, "Dinner honors president s**thole." Google's translation app did not make the same error, and Facebook said it is "taking steps to ensure [the error] doesn't happen again." - REUTERS

A version of this story first appeared in Inside Social.

6. Macworld's Michael Simon speculates that Apple may have purchased Xnor.ai to make Siri faster and smarter. Last week, Apple confirmed that it bought the Seattle-based AI software startup in a deal worth up to $200 million. Xnor specializes in low-power machine learning tools that run on-device rather than through the cloud, and has been "best known for its ability to detect people in smart camera feeds," 9to5Mac reports. People have assumed that Apple bought Xnor.ai mainly to boost people detection in its HomeKit Secure Video. However, embedding Xnor.ai's edge AI into Apple's chips through the Neural Engine or a co-processor could make Siri "faster and far more capable," all while working offline, the publication notes. - 9TO5MAC

7. UC San Diego professor Timothy Mackey and a team of researchers created two AI tools that can help locate illegal drug sellers online. The systems use deep learning and topic modeling to recognize and flag suspicious content online. The National Institute on Drug Abuse commissioned Mackey for the project, which could eventually help law enforcement track down illegal sales of counterfeit drugs, vaping products, and guns. Critics note that the AI tools could contribute to the over-criminalization of low-level drug sellers and don't actually reduce the demand for drugs. - VOX
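
The Vox piece doesn't include the researchers' code, but the topic-modeling half of such a system can be illustrated with a rough, hypothetical sketch: fit LDA over a corpus of posts, have an analyst label which learned topic corresponds to illicit sales, then surface posts dominated by that topic for human review. The posts and threshold below are invented.

```python
# Hypothetical illustration of topic-model-based flagging; not the UC San
# Diego team's actual code. Posts, topic labels, and threshold are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "dm me for pills, discreet shipping, no prescription needed",
    "selling bars in bulk, discount, message on telegram only",
    "loved the new cafe downtown, best espresso in town",
    "our 5k charity run raised funds for the animal shelter",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-post topic distribution

# Suppose an analyst inspects the top words per topic and marks topic 0 as
# the "illicit sales" topic; posts dominated by it get queued for review.
suspicious_topic = 0
for post, dist in zip(posts, doc_topics):
    if dist.argmax() == suspicious_topic and dist[suspicious_topic] > 0.7:
        print("flag for review:", post)
```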

8. Philips announced that two of its new OLED TVs will utilize AI for picture processing. The OLED855 and OLED805 models will reportedly harness machine learning and neural networks to analyze clips against a database and fine-tune the picture automatically. The TVs, which both use the fourth-generation P5 picture processing engine, are due out in May. - TECHRADAR

9. The Great Learning Blog has published a list of AI tools for personal use. A Reddit user posted the list today on the r/ArtificialInteligence subreddit. (Yes, that's the correct spelling.) They include work-related apps like Mosaic, Carly, and Lomi, and social media tools including Capsule.ai, Bright crowd, and Hashely. - GREAT LEARNING BLOG

10. Twitter user Irena Tadic (@IrenaTadic1) shared a video today showing how the rabbit-duck illusion confuses "the hell out of" the Google Cloud Vision API. In the short clip, the drawing spins while the image classifier/object detector flips between labeling it a duck and a rabbit as the confidence scores shift. Interestingly, the image first came into the public eye in 1899, when psychologist Joseph Jastrow used it to illustrate how mental activity shapes perception. - @IRENATADIC1/TWITTER
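
The experiment is easy to reproduce with the same API. The sketch below runs the google-cloud-vision Python client's label detection on a single frame of the drawing; it assumes Google Cloud credentials are already configured, and the image file name is a placeholder rather than the file used in the clip.

```python
# Requires Google Cloud credentials and the google-cloud-vision client library.
# "rabbit_duck_frame.png" is a placeholder for one frame of the rotating drawing.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("rabbit_duck_frame.png", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each label carries a confidence score; on this illusion the top labels
    # ("Duck" vs. "Rabbit") can swap as the drawing rotates.
    print(f"{label.description}: {label.score:.2f}")
```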

Written and curated by Beth Duckett, a former reporter for The Arizona Republic who wrote a book about the solar industry and frequently covers hobby and commercial drones. You can follow her tweets here.

Edited by Sheena Vasani, Inside Dev editor.
