
Inside AI (Jan 7th, 2020)

1. The White House on Tuesday proposed new rules to govern the use of AI in industries such as healthcare and transportation. The rules do not cover the use of facial recognition or AI in law enforcement; instead, they are geared toward how federal agencies will regulate the technology in the private sector. “We purposely wanted to avoid top-down, one-size-fits-all, blanket regulations,” said Lynne Parker, U.S. deputy chief technology officer at the White House’s Office of Science and Technology Policy. The rules will take effect after a period of public comment, which could last several months. - REUTERS

2. Facebook issued a new policy banning AI-edited deepfakes. A well-known example of manipulated video is the infamous "drunk Nancy Pelosi" clip, which featured a heavily edited version of the U.S. House Speaker appearing to slur her speech and stumble over words. In a blog post on Monday, Facebook said it will remove AI-manipulated deepfakes from its platform, but not manipulated videos that count as "parody or satire," a carve-out that covers the Pelosi video. "Only videos generated by artificial intelligence to depict people saying fictional things will be taken down,” Facebook said in a statement. The move aims to crack down on misinformation ahead of the 2020 elections. - WAPO

A version of this story first appeared in Inside Daily Brief.

3. A new AI technique can reportedly diagnose brain tumors from tissue samples much faster, and slightly more accurately, than human pathologists, according to a study published Monday in Nature Medicine. Although it typically takes experts about 30 minutes to diagnose samples taken during surgery, the AI-based diagnostic system can return results in two and a half minutes, while the patient is still on the operating table. The technique, developed by New York University neurosurgeon Daniel Orringer and colleagues, combines a deep neural network with stimulated Raman histology, an imaging method that uses scattered laser light to highlight features in tissue. The algorithm, trained on 2.5 million images from 415 patients, scans the images to classify tissue and detect brain tumors. In a clinical trial, the AI's accuracy was 94.6 percent, compared with 93.9 percent for human neuropathologists. - GIZMODO
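At its core, the diagnostic step is an image-classification problem: a convolutional network assigns histology image patches to diagnostic categories, and patch-level predictions are aggregated into a call for the specimen. Below is a minimal PyTorch sketch of that general idea; the layer sizes, 300x300 patch size, and 13-class output are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of a histology patch classifier (illustrative only; not the
# authors' model). Patch size, layer widths, and class count are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 13  # hypothetical number of diagnostic categories

class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = PatchClassifier()
patches = torch.randn(8, 3, 300, 300)       # stand-in for real image patches
probs = model(patches).softmax(dim=-1)      # per-patch class probabilities
specimen_call = probs.mean(dim=0).argmax()  # aggregate patch votes
```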

4. Researchers at Microsoft and Beijing's Peking University have proposed new frameworks for swapping faces and for spotting face images that are deepfaked or forged. The so-called FaceShifter and Face X-Ray methods are described in two academic papers, in which the teams say the approaches outperform several baseline AI methods while requiring less data. FaceShifter uses a GAN-based Adaptive Embedding Integration Network (AEI-Net) to replace one person's face with another's in images while preserving attributes such as head pose and lighting. It incorporates Adaptive Attentional Denormalization (AAD) layers, while a Heuristic Error Acknowledging Refinement Network (HEAR-Net) model "leverages discrepancies between reconstructed images and their inputs to spot occlusions," according to VentureBeat. Face X-Ray produces grayscale images that help determine whether an input image was blended from two images of different sources. - VENTUREBEAT
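To make the Face X-Ray idea more concrete, the NumPy sketch below derives a grayscale boundary label from a soft blending mask M (1.0 inside the swapped face region, 0.0 outside) using a 4·M·(1−M) map, which peaks where two source images were blended. The circular mask is a toy stand-in for a real face-region mask, and this formulation is our reading of the approach, not the paper's code.

```python
# Toy illustration of a "face X-ray": a grayscale image that is zero where
# pixels come purely from one source and maximal along the blending boundary.
import numpy as np

def face_x_ray(mask: np.ndarray) -> np.ndarray:
    # 4*M*(1-M) is 0 where M is 0 or 1 and peaks at 1.0 where M = 0.5.
    return 4.0 * mask * (1.0 - mask)

# Synthetic soft circular mask over a 128x128 image (edge ~8 px wide).
yy, xx = np.mgrid[:128, :128]
dist = np.hypot(yy - 64, xx - 64)
mask = np.clip((40.0 - dist) / 8.0, 0.0, 1.0)

xray = face_x_ray(mask)
print(xray.max(), xray[64, 64])  # ~1.0 on the boundary, 0.0 at the center
```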

5. German engineering company Robert Bosch GmbH has unveiled a computer vision process that can detect potential attacks on autonomous vehicle systems, such as hackers who intentionally deface road signs, The Wall Street Journal reports. The parallel AI process can analyze signs and other road objects from two perspectives and compare them to each other, with a second algorithm acting as a "check" on the first. In describing the threats that Bosch is attempting to counter, The Journal cited examples of adversarial attacks on AI systems, such as the time researchers placed pieces of tape onto a stop sign to trick algorithms into thinking it was a 45 mph speed limit sign. Bosch is invested in the concept, given that it develops car sensors and other components for things like traffic-sign recognition. - WSJ
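As a rough illustration of that redundancy pattern (an assumption-laden sketch, not Bosch's implementation), the snippet below runs two independent classifiers on two views of the same road object and treats disagreement as a sign of a possible adversarial input.

```python
# Sketch of a cross-checking defense: two independently trained models look at
# different views of the same sign; disagreement flags a suspect input.
from typing import Callable

Classifier = Callable[[bytes], str]  # maps raw image bytes to a sign label

def cross_check(view_a: bytes, view_b: bytes,
                model_a: Classifier, model_b: Classifier) -> str:
    label_a, label_b = model_a(view_a), model_b(view_b)
    if label_a != label_b:
        return "SUSPECT_INPUT"  # escalate: possible defaced or adversarial sign
    return label_a

# Hypothetical usage with stub models standing in for real networks:
model_one = lambda img: "stop"
model_two = lambda img: "speed_limit_45"
print(cross_check(b"cam1", b"cam2", model_one, model_two))  # SUSPECT_INPUT
```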

6. To combat potentially massive job losses from AI, researchers at the Future of Humanity Institute think tank suggest that companies voluntarily commit to paying out a share of any excess profit derived from their AI efforts. The proposal is outlined in a paper, "The Windfall Clause: Distributing the Benefits of AI for the Common Good," recently posted on the arXiv pre-print server. The researchers describe a windfall clause, an ex-ante agreement that companies would sign in advance, before any such profits materialize, committing them to pay out excess AI-derived profits on a sliding scale. The authors note that many experts have argued that AI could lead to a substantial lowering of wages, job displacement, and even large-scale elimination of employment opportunities as the technology changes the structure of the economy. - ZDNET
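The sliding scale works like marginal tax brackets: higher rates apply only to profit above successive thresholds, which the proposal ties to fractions of gross world product. The toy Python example below uses invented thresholds and rates purely for illustration; the paper proposes the mechanism, not these numbers.

```python
# Toy sliding-scale windfall computation. All figures are invented.
GWP = 100e12  # rough gross world product in dollars (assumption)

# (upper threshold as a fraction of GWP, marginal rate owed within that band)
BRACKETS = [(0.001, 0.00), (0.01, 0.01), (0.1, 0.20), (1.0, 0.50)]

def windfall_obligation(profit: float) -> float:
    owed, lower = 0.0, 0.0
    for frac, rate in BRACKETS:
        upper = frac * GWP
        if profit > lower:
            owed += rate * (min(profit, upper) - lower)
        lower = upper
    return owed

print(windfall_obligation(5e12))  # obligation on $5T of hypothetical AI profit
```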

7. YouTube began rolling out its new policy for children's content on Monday, which includes not displaying personalized, targeted ads and disabling comments on kids' videos. As part of its September settlement with the FTC, YouTube is instituting the changes globally and is asking for creators' help in self-identifying all content that is intended for children. The company says it will use a combination of AI and those self-identification labels to flag and sequester children's content in order not to run afoul of federal laws protecting children's privacy. "Responsibility is our number one priority at YouTube, and this includes protecting kids and their privacy," the company writes in a blog post. Privacy experts contend that the new policies still do not go far enough to protect kids 12 and under, especially given that YouTube has already collected vast amounts of data on its users and makes a business of keeping people on the site as long as possible. - WASHINGTON POST

A version of this story first appeared in Inside Social.

8. A newly announced program will combine the expertise of MIT’s School of Engineering and Takeda Pharmaceutical Company Limited to work on AI initiatives in healthcare and drug development. The MIT-Takeda Program, unveiled on Monday, is based at the Abdul Latif Jameel Clinic for Machine Learning in Health at MIT. The initiative will bring together faculty and researchers from the university and Takeda to pursue research projects in machine learning and health, support annual fellowships and educational programs, and more. Anantha Chandrakasan, dean of MIT’s School of Engineering, said the goal is to "build a community dedicated to the next generation of AI and system-level breakthroughs that aim to advance healthcare around the globe.” - MIT NEWS

9. An Arizona-based clinical lab testing company is using AI to interpret data about patients with Alzheimer’s, helping those patients receive tailored plans to manage or slow the progression of the disease. Working with the tech firm uMETHOD Health, Phoenix-based Sonora Quest Laboratories collects patient data on medical history and lifestyle through a platform known as RestoreU METHOD, which healthcare providers can access. - KTAR

10. Two experts recently offered up some tips for advancing what they call "people-centered" AI initiatives. The concept revolves around the idea that AI should amplify human strengths, according to David Bray, executive director of the People-Centered Internet coalition, and R. “Ray” Wang, CEO of Constellation Research. To accomplish this, they recommended that organizations classify what they're trying to accomplish with AI, embrace guiding principles, establish data advocates, practice “mindful monitoring," and ground expectations. - MIT SLOAN NEWS

Written and curated by Beth Duckett, a former reporter for The Arizona Republic who wrote a book about the solar industry and frequently covers hobby and commercial drones. You can follow her tweets about the latest news in artificial intelligence here.

Edited by Sheena Vasani, Inside Dev editor.
