Inside AI (Dec 18th, 2019)

1. A group of eight Democratic lawmakers on Wednesday sent a letter to Ben Carson, the U.S. secretary of Housing and Urban Development (HUD), expressing concerns about the use of facial recognition technology in federally assisted housing. In the letter, the Congress members cited a New York Times article that reported on the rising use of face recognition to scan people in public and federal housing buildings. For example, Detroit's public housing authority installed security cameras at housing units earlier this year that can send footage back to the city's police department, which scans it using its new facial recognition software to search for potential criminals. The lawmakers argue that the technology can infringe on basic privacy rights and protections and has been shown to be inherently biased against people of color, women, and non-cisgender people. They asked HUD to respond by January 24 to a list of questions, including how many federally assisted housing properties actually use the technology. - THE HILL

2. Samsung Electronics says it will produce Baidu's new AI chip Kunlun, which is set to go into mass production early next year. It's the first collaboration of its kind between the South Korean tech giant and the Chinese internet search company. Samsung will make the AI accelerator chip using its 14 nm process technology and its I-Cube, or Interposer-Cube, packaging structure, it said. The chip offers a memory bandwidth of 512 gigabytes per second and supplies up to 260 trillion operations per second at 150 watts. Baidu says Kunlun allows Ernie – its pre-training model for natural language processing – to infer three times faster than a traditional GPU or FPGA. The chip can also be used in speech recognition, image processing, autonomous driving, and deep learning. - ZDNET

3. During its GPU Technology Conference in China today, Nvidia announced that it will open source several of the AI models behind its autonomous vehicle platform, Drive. These include models that run on its Drive AGX systems for traffic light and sign recognition, vehicle and pedestrian detection, gesture recognition, and gaze detection, VentureBeat reports. Nvidia CEO Jensen Huang said that opening up access to the models will enable "shared learning across companies and countries," which he says will bring the industry closer to the potential for global autonomous vehicles. During the conference, Nvidia also announced Orin, its new AI processor for autonomous cars. - VENTURE BEAT

4. Snapchat introduced a feature today that allows users to "deepfake" themselves into videos and GIFs. The company says it has rolled out its Cameo tool globally in 150 variations, which can be accessed via the Chat sticker bar. Snapchat's AI automatically alters a user's face to display different expressions and emotions in short looping videos. Snapchat also offers Time Machine, which uses AI technology to "age" people when they drag a slider across the screen. - USA TODAY

5. A Virginia-based hospital says its AI-based early warning system for sepsis has helped save lives. Sepsis - a life-threatening response to an infection - kills about 250,000 people per year in the U.S. Augusta Health, a not-for-profit hospital in Virginia, developed an AI-based system with Vocera Communications that automatically reviews a patient's electronic health records and alerts nurses if sepsis-like symptoms arise. Detecting the symptoms early is critical, since the likelihood of death rises by as much as 8 percent for every hour treatment is delayed. Augusta Health now has a sepsis mortality rate of 4.8 percent, compared with the state's 13.2 percent. By subtracting the deaths that actually occurred from the number that would have been expected at the higher rate, the hospital estimates that the system has saved 282 lives since it was introduced. - HEALTHCARE IT NEWS
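
The arithmetic behind that estimate is straightforward to sketch. In the snippet below, the two mortality rates come from the article, but the case count is a hypothetical figure chosen so the difference reproduces the reported 282 lives saved; the hospital's actual underlying case count isn't given in the story.

    # Rough sketch of the lives-saved arithmetic. The mortality rates are from the
    # article; the case count is hypothetical, picked so the result lands near 282.
    hospital_rate = 0.048    # Augusta Health's sepsis mortality rate
    state_rate = 0.132       # Virginia's average sepsis mortality rate
    sepsis_cases = 3357      # hypothetical number of sepsis cases treated

    expected_deaths = sepsis_cases * state_rate    # deaths expected at the state rate
    actual_deaths = sepsis_cases * hospital_rate   # deaths at the hospital's lower rate
    print(round(expected_deaths - actual_deaths))  # -> 282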

6. Uber AI has developed a technique that can steer natural-language processing models toward specific topics and away from offensive text. The technique relies on two statistical models - the original language model, and a second "evaluator" model that judges how well the generated text sticks to the desired topic. After the first model predicts words based on a prompt, the evaluator scores the output, and text that scores poorly is altered. The system could be used to help language models stick to certain topics or tones, such as making the AI more positive or negative in its writing. - MIT TECH REVIEW
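
The generator-plus-evaluator loop described above can be illustrated with a toy re-ranking sketch. This is not Uber's actual method or code: the "language model" below just returns canned continuations and the "evaluator" is plain keyword counting, but the control flow - generate, score against a topic, keep what fits - mirrors the idea.

    import random

    # Toy "evaluator": counts how many of the desired topic words appear in the text.
    def topic_score(text, topic_words):
        return sum(word in text.lower().split() for word in topic_words)

    # Stand-in for a language model: returns a few canned candidate continuations.
    def generate_candidates(prompt):
        continuations = [
            "the team trained a new model on satellite imagery",
            "the weather was pleasant for a walk in the park",
            "researchers released the dataset and training code",
            "dinner plans changed at the last minute",
        ]
        return random.sample(continuations, len(continuations))

    # Generate, score each candidate against the topic, and keep the best fit.
    def steered_generate(prompt, topic_words):
        candidates = generate_candidates(prompt)
        return max(candidates, key=lambda c: topic_score(c, topic_words))

    print(steered_generate("In AI news today,", {"model", "dataset", "training"}))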

7. Image editor Pixelmator unveiled a machine-learning feature that can sharpen and enhance low-resolution images. The company, which competes with Photoshop, says its “ML Super Resolution” can scale an image up to three times the original resolution without adding pixelation or blurriness. Pixelmator’s creators say the algorithm is just 5MB in size, allowing it to run on users’ devices, and was trained on 15,000 samples of low-resolution and high-resolution images, learning to "fill in the gaps" between pixels. After testing it, The Verge called it the "best commercial super-resolution tool" it has seen so far, although it does have caveats. - THE VERGE
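
For readers curious how a learned upscaler can be small enough to ship inside an app, the sketch below is a generic sub-pixel-convolution network of the kind commonly used for super-resolution. It is an illustration of the general approach only, not Pixelmator's actual model; the layer sizes and the 3x scale factor are assumptions.

    import torch
    import torch.nn as nn

    class TinyUpscaler(nn.Module):
        """Generic ESPCN-style network: a few convolutions plus a pixel-shuffle upsample."""
        def __init__(self, scale=3, channels=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 64, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                # Predict scale*scale sub-pixels per output pixel, then rearrange them.
                nn.Conv2d(32, channels * scale ** 2, kernel_size=3, padding=1),
                nn.PixelShuffle(scale),
            )

        def forward(self, x):
            return self.net(x)

    model = TinyUpscaler(scale=3)
    low_res = torch.rand(1, 3, 64, 64)   # a 64x64 RGB image
    high_res = model(low_res)            # -> shape (1, 3, 192, 192)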

8. Google and DeepMind researchers used the speech-to-text transcription service Project Euphonia to recreate the original voice of former NFL linebacker Tim Shaw, who is unable to speak due to ALS. The research team adapted the generative AI model WaveNet to synthesize speech from Shaw’s voice samples from before he was diagnosed six years ago. While the voice is not perfect, the researchers noted that combining Euphonia's speech recognition systems with speech synthesis technology can help people like Shaw communicate more easily. In a blog post, DeepMind researchers explain how WaveNet identifies tonal patterns in speech and can even generate music. - VENTURE BEAT
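
The core building block behind WaveNet-style models is a stack of dilated causal convolutions, which lets each output sample depend on a long window of past samples. The snippet below sketches only that building block with arbitrary channel counts; it is far from a full WaveNet and omits the gated activations, skip connections, and autoregressive sampling the real model uses.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Module):
        """Dilated 1-D convolution that only looks at past samples (left padding)."""
        def __init__(self, channels, dilation):
            super().__init__()
            self.pad = dilation  # (kernel_size - 1) * dilation, with kernel_size = 2
            self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

        def forward(self, x):
            x = F.pad(x, (self.pad, 0))  # pad on the left only, so no future leakage
            return self.conv(x)

    # Doubling the dilation at each layer grows the receptive field exponentially.
    stack = nn.Sequential(*[CausalConv1d(16, d) for d in (1, 2, 4, 8, 16)])
    audio = torch.rand(1, 16, 16000)   # (batch, channels, samples)
    out = stack(audio)                 # same length; each step sees ~32 past samples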

9. As part of a growing effort to use technology to improve health care outcomes, the Mayo Clinic is testing cloud-hosted algorithms that detect heart abnormalities invisible to even the most skilled cardiologists. Central to the effort is programming computers to analyze reams of data and identify patterns in a matter of seconds. Hundreds of thousands of patients die each year from cardiac illnesses that could be treated if spotted in time. - STAT

A version of this story first appeared in Inside Cloud.

10. With more than one-third of employers now using AI in the hiring process, CNBC reported on ways that job seekers can "robot-proof" their resumes. The goal is to tailor applications to get past the initial screens, which are typically based on company-specific algorithms. Tips include submitting a text-based format such as a Microsoft Word document (which AI programs can scan more accurately), placing the most senior-level role first on the resume, and including keywords from the job's original description. - CNBC
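
As a rough illustration of why keyword choice matters, the toy function below scores a resume by its overlap with terms in a job description. It is not any vendor's actual screening algorithm; real systems also parse structure, titles, and dates, and the stopword list here is a made-up stand-in.

    import re

    def keyword_overlap(resume_text, job_description):
        """Toy score: fraction of job-description terms that also appear in the resume."""
        tokenize = lambda text: set(re.findall(r"[a-z]+", text.lower()))
        stopwords = {"and", "or", "the", "a", "an", "to", "of", "in", "with", "for"}
        job_terms = tokenize(job_description) - stopwords
        resume_terms = tokenize(resume_text)
        return len(job_terms & resume_terms) / max(len(job_terms), 1)

    score = keyword_overlap(
        "Led a data engineering team; built Python ETL pipelines on AWS.",
        "Seeking an engineer with Python, AWS, and ETL experience.",
    )
    print(score)  # -> 0.5 (3 of the 6 job-description terms appear in the resume)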

Written and curated by Beth Duckett, a former reporter for The Arizona Republic who wrote a book about the solar industry and frequently covers hobby and commercial drones. You can follow her tweets about the latest news in artificial intelligence here.
