Inside AI - December 2nd, 2019


1. At today's AWS re:Invent event, Amazon unveiled DeepComposer, a music keyboard that lets developers compose music in collaboration with generative AI models running in the cloud. Dubbed "the world's first machine learning-enabled musical keyboard," the 32-key, two-octave device plugs into a PC, where a control panel lets developers interact and collaborate with AI models. Once finished, developers can share their GAN-created compositions on SoundCloud. AWS engineer Mike Miller said the system "is designed for giving developers a hands-on opportunity to learn about this new technology while at the same time having fun with music." - ZDNET

2. Also during re:Invent, AWS engineers announced Amazon Transcribe Medical, which lets developers add medical speech-to-text capabilities to their apps. The system's API is designed to transcribe medical speech for primary care and integrates with voice-enabled apps, according to AWS. Currently, the companies Amgen and SoundLines are using the system to generate text transcripts from recorded notes and feed those transcripts into downstream analytics. It comes with automatic and “intelligent” punctuation and supports both conversational transcription and medical dictation, according to Amazon. - VENTURE BEAT

3. Human writers at Cards Against Humanity narrowly beat out an AI in the company's annual Black Friday stunt, which pitted the writers against an algorithm to write a new pack of cards. The company pledged a $5,000 holiday bonus for each writer who beat the AI. If the computer won, the company said it would fire the writers (though we're guessing that was likely a joke). As of Monday, the writers had sold 2 percent more packs, "so their jobs will be replaced by automation later instead of right now," CAH said. The AI in question was a neural network borrowed from OpenAI's GPT-2 model specifically trained to write CAH cards, based on the text of 44,000 white cards. Some notable AI submissions included “Some sort of giant son of a bitch who lives in the internet” and “Sitting in the back of the plane, smoking a cigar and reading the Flickr privacy policy." Both packs can still be ordered for $5 apiece, and you can read what people are saying about the differences in the AI vs. human cards here. - THE VERGE

4. The U.S. Senate Commerce and House Energy and Commerce committees are working on legislation that would govern self-driving vehicles after getting pressure from industry groups. Senate Commerce Committee chairman Roger Wicker (R-Mississippi) said they’ve started working on a federal regulatory framework to govern the safety of autonomous vehicles. The draft language for a new bill states that a Highly Automated Vehicle Advisory Council would be established that would evaluate issues and regulations related to self-driving cars and how they are tested. The timing for any formal introduction of the legislation is unclear. - THE HILL

A version of this story first appeared in Inside IoT.

5. Professional Go player Lee Se-dol is retiring after losing to Google's AlphaGo algorithm. The strategy board game Go has been played in China for about 3,000 years and is extremely complex. In 2016, Lee lost four out of five matches against AlphaGo and attributed his one win to a fault in the AI's programming. He recently told Yonhap News: "Even if I become the number one, there is an entity that cannot be defeated." Lee still plans to compete one last time later this month against the South Korean AI HanDol, which has already beaten the country's top five Go players. Even with an advantage, "I feel like I will lose the first game," he said. - PC MAG

6. The U.K.'s data watchdog published its new draft guidance for regulating AI, with a focus on fining organizations that fail to clearly explain how their AIs work. Under the U.K. Information Commissioner's Office plans, firms could face hefty fines (potentially millions of dollars) if they fail to explain decisions made using AI. The guidance, which was created in collaboration with the Alan Turing Institute and is expected to take effect next year, identifies four key principles for AI: transparency, accountability, consideration of context, and reflection on impacts — the same principles rooted in the EU's General Data Protection Regulation (GDPR). The ICO rules ask organizations to make sure that decisions made by AI are "obvious and appropriately" explained to people in a "meaningful" way. The guidance calls on groups to "ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome." - NEW SCIENTIST

7. Analyst Daniel Newman predicts that Nvidia will experience better-than-expected growth in the coming months, thanks to its early lead developing AI inferencing hardware chips. "Given industry growth and the company’s current product mix and positioning," that lead should last for the next two years at least, according to Newman, who is a principal analyst at Futurum Research. Nvidia, which has an annual revenue run rate close to $12 billion, has a "formidable lead" over AMD, Intel, and other AI-accelerator chip manufacturers, he notes. According to Tractica forecasts, the market for deep-learning chipsets is expected to rise to $66.3 billion by 2025, up from $1.6 billion in 2017. - MARKET WATCH

8. Researchers at Stanford are using AI techniques to find better treatment options for people with epilepsy. A Silicon Valley health-tech start-up is sponsoring the clinical trial, which is led by Dr. Robert Fisher, a professor of neurology and neurological sciences at the Stanford University School of Medicine. According to Forbes, there are more than 14,000 different treatment scenarios for people with epilepsy, and many are tested out on an individual trial-and-error basis. The trial will use predictive AI to determine the best treatment choices on a larger scale, based on data like side effects, genomic information, and environmental exposures. - FORBES

9. Automated systems currently identify only about 16 percent of Facebook posts that involve bullying and harassment, according to MIT. While Facebook's neural networks are often the first to flag problematic images or behavior, the company still relies mostly on people to report them, mainly because the technology hasn't progressed enough to fully comprehend language. Even so, the company's machine-learning systems handle the vast majority of moderation on the platform, and in its most recent community standards enforcement report, Facebook said its systems remove about 98 percent of terrorist videos and photos before anyone sees them. - MIT TECH REVIEW

10. A new episode of the AI-focused Sleepwalkers podcast delves into the consolidation of power and control in AI surveillance. Naturally, the installment covers how companies are amassing people's personal data to train advanced AI, as well as the use of facial recognition by police in the U.S. and government authorities in China. As Sleepwalkers host Oz Woloshyn notes, "so much hangs in the balance" when it comes to the regulation of AI surveillance. - WIRED

Written and curated by Beth Duckett in Scottsdale, Arizona. Beth is a former reporter for The Arizona Republic who has published a book about the solar industry and frequently writes about hobby and commercial drones. You can follow her tweets about breaking news in artificial intelligence here.

Editor: Kim Lyons (Pittsburgh-based journalist and managing editor at Inside).

Copyright © 2020, All rights reserved.

Our mailing address is:
767 Bryant St. #203
San Francisco, CA 94107

Did someone forward this email to you? Head over to get your very own free subscription!

You received this email because you subscribed to Inside AI. Click here to unsubscribe from Inside AI list or manage your subscriptions.