1. Nvidia's earnings and revenue in the second quarter beat analysts' expectations. The company's success was partially attributed to higher demand for AI and graphics chips as the company "achieved sequential growth across our platforms," CEO Jensen Huang said. Nvidia reported second-fiscal-quarter revenue of $2.58 billion vs. the $2.55 billion analysts forecast. Its non-GAAP earnings per share were $1.24, vs. the $1.15 analysts expected. In a statement, Huang said the company's "accelerated computing momentum continues to build as the industry races to enable the next frontier in artificial intelligence, conversational AI, as well as autonomous systems like self-driving vehicles and delivery robots." - VENTURE BEAT
2. AInnovation, venture capitalist Kai-Fu Lee's startup, is expected to reach $100 million in revenue next year and could go public in 2021, according to Lee. AInnovation creates AI products for companies such as Mars Inc., Nestle SA, and Foxconn Technology Group. The $100 million revenue mark would occur within two years of the startup's March 2018 founding. Lee expects the firm, which has raised about $70 million from Sinovation and others, to have an IPO within the next two years at a valuation of $1 billion to $2 billion. - BLOOMBERG
3. The Allen Institute for Artificial Intelligence and the University of Washington created an AI that can generate fake news from only a headline. Grover was trained on 120 gigabytes of news articles to spot fake news written by AI, but its developers also taught it to generate false articles. In human evaluations, readers rated its output as more trustworthy than human-written fake news. In his own experiment, Fast Company's Mark Wilson fed it the headline "Why Donald Trump Eats 100 Cheeseburgers a Day," and the results are pretty convincing. Grover is also online for anyone to try. - FAST COMPANY
4. New York Times tech correspondent Cade Metz visited five offices, including at least one in India, where employees label the data used to train AI. The offices Metz visited are run by iMerit, a Kolkata-based data annotation company that employs more than 2,000 workers in nine offices globally. Its workers are currently contributing to Amazon's online data-labeling service, SageMaker Ground Truth. The market for data labeling - a task that accounts for 80 percent of the time spent building AI systems - is expected to reach $1.2 billion by 2023, up from $500 million last year. Metz notes that privacy activists continue to raise concerns about the large amounts of data that tech companies are storing and sharing. - NY TIMES
5. Engineer.ai's chief business officer, Robert Holdheim, sued the company earlier this year over its allegedly false claims about using AI to build apps. According to a Wall Street Journal report, the India-based startup appears to exaggerate its use of AI, relying mostly on human engineers to assemble apps until it can get its automation platform off the ground. Holdheim claims founder Sachin Dev Duggal told investors the product was 80 percent finished when, in truth, it was barely under development. The company says it uses NLP to estimate pricing and timelines, which, as The Verge notes, still means it doesn't "appear that any kind of AI agent or software of any kind is actually compiling code." - THE VERGE
6. Three prominent people in the AI field argued that we need the digital equivalent of drug trials to test AI algorithms, which have the potential to create dangerous feedback loops and "ultimately erode civility and trust in society." The experts issued the warning in a new opinion piece for Wired. They are Olaf J. Groth, founding CEO of Cambrian Labs; Mark J. Nitzberg, executive director of the Center for Human-Compatible AI at UC Berkeley; and Stuart J. Russell, a Berkeley computer science professor, director of CHAI, and author of "Human Compatible: AI and the Problem of Control." Both Groth and Nitzberg are co-authors of "Solomon's Code: Humanity in a World of Thinking Machines." In the piece, the three note that designers of adaptive reinforcement learning algorithms made a drastic mistake in assuming that human tastes are fixed, meaning that current algorithms can inadvertently modify people's tastes and edge us toward extremes. - WIRED
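The preference-shift mechanism the authors warn about can be illustrated with a toy simulation (not from the Wired piece; every name and number below is invented for illustration): a recommender treats the user's taste as fixed, while in fact each piece of recommended content nudges that taste toward what was shown.

```python
# Toy sketch of the feedback loop: a recommender assumes `taste` is
# fixed, but exposure to slightly-more-extreme content shifts it.
# All constants here are illustrative assumptions, not from the article.

taste = 0.0   # user's position on a -1..1 "moderate to extreme" axis
DRIFT = 0.1   # assumed strength of the exposure effect

for step in range(50):
    # The system serves slightly-more-extreme content because it
    # engages better, treating the user's taste as unchanging.
    shown = min(1.0, taste + 0.2)
    # In reality, exposure moves the taste toward the shown content.
    taste += DRIFT * (shown - taste)

print(round(taste, 2))  # taste has drifted far from its starting point
```

Even with a small per-step effect, the loop compounds: the "fixed tastes" assumption turns a static recommender into a mechanism that steadily moves the user toward the extreme end of the axis.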
7. Amazon just debuted "Custom Interfaces," a new API for its Alexa Gadgets Toolkit that allows developers to create new interactions between Alexa-enabled devices and their own internet-connected products. The main idea is to offer third-party manufacturers a way to create more fun and quirky experiences using Alexa. For instance, Amazon envisions Alexa working in tandem with a smart mini keyboard to help teach you how to play the piano. However, it remains to be seen how exactly developers will use the new software in their future product designs. - ENGADGET
This story first appeared in our daily Inside Amazon newsletter.
8. Researchers at IBM Research and Oxford created a weird form of carbon - a ring-shaped cyclocarbon made of 18 atoms, known as 18-carbon - that could one day form the basis of an artificial brain. They created the carbon in solid form via manipulation at the atomic level. Leo Gross, an IBM Research staff member, says future devices built with 18-carbon could be quite efficient, such as tiny carbon neurons forming neural networks in a true artificial brain. - POPULAR MECHANICS
9. The U.S. Department of Defense is seeking machine learning experts to create computer vision algorithms that will speed up the analysis of aerial and satellite imagery. The Defense Innovation Unit is hosting the xView2 Challenge, which aims to produce results that help automate damage assessments after disasters. - DEFENSE.GOV
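The kind of automation xView2 targets can be sketched, very loosely, as change detection between pre- and post-disaster images. The snippet below is a toy illustration using simple pixel differencing; the actual challenge uses labeled satellite imagery and learned computer vision models, and the arrays and threshold here are made up.

```python
import numpy as np

# Toy "damage assessment": flag pixels that changed a lot between
# a pre-disaster and a post-disaster image. Purely illustrative.
pre = np.zeros((4, 4))
post = pre.copy()
post[1:3, 1:3] = 1.0  # pretend a 2x2 building footprint was destroyed

damage_mask = np.abs(post - pre) > 0.5  # assumed change threshold
print(int(damage_mask.sum()))  # number of flagged pixels -> 4
```

A real entry would replace the differencing step with a segmentation model trained on the challenge's labeled before/after imagery.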
10. A new AI can, for better or worse, mimic the voice of Canadian psychologist Jordan Peterson. The neural network NotJordanPeterson, developed by AI researcher Chris Vigorito, lets people type in any words and then converts them into Peterson's voice. Vigorito told The Next Web that he used a “combination of two neural network models that were trained using audio data of Dr. Peterson speaking, along with the transcript of his speech.” For some extra fun, there's also a YouTube video of a Peterson AI model trying to sing 'Lose Yourself' by Eminem. The first comment reads: "The implications of this kind of technology are....very scary." - TNW
Written and curated by Beth Duckett in Orange County. Beth is a former reporter for The Arizona Republic who has written for USA Today, Get Out magazine and other publications. Follow her tweets about breaking news and other topics in southern California here.
Editor: Kim Lyons (Pittsburgh-based journalist and managing editor at Inside).