Happy Sunday and welcome to the weekend edition of InsideAI. I'm Rob May, CEO of Talla, and an active AI angel investor (52 companies and counting). I also recently interviewed Steven Peltzman of Forrester on our AI at Work podcast, so be sure to check that out if you are interested in his take on AI in the Age of the Customer.
Below are the most popular links of the week:
Evan Patterson, a fellow at IBM Science for Social Good, led a team that is using machine learning to analyze computer code. Machine learning is often applied to images, video, audio, and natural language text, but this new system focuses on data science software. Patterson's technique captures the functions in the code — data transformation, analysis, modeling, and interpretation operations — and maps them to a data science ontology, producing a semantic flow graph that represents the program. — PHYS.ORG
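To make the idea concrete, here is a minimal sketch of the extract-and-map step using only Python's standard `ast` module. The miniature ontology below is hypothetical — the real system's data science ontology is far richer, and this linear "flow" stands in for an actual semantic flow graph:

```python
import ast

# Hypothetical miniature "data science ontology": maps concrete library
# calls to abstract concepts (a stand-in for the real, much larger ontology).
ONTOLOGY = {
    "read_csv": "data ingestion",
    "dropna": "data transformation",
    "fit": "model training",
    "predict": "model interpretation",
}

def semantic_flow(source: str) -> list:
    """Walk the AST, collect called function names, and map each one
    to its ontology concept, yielding a linear semantic flow."""
    flow = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
            if name in ONTOLOGY:
                flow.append((name, ONTOLOGY[name]))
    return flow

script = """
df = pd.read_csv("data.csv")
df = df.dropna()
model.fit(df)
model.predict(df)
"""
print(semantic_flow(script))
```

Running this prints the script's operations tagged with their abstract roles, which is the kind of semantic summary the full system builds as a graph.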
Two startups are using flash memory for AI edge processing. Irvine, Calif.–based Syntiant and Austin, Texas–based Mythic are embedding flash memory to reduce the energy needed for deep-learning computation. Syntiant uses analog circuitry for smaller applications, such as identifying text and speakers. Mythic combines analog circuitry with programmable digital circuitry for applications that need more complex networks, such as high-resolution video in drones or smartphones. — IEEE SPECTRUM (Disclosure: I'm an investor in Mythic)
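The energy savings come from computing in the memory itself: a flash cell stores a weight as a conductance, driving it with an input voltage produces a proportional current, and currents on a shared wire sum by Kirchhoff's law — so one analog read performs a multiply-accumulate. Here is a toy simulation of that idea; the quantization and noise figures are hypothetical placeholders, not the actual Syntiant or Mythic designs:

```python
import random

random.seed(0)

def analog_dot(weights, inputs, levels=256, noise_sd=1e-3):
    """Simulate an analog in-flash dot product: weights become quantized,
    slightly noisy conductances, and the per-cell currents sum on a wire."""
    scale = max(abs(w) for w in weights) or 1.0
    half = levels // 2
    total = 0.0
    for w, v in zip(weights, inputs):
        g = round(w / scale * half) / half * scale  # quantized conductance
        g += random.gauss(0.0, noise_sd)            # device-to-device variation
        total += g * v                              # current I = G * V, summed
    return total

w = [0.5, -1.2, 0.8, 2.0]
x = [1.0, 0.5, -1.5, 0.25]
exact = sum(wi * xi for wi, xi in zip(w, x))
print(exact, analog_dot(w, x))
```

The analog result tracks the exact digital dot product to within the quantization and noise error, which is why these chips suit inference (tolerant of small errors) rather than training.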
AI technology is being used to level the playing field between small law firms and big firms with larger staffs. AI tools like Casetext for research, LawGeex to review contracts, Veritone to assess video and audio content for compliance purposes, Everlaw to evaluate documents, and Gavelytics for judicial analytics are helping to automate tasks that typically demand a lot of manpower and time. These algorithms and services are also becoming more affordable as they become more ubiquitous. — ABOVE THE LAW
Google has announced more details about its new AI developer tools, which are now available in beta. The tools — AutoML Vision, AutoML Natural Language, and AutoML Translation — were first unveiled in July and are part of Google's Cloud AutoML suite. According to the company, the new products offer customers more specialized machine-learning models. AutoML Vision allows developers to upload their own image datasets and create custom models, AutoML Natural Language helps developers build text analysis into their applications, and AutoML Translation leverages Google's translation functions. — EWEEK
A group of students from Fast.ai, an organization that offers free machine-learning courses online, created an AI algorithm that outperforms code written by Google researchers. Performance is measured by DAWNBench, a tool that tracks the speed of a deep-learning algorithm per dollar of computing power. Google previously held the top ranking, but Fast.ai trained an algorithm using the ImageNet database and AWS (Amazon Web Services) tools in just 18 minutes at a cost of about $40 — 40 percent more efficient than Google's effort. "State-of-the-art results are not the exclusive domain of big companies," said Fast.ai co-founder Jeremy Howard. — TECHNOLOGY REVIEW
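The DAWNBench-style "speed per dollar" framing boils down to simple arithmetic: wall-clock training time multiplied by the hourly price of the hardware. A quick sketch — the instance count and hourly rate here are hypothetical placeholders, not the actual figures behind Fast.ai's run:

```python
def training_cost(minutes, hourly_rate_per_instance, n_instances):
    """Dollar cost of a training run billed by the instance-hour."""
    return minutes / 60.0 * hourly_rate_per_instance * n_instances

# e.g. an 18-minute run on 16 cloud instances at a hypothetical $8.00/hr rate
cost = training_cost(18, 8.00, 16)
print(round(cost, 2))  # 38.4
```

Under a metric like this, a cheaper-but-slightly-slower run on commodity cloud instances can beat a faster run on pricier hardware, which is exactly the lever the Fast.ai students pulled.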