
Inside AI (Jan 19th, 2020)

Happy Sunday and welcome to Inside AI.  I'm Rob May, a Partner at PJC, specializing in seed-stage investments in AI, robotics, and neurotechnology.

I'm working on a series of posts about automation at work, so if you're a tech executive who has deployed a fair amount of AI automation, or is in the process of doing so, please reach out; I'd love to chat about it.

Let's get started with the most popular articles this week from our daily newsletters.

DeepMind researchers have released a paper revealing links between distributional reinforcement learning, a type of machine learning, and the way human brains release dopamine. In research that studied dopamine neurons in mice, DeepMind scientist Will Dabney and his colleagues found evidence suggesting that our brains use distributional reward predictions, in which individual dopamine neurons vary in their levels of response, to strengthen their learning algorithms. As Dabney explains in New Scientist, scientists previously thought that dopamine neurons would respond identically to rewards, "kind of like a choir but where everyone's singing the exact same note." Instead, it's "more like a choir all singing different notes, harmonizing together," he said. Distributional reinforcement learning has been used by AIs to play games like StarCraft II and Go. In a tweet posted Wednesday, Dabney thanked his colleagues and noted that the work all started three years ago, with co-authored research on distributional reinforcement learning. - DEEPMIND BLOG
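
To make the "different notes" idea concrete, here is a minimal sketch (my own toy example, not DeepMind's code) of distributional reward prediction: instead of one unit learning a single average reward, several units each weight positive and negative prediction errors asymmetrically, so they settle on different parts of the reward distribution.

```python
# Toy sketch of distributional reward prediction (not DeepMind's code).
# Each "unit" has its own optimism level tau; asymmetric error weighting
# makes it track roughly the tau-quantile of the reward distribution.
import random

NUM_UNITS = 5
LEARNING_RATE = 0.05
taus = [(i + 0.5) / NUM_UNITS for i in range(NUM_UNITS)]  # 0.1, 0.3, ..., 0.9
values = [0.0] * NUM_UNITS  # each unit's current reward prediction

def sample_reward():
    # Hypothetical variable reward, drawn uniformly between 0 and 10.
    return random.uniform(0.0, 10.0)

for _ in range(20000):
    r = sample_reward()
    for i, tau in enumerate(taus):
        error = r - values[i]
        # Optimistic units (high tau) weight positive errors more heavily;
        # pessimistic units (low tau) weight negative errors more heavily.
        weight = tau if error > 0 else (1.0 - tau)
        values[i] += LEARNING_RATE * weight * (1.0 if error > 0 else -1.0)

# The units spread out across the reward distribution ("different notes"),
# rather than all converging on the same single estimate ("the same note").
print([round(v, 2) for v in values])
```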

Computer models have helped design evolved frog embryos, which are being dubbed the first-ever "living machines." Researchers took stem cells from African clawed frogs and created small living tissue "blobs" whose bodies were designed by special algorithms. Joshua Bongard, a computer scientist and robotics expert at the University of Vermont, said the moving, autonomous organisms are "neither a traditional robot nor a known species of animal," but rather a new type of artifact he called "a living, programmable organism." The algorithms were run under constraints, such as maximum muscle power, and produced successive generations of the so-called xenobots, which are "almost like a wind-up toy," said Sam Kriegman, a doctoral candidate studying evolutionary robotics in the University of Vermont's Department of Computer Science. A study about the findings was published this week in the Proceedings of the National Academy of Sciences. - LIVE SCIENCE
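
The actual pipeline is far more involved, but a toy evolutionary loop conveys the general idea: candidate body plans are mutated, scored by a simulation, and selected across generations under a constraint (here, a hypothetical cap on muscle cells). The grid size, cap, and scoring rule below are all stand-ins, not the authors' setup.

```python
# Toy sketch of evolutionary body-plan design (not the authors' pipeline).
import random

GRID = 4 * 4          # flattened 4x4 body plan of passive (0) / muscle (1) cells
MAX_MUSCLE = 6        # hypothetical constraint on how many muscle cells a design may use
POP, GENERATIONS = 30, 50

def simulate(plan):
    # Stand-in for a physics simulation: reward muscle cells next to passive
    # cells (something to push against), and penalize constraint violations.
    if sum(plan) > MAX_MUSCLE:
        return -1.0
    return sum(1.0 for i, cell in enumerate(plan)
               if cell and i + 1 < GRID and not plan[i + 1])

def mutate(plan):
    # Flip one randomly chosen cell between passive and muscle.
    child = plan[:]
    j = random.randrange(GRID)
    child[j] = 1 - child[j]
    return child

population = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=simulate, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = max(population, key=simulate)
print("best score:", simulate(best), "muscle cells:", sum(best))
```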

Google released new information about its "precipitation nowcasting" research, which uses machine learning to forecast rainfall up to six hours in advance and outperforms other techniques. In its AI Blog, the company said it can generate short-term rainfall forecasts on a "nearly instantaneous" basis. The model is in the early stages and hasn't been deployed commercially yet, though Google says it could have many applications, from improving crisis response to reducing deaths and property damage from extreme weather. As The Verge points out, Google's approach is faster than two existing forecasting models and much less "computationally intensive." Researchers trained the model on NOAA radar data collected between 2017 and 2019 across the contiguous U.S. It outperformed the existing methods that used the same data until forecasts stretched more than six hours ahead. - THE VERGE
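
One way to picture this kind of nowcasting is as an image-to-image problem: a stack of recent radar frames goes in, and a map of near-term precipitation comes out. The sketch below is an illustrative toy under that assumption, not Google's model; the frame count, grid size, and layer choices are all made up.

```python
# Toy sketch of nowcasting framed as image-to-image prediction (not Google's model).
import torch
import torch.nn as nn

PAST_FRAMES, H, W = 4, 64, 64  # hypothetical: four recent radar frames on a 64x64 grid

model = nn.Sequential(
    nn.Conv2d(PAST_FRAMES, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),  # one output channel: predicted rainfall map
)

radar_history = torch.rand(1, PAST_FRAMES, H, W)  # stand-in for real radar data
predicted_rain = model(radar_history)             # shape: (1, 1, 64, 64)
print(predicted_rain.shape)

# Training (not shown) would compare predicted_rain against the radar frame
# actually observed at the target time, e.g. with a pixel-wise loss.
```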

Some experts are saying that AI is entering a new "cooling off" phase after the hype of the 2010s, the BBC reports. Noel Sharkey, a professor of AI and robotics at Sheffield University, described this new phase as an "AI autumn," which is not as severe as an AI winter but points to a potential plateau, particularly in the field of artificial general intelligence. AI pioneer and Turing Award winner Yoshua Bengio told the BBC that the abilities of AI were somewhat overhyped during the last decade by certain companies such as DeepMind, which was acquired by Google in 2014. In addition, much of the publicity and buildup surrounding artificial general intelligence in the early 2010s appears to have died down in recent years. "By the end of the decade there was a growing realization that current techniques can only carry us so far," said AI researcher Gary Marcus. Still, while there is a long way to go before machines are truly intelligent, breakthroughs are likely to continue, even if they are more practical in nature. "I hope we'll see a more measured, realistic view of AI's capability, rather than the hype we've seen so far," said Catherine Breslin, a former Amazon AI researcher. - BBC

There has been more internet chatter recently about whether or not we are heading into a new AI winter.  From a research perspective, perhaps we are.  The innovations coming out of AI research groups seem to be slowing down and are more marginal than breakthrough, but that says little about applied AI.  I think that from an applied AI perspective, there is still a lot to do, and that work will keep AI moving forward in ways that avoid an AI winter.  In fact, I think there are three major areas where applied AI still has a long way to go.

1.  AI Engineering - There are a number of practical engineering challenges that still need to be solved to make AI work in many types of production systems.  As models get bigger, there are issues with how to run them in memory, particularly on edge devices (see the sketch after this list).  There are challenges with how long it takes to train the largest models coming out of research labs, such that they can't really be used in mass-market production applications.  And there are challenges with managing models in technical workflows.

2.  New Tasks From New Data Sets - There are lots of areas where AI could be applied, but we don't yet have the data sets to train the models.  There are years' worth of work ahead to collect and annotate data to build new models for new applications.  This will keep pushing AI forward for quite some time.

3.  Workflows and Culture - From an operational perspective, most AI implementations are point solutions, and few companies have made the operational and cultural changes needed to really embrace AI.  Organizations need to think about how one AI-driven workflow step ripples through the value chain and requires further changes.  They need to think about how to train workers to annotate data, understand probabilistic outputs, and correct bad model outputs when they happen.
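
As one concrete illustration of the first point (an example of the kind of engineering work involved, not a prescription), post-training quantization is a common way to shrink a trained model so it fits in memory on an edge device. The sketch below uses PyTorch's dynamic quantization on a hypothetical stand-in model.

```python
# Illustration of one "AI engineering" task: shrinking a model for edge deployment
# with post-training dynamic quantization. The model here is a hypothetical stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(          # stand-in for a trained production model
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Convert the Linear layers' weights from 32-bit floats to 8-bit integers,
# roughly a 4x reduction in weight memory, usually at a small accuracy cost
# that has to be measured before deploying.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.rand(1, 512)
print(model(x).shape, quantized(x).shape)  # both still produce (1, 10) outputs
```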

I expect a possible "AI winter" meme to come out of the research community in the coming years as research gains become more marginal, but ignore it.  AI is going to continue to move forward because of all the practical work still to be done.

Thanks for reading.

@robmay
