Happy Sunday and welcome to InsideAI. I'm Rob May, a Partner at PJC, specializing in seed stage investments in AI, Robotics, and Neurotechnology.
I'm working on a post series on automation at work, so if you have a job as a tech executive where you have deployed a fair amount of AI automation, or are in the process of doing so, please reach out as I'd love to chat about it.
Let's get started with the most popular articles this week from our daily newsletters.
DeepMind researchers have released a paper revealing links between distributional reinforcement learning, a type of machine learning, and the way human brains release dopamine. In research that studied dopamine neurons in mice, DeepMind scientist Will Dabney and his colleagues found evidence suggesting that our brains use distributional reward predictions - where individual dopamine neurons vary in their levels of response - to strengthen their learning algorithms. As Dabney explains in New Scientist, scientists previously thought that dopamine neurons would respond identically to rewards, "kind of like a choir but where everyone's singing the exact same note," he said. Instead, it's "more like a choir all singing different notes, harmonizing together," Dabney said. Distributional reinforcement learning has been used by AIs to play games like Starcraft II and Go. In a tweet posted Wednesday, Dabney thanked his colleagues and noted that the work all started three years ago with co-authored research on distributional reinforcement learning. - DEEPMIND BLOG
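To give a flavor of the "choir singing different notes" idea, here is a minimal, hypothetical sketch (not DeepMind's actual model): a population of value predictors, each with a different asymmetric learning rate, ends up settling on different levels of the same reward stream, so the population as a whole encodes the reward distribution rather than a single average.

```python
import random

# Illustrative sketch only: "dopamine-like" predictors that weight positive
# and negative prediction errors asymmetrically. Optimistic units (high tau)
# settle near high rewards, pessimistic units (low tau) near low rewards.

def train_distributional_predictors(rewards, asymmetries, steps=20000, lr=0.01, seed=0):
    rng = random.Random(seed)
    values = [0.0] * len(asymmetries)
    for _ in range(steps):
        r = rng.choice(rewards)  # sample a reward from the environment
        for i, tau in enumerate(asymmetries):
            err = r - values[i]
            # tau in (0, 1): weight on positive vs. negative prediction errors
            values[i] += lr * (tau * max(err, 0.0) + (1 - tau) * min(err, 0.0))
    return values

# Bimodal reward: half the time 1.0, half the time 10.0.
rewards = [1.0, 10.0]
asymmetries = [0.1, 0.5, 0.9]  # pessimistic, balanced, optimistic
values = train_distributional_predictors(rewards, asymmetries)
```

Run on this toy bimodal reward, the pessimistic unit converges near the low reward, the balanced unit near the mean, and the optimistic unit near the high reward - three different "notes" from one reward signal.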
Computer models have helped design evolved frog embryos, which are being dubbed the first-ever "living machine." Researchers took stem cells from African clawed frogs and created small living tissue "blobs" whose bodies were designed using special algorithms. Joshua Bongard, a computer scientist and robotics expert at the University of Vermont, said the moving, autonomous organisms are "neither a traditional robot nor a known species of animal," but actually a new type of artifact he called "a living, programmable organism." The algorithms worked under constraints, such as maximum muscle power, and were able to produce generations of the so-called xenobots, which are "almost like a wind-up toy," said Sam Kriegman, a doctoral candidate studying evolutionary robotics in the University of Vermont's Department of Computer Science. A study about the findings was published this week in the Proceedings of the National Academy of Sciences. - LIVE SCIENCE
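The generate-score-select loop behind this kind of evolved design can be sketched in a few lines. This is a hypothetical toy, not the xenobot pipeline: the real work evolves 3D cell arrangements scored in a physics simulator, while here a "design" is just a list of numbers and the simulator is a stand-in fitness function with a muscle-power cap.

```python
import random

def simulate(design, max_power=1.0):
    # Stand-in for a physics simulation: reward strong "actuators" that
    # respect the total muscle-power constraint.
    power = sum(abs(x) for x in design)
    if power > max_power:  # constraint violated: design scores zero
        return 0.0
    return power           # toy proxy for distance traveled

def evolve(pop_size=20, genes=5, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-0.5, 0.5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)      # score and rank designs
        survivors = pop[: pop_size // 2]          # keep the top half
        # Refill the population with mutated copies of survivors.
        pop = survivors + [
            [g + rng.gauss(0, 0.05) for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=simulate)

best = evolve()
```

Over generations, the best design climbs toward the power ceiling without crossing it - the same shape of search, minus the biology.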
Google released new information about its "precipitation nowcasting," which uses machine learning to forecast rainfall up to six hours in advance, outperforming other techniques. In Google's AI Blog, the company said it can generate the short-term rainfall forecasts on a "nearly instantaneous" basis. The model is in the early stages and hasn't been integrated commercially yet, though Google says it will have many applications, from boosting crisis response to reducing deaths and property damage due to extreme weather. As The Verge points out, Google's approach is faster than two existing models that forecast weather, and much less "computationally intensive." Researchers trained the model on NOAA radar data collected between 2017 and 2019 in the contiguous U.S. It outperformed other existing methods that used the same data, until it had to make forecasts more than six hours in advance. - THE VERGE
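One way to picture training on radar data is to frame nowcasting as image-to-image prediction: the input is a stack of recent radar frames and the target is a rain/no-rain mask some time ahead. The sketch below shows only that data framing; the shapes, window sizes, and threshold are illustrative assumptions, not Google's.

```python
import numpy as np

def make_training_pairs(radar_frames, history=3, horizon=2, rain_threshold=0.5):
    """Slice a (time, H, W) radar sequence into (input stack, target mask) pairs.

    Each input is `history` consecutive frames; each target is the frame
    `horizon` steps later, binarized at a rain-rate threshold.
    """
    inputs, targets = [], []
    for t in range(history, len(radar_frames) - horizon):
        inputs.append(np.stack(radar_frames[t - history:t]))        # (history, H, W)
        targets.append(radar_frames[t + horizon] > rain_threshold)  # (H, W) boolean
    return np.array(inputs), np.array(targets)

# Toy sequence of 10 random 8x8 "radar frames" standing in for real data.
rng = np.random.default_rng(0)
frames = rng.random((10, 8, 8))
X, y = make_training_pairs(frames)
```

Any image-to-image model can then be trained on these (X, y) pairs; the point is that the forecast becomes a fast per-frame prediction rather than a heavyweight physical simulation.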
Some experts are saying that AI is entering a new "cooling off" phase after the period of hype during the 2010s, the BBC reports. Noel Sharkey, a professor of AI and robotics at Sheffield University, described this new phase as an "AI autumn," which is not as severe as an AI winter but points to a potential plateau, particularly in the field of artificial general intelligence. AI pioneer and Turing Award winner Yoshua Bengio told the BBC that the abilities of AI were somewhat overhyped during the last decade by certain companies such as DeepMind, which was acquired by Google in 2014. In addition, much of the publicity and buildup surrounding artificial general intelligence in the early 2010s appears to have died down in recent years. "By the end of the decade there was a growing realization that current techniques can only carry us so far," said AI researcher Gary Marcus. However, while there is still a ways to go before machines are truly intelligent, breakthroughs are likely to occur, even if they are more practical in nature. "I hope we'll see a more measured, realistic view of AI's capability, rather than the hype we've seen so far," said former Amazon AI researcher Catherine Breslin. - BBC