Welcome to the weekend edition of InsideAI! I'm Rob May, CEO of Talla. Please check us out if you are looking for AI/ML powered automation to increase employee productivity. I also host the AI at Work podcast. The latest episode with Box's Chief Product Officer Jeetu Patel is fascinating.
The most clicked stories in our daily emails this week were:
Microsoft unveiled the Microsoft Azure Kinect at the Mobile World Congress (MWC) in Barcelona on Sunday. The AI edge device features two cameras and a seven-microphone array and is geared toward developers of enterprise applications. The company released an earlier version of the Kinect as an accessory for Xbox One and Xbox 360 consoles but discontinued its production in 2017 after selling 35 million units. The product is priced at $399 and is available for pre-order. — CNBC
University of California at Berkeley researchers developed a system that enables an agent to both generalize between tasks and to hone its learning ability over time with exposure to new tasks. The system combines two techniques: meta learning, the transference of training to perform new tasks, and online learning, which enables a network to evaluate and improve its performance on new tasks. The research is published in the arXiv online repository. — ZDNET
The Animal-AI Olympics is a new contest meant to evaluate the progress of AI systems toward artificial general intelligence (AGI). The event kicks off in June and runs through the end of the year. The contest rules and entry information will be released in April, and the goal is to benchmark AI systems against animal species using established tasks for animal cognition. The project is a collaboration between Prague research institution GoodAI and the University of Cambridge's Leverhulme Center for the Future of Intelligence (CFI). — IEEE SPECTRUM
AI contributed $2 trillion to the global GDP last year, according to a PwC report. The report also estimates that the AI industry could contribute $15.7 trillion to the world economy by the year 2030. The greatest economic gains by 2030, according to PwC, will be a 26 percent boost to GDP in China and a 14.5 percent boost in North America, together making up $10.7 trillion and the majority of the global economic impact. — SEEKING ALPHA
French AI startup GreenWaves is developing custom, low-power chips for machine learning based on open source designs. The specialized chips use parallelism and multi-core architecture to perform machine learning functions on edge devices running on batteries with limited power. GreenWaves just raised $7.9 million in Series A funding from Huami, Soitec, and other investors that the company says will be used to bring its first product, GAP8, to market. — ZDNET
-- Commentary --
The frustrating thing about reading AI news is that it covers a lot of the stuff that doesn't matter. As someone who runs an AI company, invests in AI companies, and writes a newsletter about AI (and thus reads a lot of AI news), I thought it would be good to highlight 5 key ideas that I see mostly missing from the frameworks people use to think about AI. I do talk to a lot of smart people who know these things - in fact, some of the ideas came from executives I've spoken to about AI adoption - but most people, I believe, are missing these key pieces.
1. AI innovation is more limited than you think. The latest wave of innovation is largely improvements in neural network technology: novel topologies, new training methods, and better hyperparameter tuning. AI fields like symbolic logic, evolutionary algorithms, and others have hardly been touched, and even for neural nets much of the work remains research-grade and difficult to translate into applications. That's why we aren't about to go into an AI winter - because on the big scale of things, only a small amount of AI research has led to such massive gains and improvements. There is much, much more to come.
2. AI hardware continues to be a thing people ask me about. Anyone who reads this knows I'm a fan, but I regularly get questions like "hasn't NVIDIA won?" and "what would we use a new chip for?" But let me give you an interesting fact that an executive at IBM told me just this week - $300 billion is invested in semiconductor research annually. That is a lot of money. And most of it has been invested in moving chips forward along the curve of Moore's Law. That is coming to an end. So, where does that $300B go? It won't cease to be spent. It will go into new chip designs - optical, analog, neuromorphic, etc. That will lead to all kinds of new innovation that we can't fully understand right now. But it will take time. Chips may take a decade from initial research to production in a device somewhere, but it's coming. And it will be a massive revolution.
3. AI is not (yet) electricity. I know this is a common framework people use to think about AI. I believe it is wrong. I've actually read a bit about the history of electricity in the past year to figure out if I see parallels in AI adoption. I don't. Electricity is much more standardized and fungible. AI is not. Until we get to the point where a unit of intelligence is a fungible thing (if we ever do) then I think the electricity framework fails for AI.
4. There is a coming phase shift in cultural/workflow adoption, probably 2-3 years out. Here is the example I like to give... in 1996, most companies had a process for making printed brochures. When the web came along, people sometimes designed a web page by following their old process for making printed brochures, and then giving the finished product to a web developer to turn into a web page. That was the wrong process. AI is in that same phase today. It is being bolted on to existing workflows, which often limits its functionality and effectiveness, much the way "turn this into HTML" was bolted onto an old graphic design process. As tools and workers of a new generation start to use AI more natively, this will hit a tipping point. The beginning of the phase shift - when some companies really start to outperform others because of AI adoption - is probably still 2-3 years away, and the tipping point, when the world flips over to the new paradigm, is probably 5-7 years out.
5. And finally, the most frustrating thing of all is that we are focused on the wrong problems. We are worried about killer general AI, which is a long way off, and we are worried about autonomous vehicles deciding whether to kill a pedestrian or a passenger. I believe these are both problems that shouldn't be given a lot of thought right now. There are more immediate issues, like the interaction between data and UI and advertising, and what Google and Facebook are doing with your data, and to your mind. I worry about tools like Waze, that may start to use AI to optimize for broader goals. For example, Waze knows I try to get to work by 8:30am, and you do too, and today I'm running a bit late and you are running a bit early. Would Waze send you on a route that is actually 1 minute slower, in order to help me get to work on time? Would it do that because it's "fair" to make sure the most people get to work on time? Or is that punishing you for being more responsible than me and leaving on time? Or would it send me a faster way because I am more valuable and click on more ads in the app? I worry more about the bias and optimization challenges that creep into our systems and algorithms as they run more of our lives than I ever do about killer AI.
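To make the Waze thought experiment concrete, here is a minimal toy sketch of how the choice of objective function alone decides who gets the fast route. Everything here is hypothetical - the driver data, the objectives, and the greedy assignment are made up for illustration and have nothing to do with how Waze actually works:

```python
# Toy example: the same routing code, two different objectives.
# All names and numbers are illustrative assumptions.

def assign_routes(drivers, objective):
    """Greedily give the single fast route to whoever scores
    highest under the chosen objective; everyone else goes slow."""
    ranked = sorted(drivers, key=objective, reverse=True)
    return {d["name"]: ("fast" if i == 0 else "slow")
            for i, d in enumerate(ranked)}

drivers = [
    {"name": "me",  "minutes_late": 5,  "ad_clicks_per_week": 2},
    {"name": "you", "minutes_late": -3, "ad_clicks_per_week": 20},
]

# "Fairness" objective: prioritize whoever is running latest.
by_fairness = assign_routes(drivers, lambda d: d["minutes_late"])

# Revenue objective: prioritize whoever clicks the most ads.
by_revenue = assign_routes(drivers, lambda d: d["ad_clicks_per_week"])

print(by_fairness)  # the late driver gets the fast route
print(by_revenue)   # the heavy ad-clicker gets the fast route
```

The point of the sketch is that neither outcome is a bug: each is the correct output for its objective, which is exactly why the choice of objective - not the code - is where the bias lives.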
That's all for this week - my general thoughts on what people aren't talking enough about but should be, in AI. Thanks for reading, and enjoy the rest of your weekend.