
Inside AI (Dec 15th, 2019)

Happy Sunday and welcome to Inside AI! I'm Rob May, a Partner at PJC. The purpose of this newsletter is to highlight things people may not be thinking about with respect to AI, and to discuss trends happening in the space from an applied and/or practical perspective. In the year-end review, coming up in two weeks, I'll talk about my AI investing theses for 2020.

Also, note that in January we will re-launch the AI at Work podcast. I'm trying something new and releasing the entire season at once, Netflix-style. This season features YC companies with an AI bent.

Now let's get started with the top articles of the week from the daily Inside AI:

In its latest annual report, the research institute AI Now argues that emotion recognition should be banned when it impacts people's lives. While supporters claim the technology can infer people's inner emotions from things like tone of voice or micro-expressions, recent studies show there is "no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks," explains institute co-founder and professor Kate Crawford. The NYU-based institute is calling for laws to limit how emotion-detecting algorithms are used, including an outright ban on the software when it affects people's decision-making and opportunities. Part of the problem is that some AIs are trained to recognize a limited set of emotions - maybe eight at most - while emotions can be expressed in a wide variety of ways (some people scowl when they are confused rather than angry, for example), so studies show the results tend to be inaccurate. - MIT TECH REVIEW

The position of artificial intelligence specialist topped LinkedIn's annual Emerging Jobs list, released on Tuesday. Hiring for the role - which typically commands a salary of around $140,000 or more - has grown by nearly 75 percent over the past four years, according to LinkedIn. “AI has infiltrated every industry, and right now the demand for people skilled in AI is outpacing the supply for it,” Guy Berger, principal economist at LinkedIn, told MarketWatch. Rounding out the top five were robotics engineer (40 percent hiring growth), data scientist (37 percent), full-stack engineer (35 percent), and site reliability engineer (34 percent). Machine learning engineer also topped Indeed's annual list of the 25 best jobs of 2019, which came out in April. - MARKET WATCH

In an interview with IEEE Spectrum, deep learning pioneer Yoshua Bengio spoke about the future and limitations of AI, including what we should build next. Bengio, a co-recipient of the 2018 Turing Award, answered questions in categories like "deep learning and its discontents" and "physics, language, and common sense." Today, he is scheduled to speak on similar future-oriented subjects at NeurIPS in a talk titled “From System 1 Deep Learning to System 2 Deep Learning.” - IEEE SPECTRUM

The number of AI-related papers published rose by 300 percent between 1998 and 2018, according to the latest AI Index report. The report's creators also released a tool for searching AI research papers and another to help readers sift through country-level data on AI research and investment. The report, which was created by researchers at Harvard, Stanford, OpenAI, and other organizations, had generally positive things to say about current AI trends, which we've highlighted below:

  • The U.S. is the global leader in AI by most metrics, with slightly below $12 billion in private AI investment compared to China's $6.8 billion.
  • The time required to train a machine vision algorithm on ImageNet fell from about three hours in October 2017 to 88 seconds in July 2019.
  • NeurIPS anticipates hosting 13,500 attendees this year, which is up 800 percent from 2012.
  • More than 21 percent of computer science PhDs choose to specialize in AI.
  • Nearly 10 percent of global private AI investment went into autonomous vehicles, amounting to $7.7 billion. - THE VERGE

The Stanford AI Index report released this week is a fascinating read. But one of the things that jumped out at me is that when computers learn to beat humans at various games, they do it by playing many lifetimes' worth of games in a very short period of time. They have to play many more games than humans do to achieve the same level of competency. Right now, we are primarily focused on making computers as good as humans at various tasks, but they do it so inefficiently that they may not be "learning" the way we humans do.

An interesting metric to track for AI would be the 'human learning equivalent.' Instead of celebrating that a certain ML model can achieve human performance on a task after training on the equivalent of thousands of human-years of experience, it would be better to hold the overall performance benchmark static and push the training efficiency forward instead. For example, if humans score 95% on an image classification task and we get a machine learning model that performs just as well, then instead of pushing the model to 96% performance, we should focus on getting it to the same 95% level with less data and faster training cycles. While there is some work being done on these kinds of things, it doesn't seem to get nearly as much interest in the AI community as it should.
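To make the idea concrete, here is a minimal Python sketch of what a 'human learning equivalent' score could look like: hold the performance target fixed and compare how much training experience each learner needs to reach it. Everything here is hypothetical - the function names, the learning curves, and the numbers are made up for illustration, not drawn from any real benchmark.

```python
# Hypothetical sketch of a "human learning equivalent" metric:
# fix a performance target and compare training experience needed.

def samples_to_reach(learning_curve, target_accuracy):
    """Return the number of training samples at which a learning curve
    (a list of (num_samples, accuracy) pairs, sorted by num_samples)
    first reaches the target accuracy, or None if it never does."""
    for num_samples, accuracy in learning_curve:
        if accuracy >= target_accuracy:
            return num_samples
    return None

def human_learning_equivalent(model_curve, human_curve, target_accuracy):
    """Ratio of model training samples to human 'training' samples
    needed to hit the same fixed performance target. A value of 1.0
    would mean the model is as sample-efficient as a human; larger
    values quantify how much extra experience the model needs."""
    model_n = samples_to_reach(model_curve, target_accuracy)
    human_n = samples_to_reach(human_curve, target_accuracy)
    if model_n is None or human_n is None:
        return None  # one of the learners never hits the target
    return model_n / human_n

# Made-up curves: humans hit 95% on an image task after ~5,000 labeled
# examples; the model needs ~2,000,000 to reach the same accuracy.
human_curve = [(1_000, 0.80), (5_000, 0.95)]
model_curve = [(100_000, 0.85), (500_000, 0.92), (2_000_000, 0.95)]

print(human_learning_equivalent(model_curve, human_curve, 0.95))  # 400.0
```

Under this framing, progress would mean driving that ratio toward 1.0 at a fixed performance level, rather than squeezing out another point of accuracy.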

Focusing on how efficiently machines learn would also help us understand why two humans sometimes learn the same thing a bit differently, or a bit more quickly or slowly, than each other. Is that because learning is somewhat path dependent, and if so, will the same be true of machines? Are learning differences between two people due to pure neurological processing power? Starting to tease that apart and understand it for both humans and machines will be important as we try to train machines for an increasing variety of tasks.

Thanks for reading.

@robmay
