Happy Sunday and welcome to the weekend edition of InsideAI. I'm Rob May, CEO at Talla. Can I just make one random comment in this week's intro that I don't usually make? Please, please stop naming every AI product like a human person. It's driving me crazy, and I think long term, you are destroying the baby-naming process for every pregnant couple. "Oh, we should call him Davis." "No, that's a security system. How about Andrew?" "Uh, no, that's my printer's name." This won't end well. Let's go back to non-human product names while we still can.
We are actively recording the next season of AI at Work, so go check it out if you haven't listened before. Season 2 will focus on AI companies that went through YC, so it should be tons of fun.
Here are the most popular articles from the newsletter in the past week:
Ford unveiled an AI-powered database this week that can recommend solutions for car safety, travel, and parking on city streets. Brett Wheatley, Ford's vice president of mobility, marketing, and growth, announced the Ford City Insights platform at company offices in Ann Arbor, Michigan, where Ford is testing out the database to help solve problems in urban driving. The platform utilizes AI and data from traffic cameras, parking garages, and other sources to determine things like where accidents are most likely to occur in a city. In Ann Arbor, the system was used to determine that there are enough parking spaces, but showed that drivers needed more help in finding the spaces. Ford plans to expand the platform for testing in six other cities: Austin, Indianapolis, Miami, Pittsburgh, Detroit, and Grand Rapids. - CAR AND DRIVER
The machine learning startup Streamlit has launched an open-source tool that allows ML engineers to create custom applications for interacting with data. In addition to the application development framework, Streamlit announced a $6 million seed round led by Gradient Ventures, with participation from Bloomberg Beta, Color Genomics co-founder Elad Gil, #Angels founder Jana Messerschmidt, Y Combinator partner Daniel Gross, Docker co-founder Solomon Hykes, and Insight Data Science CEO Jake Klamka. Streamlit co-founder Adrien Treuille, a machine learning engineer, said the tool was built to be flexible enough to serve multiple requirements. "While most companies are basically trying to systemize some part of the machine learning workflow, we’re giving engineers these sort of Lego blocks to build whatever they want,” Treuille noted. - TECHCRUNCH
Researchers have successfully shrunk Google's BERT from the original 340 million parameters to 100 million parameters, according to two new papers. Both studies used the compression technique known as knowledge distillation, in which the larger AI model is used to train the smaller one. In the first paper, Huawei researchers developed a TinyBERT model that's less than a seventh the size and nearly 10 times faster than its predecessor. The second paper describes how Google researchers created their own model that's smaller than the original by a factor of more than 60. According to MIT Technology Review, these smaller models will advance the use of AI in consumer devices like smartphones. - MIT TECH REVIEW
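Knowledge distillation, in a nutshell: the small "student" model is trained to match the large "teacher" model's softened output distribution, not just the hard labels. Here's a minimal, hypothetical sketch of the core loss (made-up logits for a three-class example, not code from either paper):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the teacher's and student's softened outputs."""
    p = softmax(teacher_logits, T)  # teacher's "soft labels"
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The student is rewarded for matching the teacher's full distribution,
# including how confident it is about the wrong answers.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.2, 0.4]
far_student = [0.5, 4.0, 1.0]

print(distillation_loss(teacher, close_student))  # small: distributions agree
print(distillation_loss(teacher, far_student))    # large: distributions diverge
```

In practice this loss is combined with the ordinary cross-entropy on ground-truth labels, but the soft-label term is what lets the student inherit the teacher's behavior.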
Amazon Web Services (AWS) is hosting a league of mini self-driving vehicles designed to teach people about reinforcement learning. Employees at companies such as Morningstar and Liberty Mutual Group are learning about advanced AI by programming and racing the mini vehicles, The Wall Street Journal reports. Specifically, the workers are building and training algorithms using Amazon SageMaker and deploying them to the cars that race around a track, either virtually online or in person at matches. AWS developed the DeepRacer League to teach software developers about machine learning in a more engaging manner, explains Mike Miller, AWS general manager of AI devices. Anyone with an AWS account can participate. - WSJ
When I was in college I read several books by Stephen Jay Gould. If you have never read his stuff, it's awesome and I highly recommend you check out any of his books. Gould was a scientist in the vein of Carl Sagan (although, in my opinion, better than Sagan at writing for the general educated population). It was Gould who first exposed me to the idea that changes in some performance metric could have more to do with changes in the underlying population than with anything else. (Gould wrote on themes that tied into evolution, so this makes sense.)
My favorite piece he wrote was about Joe DiMaggio's 56-game hitting streak. Gould wrote that, as Major League Baseball expanded to more teams, we would expect to find an increase in hitting, to some extent, including an occasional new longest hitting streak. Why? Because with the expansion teams, you now have pitchers playing in the major leagues who would previously have been in the minors. By definition they are a bit worse, which means your best hitters would take advantage of them. It was an eye-opening moment for me. It was the first time I understood that, if statistics show batters getting better over time, it may have nothing to do with what we are teaching batters. Gould's point was that DiMaggio's hitting streak stood out as a stark outlier even when you factored in team expansion, but my point today is that, when you lower the quality of pitching, you can expect the distribution of batting averages to shift so that some are higher than before.
This idea can be expanded on in other areas. When the population changes in ways that make something easier, behavior can change in unexpected ways. Many predicted that Uber would make traffic better because there would be fewer cars on the road, but research has shown it has mostly made traffic worse. The ease of calling an Uber means we call one in situations where we previously would have walked or taken public transportation.
Lower the quality of pitching = more high batting averages. Lower the barrier to getting a taxi = more rides and worse traffic.
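Gould's population effect is easy to see in a toy model. The sketch below (made-up numbers, not real MLB data) treats each pitcher as a "quality" penalty subtracted from a batter's hit probability. Diluting the pitching pool with weaker expansion arms shifts every batter's expected average upward, so more batters land above any fixed threshold:

```python
import random
from statistics import mean

random.seed(0)

def expected_average(skill, pitcher_pool):
    """A batter's expected average: skill minus pitcher quality, clamped to [0, 1]."""
    return mean(max(0.0, min(1.0, skill - q)) for q in pitcher_pool)

# Toy league: pitcher "quality" is how much a pitcher suppresses a batter's average.
veterans = [random.gauss(0.05, 0.01) for _ in range(100)]
# Expansion adds arms that would previously have stayed in the minors.
expansion = veterans + [random.gauss(0.00, 0.01) for _ in range(50)]

batters = [random.gauss(0.33, 0.02) for _ in range(300)]  # batter skills

before = [expected_average(b, veterans) for b in batters]
after = [expected_average(b, expansion) for b in batters]

print("batters at .320+ before expansion:", sum(a >= 0.32 for a in before))
print("batters at .320+ after expansion: ", sum(a >= 0.32 for a in after))
```

Nobody's batters got better between the two runs; only the pitching population changed. That's the whole point: the distribution moved, not the skill.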
So what happens when we lower the barrier to deploying something intelligent? What is the societal impact when intelligence becomes super cheap and easily replicated, instead of wrapped up in expensive humans who took decades to learn everything they needed to know? What counterintuitive thing will happen as a result of the secondary or tertiary effects of it?
As an example, some research shows that more automation in warehouses increases the overall number of humans working in the industry. Why? Because when you lower the human labor costs of a warehouse, you can put more warehouses in smaller towns that weren't economically feasible before. Having more automation will, initially, increase the demand for human skills like judgment, empathy, and just good old human-to-human interaction in some fields.
The important point here is that you can't think linearly about what will happen. It's not a 1:1 replacement of automation taking human jobs. It is complex, and will change work in many different ways.
Thanks for reading and enjoy the rest of your weekend.