-- Introduction --
Happy Sunday and welcome to the latest edition of Inside AI! For those of you who are new, I'm Rob May, CEO of Talla and an active investor in the A.I. space. If you have an early-stage A.I. startup looking for investment, send me a note. Our goal in this newsletter is not to be too technical, but to give you a very wide-ranging overview of what is happening in A.I.
If you like the newsletter, I hope you will forward this to a friend so they can subscribe too.
Or if you really want to support us, go Premium and get a special Wednesday edition each week, plus extra analysis and reports. Thank you to Ascent, Kylie, and Dataquest for being sponsors. (Email firstname.lastname@example.org if you want to become a sponsor.)
Our newest sponsor, Dataquest, teaches data science and data engineering interactively online. Learn all the skills you need to advance your career, and join thousands of students who have been hired at companies like SpaceX, Amazon, and Microsoft.
-- Big Idea --
The most interesting thing I read this week was this article in the Regulatory Review about experts weighing in on fairness in machine learning. The piece highlights something that many people forget: "fairness" can mean different things to different constituencies, and sometimes we can even come up with algorithms that we consider "fair" yet that give surprising results. One of the trickiest problems for A.I. to navigate, in my opinion, is going to be the idea of "fairness" and the general closed-mindedness of most humans when it comes to actually considering alternative views of what is fair. Lots of tricky stuff to navigate here.
-- Must Read Links --
The Last Invention of Man: How AI Might Take Over the World. Nautilus. (fiction)
I don't link to much fiction, because I don't read much fiction, but this story is particularly good and highlights one possible scenario of how a self-improving A.I. might play out. This story is an excerpt from the book "Life 3.0."
Will The Future of A.I. Learning Depend More On Nature or Nurture? IEEE Spectrum.
This is an interesting debate between A.I. powerhouses Yann LeCun and Gary Marcus on where the next advances in A.I. may come from, and how much we need to look to the human brain for breakthrough inspiration.
Your Data Is Being Manipulated. Datapoints.
A great piece by researcher Danah Boyd on the problems with misuses of data, and how it may get worse in an AI world. She suggests we spend more time on testing and building technical antibodies.
Is Artificial Intelligence Going Off The Rails? The Outline.
An interview with AI pioneer and former Stanford CS professor Terry Winograd, on the state of artificial intelligence.
-- Industry Links --
A History of Artificial Intelligence in 10 Landmarks. DigitalTrends.
Google and Uber's Best Practices for Deep Learning. Intuition Machine.
GANs Are Broken In More Than One Way. Inference.vc.
How Robots Are Changing The Way You See A Doctor. Time.
Artificial Intelligence is About The People, Not the Machines. Techcrunch.
Mattel Cancels Plans for a Kid Focused AI Device Because of Privacy Concerns. Washington Post.
Google Will Beat Apple At Its Own Game With Superior AI. Medium.
Will Machine Learning Save the Enterprise Server Business? NetworkWorld.
Nonlinear Computation in Linear Networks. OpenAI.
Accenture Augments Human Capital With A.I. Forbes.
Google Cloud's A.I. Push Could Involve Deep Mind. The Information.
The Future of Smart Cars Is Now. SandHill Road.
DeepMind Launches an Ethics and Society Group. Deepmind Blog.
Tim O'Reilly Says Algorithms Have Already Gone Rogue. Wired.
Motives Behind Acquiring an AI Startup. Medium.
This is How Much Google Is Spending On Cutting Edge AI Research. Quartz.
Omidyar, Hoffman Create a $27M A.I. Public Research Fund. Techcrunch.
How to Build a Self Conscious AI Machine. Wired.
Data Science Systems Engineering Approaches. KDNuggets.
-- Research Links --
Duality of Graphical Models and Tensor Networks. Link.
Deep Abstract Q-Networks. Link.
Learning Graphical Models From a Distributed Stream. Link.
-- Commentary --
I get a lot of questions about how I evaluate A.I. investments. Below is some commentary I wrote a year and a half ago that I'm republishing because it answers that question.
I frequently hear complaints from investors that every company now bills themselves as a machine learning company. So I get a lot of questions about how to evaluate A.I. companies. It's difficult for sure, but below are some key things to think about. Many of the ideas below come from discussions I've had with VCs, but I would love more feedback, and will use it to turn this into a blog post eventually.
Impact - I think A.I. companies can be divided into three buckets:
Marginal - These are companies doing "X, but with machine learning" - for example, Zendesk but with ML. Unless there is some other compelling reason to invest, I ignore them.
10x - These companies improve on something that is already being done, but by a huge, 10x margin. For example, making a robot or drone autonomous.
Brand New - These are things that couldn't be done at all until the latest round of A.I. advances. This could be an app that uses neural question-answering tech from NLP, or maybe inductive graph reasoning. There aren't many currently, but they are coming.
Algorithms - If a team tells you they have a proprietary algorithm, there is a 98% chance that it's useless. The field is advancing so fast, and most big companies like Google and Facebook, along with all of academia, are publishing key algorithmic advances. It's highly unlikely that a team could come up with an algorithm that others could not quickly replicate. That doesn't mean algorithms are worthless, just that they shouldn't be expected to be a long-term defensible competitive advantage.
The thing to make sure of is that teams claiming to be machine learning companies actually know something about algorithms. With everyone on the deep learning bandwagon, it's useful to ask why they chose their particular technical approach and why it is appropriate for their use case. There are many problems that deep learning can't solve, and many approaches to A.I. that don't require neural networks.
Data - The biggest advantage an A.I. company can have at the moment is a proprietary data set. Part of the reason I'm so down on most consumer A.I. is that Google, Facebook, and Amazon have access to so much consumer data that it is difficult to compete with them.
I am always looking for companies that have access to unique data sources, or creative ways of getting data. I actually think one of the best uses of chatbots is to gather these data sets, which I wrote about a while back. As the A.I. tech stack matures, there are also opportunities to collect, clean, parse, classify, label, and train on the data at various stages along that A.I. pipeline. I haven't seen many A.I. infrastructure plays yet, outside of basic "run your data against our models" startups, but I expect to see more.
I also think a lot about the properties of the underlying data sets a company is using. Is the data stationary or does the distribution change over time? Does it require a human in the loop to deal with it? Does it fit a bell curve or a power law or some other distribution? These all have different implications.
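To make those questions concrete, here is a minimal sketch of the kind of sanity checks I mean. The windows, thresholds, and function names are my own illustrative choices, not part of any real diligence process; a crude mean-shift test stands in for a proper stationarity check, and a percentile ratio stands in for real tail analysis.

```python
import random
import statistics

random.seed(42)

# Two hypothetical data windows: last month vs. this month.
old_window = [random.gauss(100, 15) for _ in range(5000)]
new_window = [random.gauss(110, 15) for _ in range(5000)]  # mean has drifted

def has_drifted(a, b, threshold=0.25):
    """Crude non-stationarity check: flag if the two windows' means differ
    by more than `threshold` pooled standard deviations."""
    pooled_sd = statistics.pstdev(a + b)
    return abs(statistics.mean(a) - statistics.mean(b)) > threshold * pooled_sd

def tail_heaviness(data):
    """Ratio of the 99th percentile to the median: stays small for a bell
    curve, blows up for power-law-shaped data."""
    s = sorted(data)
    return s[int(0.99 * len(s))] / s[len(s) // 2]

print(has_drifted(old_window, new_window))  # the shifted mean trips the check
print(has_drifted(old_window, old_window))  # a window never drifts from itself
```

If the drift check keeps firing, the implication is that models need retraining pipelines (and probably humans in the loop); if the tail ratio is large, averages and bell-curve assumptions will mislead you.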
Team - A.I. teams are difficult to evaluate because A.I. is still such an academic field. Many, probably most, A.I. companies have PhD founders at this point, but I think many academics struggle as entrepreneurs. Because A.I. is so academic, traditional business people, CEOs, and non-academic entrepreneurs face a big knowledge gap in understanding what is feasible. Many entrepreneurs read A.I. news about some cool academic paper and try to start a company around the idea, not realizing that it's five years away from being ready for prime time. I think the best teams are either people with PhD credentials who don't like academia (they tend to be more pragmatic and better at starting companies), or entrepreneurs who had deeply technical backgrounds before going to the business side and who can come close to understanding the real state of applied technology in A.I.
Humans vs. Automation - Humans are expensive to scale, but having a human in the loop lets you perform tasks not currently possible with full automation. It can become a scaling bottleneck, though, if you have to hire lots of A.I. trainers. When companies employ humans in the A.I. loop, I think a lot about early customer adoption. Will a competitor with a bit less functionality, but no humans in the loop, be able to gain market share faster and then, as technology improves, automate what their competitors are doing with humans? How valuable to the end user is the task the humans really perform?
I think the best uses of humans are cases where their results quickly automate more and more tasks for the machines. I wrote more about the concept of a human-vs.-automation flywheel a few months ago. My favorite models are the ones that use humans in the loop but, rather than using Mechanical Turk or hiring them directly, make the product's end users the humans labeling the data and doing the training.
That's all for this week. Thanks again for reading. Please send me any articles you find that you think should be included in future newsletters. I can't respond to everything, but I do read it all and I appreciate the feedback.
-- ABOUT ME --
For new readers, I'm the co-founder and CEO of Talla. I'm also an active angel investor in A.I. I live in Boston, but spend significant time in the Bay Area and New York.