Happy Sunday and welcome to the latest edition of Inside AI! For those of you who are new, I'm Rob May, CEO of Talla and active investor in the A.I. space. If you have an early stage A.I. startup looking for investment, I'd love to hear from you.
This week's issue is awesome. If you agree, I hope you will forward this to a friend so they can subscribe too.
Or if you really want to support us, go Premium and get a special Wednesday edition each week, plus extra analysis and reports. Also, welcome to Ascent as our newest corporate sponsor. You can thank them and Kylie for the fact that this newsletter will get better, with some paid writers helping out. (Email firstname.lastname@example.org if you want to become a sponsor.)
Also, Inside AI reader Todd Hoff has written a fiction book about the first sentient A.I. Check it out if you like science fiction.
-- Big Idea --
The best thing I read this week was Louis Coppey's post on Winning Strategies For Applied A.I. Companies. It takes a deep dive into the four main categories of A.I. companies and looks at how categorization affects risks, rewards, and fundraising needs. It also has good insights on which companies have a chance to move from one category to another. Definitely a must-read if you are running, or investing in, A.I. startups.
-- Must Read Links --
A Reality Check For IBM's A.I. Ambitions. MIT Tech Review.
This is a must-read piece about the failures, and continued promise, of Watson. Some of the press has made IBM appear to be behind the main tech leaders, but keep in mind that Google, Amazon, Facebook, and others don't do the kinds of customer-facing projects IBM is doing with Watson. When you look at how the tech giants are positioned, I think IBM has been vastly underestimated, because it has the one thing few others do: large-scale enterprise A.I. projects. Whether it all works today doesn't matter. The experience and expertise IBM is building are a competitive advantage in a very young market, where no other companies are doing the types of projects that will soon be mainstream.
Why Humans Must Accept That Robots Make Better Decisions. Telegraph.
Looking at data that shows robots are better than humans at many tasks raises the question of when humans will actually accept that fact. As I've written before, I think human resistance will be the big delay for many A.I. systems, not technology.
Light Powered Computers Brighten A.I.'s Future. Scientific American.
Imagine what could happen if deep learning became faster and less power-hungry. That is what optical computing could do for the industry, which is why it is such an important trend to follow.
DiffBlue Raises $22M To Bring A.I. To Software Development. Techcrunch.
If this technology becomes real, it will have one of the most profound impacts of any type of A.I. Imagine making coders more productive, then imagine replacing many of them altogether. What would happen to Silicon Valley, and tech in general, if engineers no longer ruled the world but were as replaceable as workers on a factory line?
I Ask 100 Information Questions To 4 Digital Assistants. Vlad's Box.
Really interesting test of the top digital assistants. While completely unscientific, it's a first attempt at quantifying the differences between these types of A.I.s. Expect to see more of this kind of testing formalized in the future.
-- Industry Links --
The Dangers Of A.I. In Healthcare: Risk Homeostasis and Automation Bias. Towards Data Science.
Fake News: You Ain't Seen Nothing Yet. Economist.
Who Is Winning The A.I. Race? MIT Tech Review.
The Real Threat of Artificial Intelligence Is Economic Inequality. NY Times.
Parenting In The Age of A.I. Architecht.io
"How Many Jobs Will Be Killed by A.I." is the Wrong Question. LinkedIn.
Great Interview with Murray Shanahan, Research Scientist at DeepMind. YCombinator.
How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey.
Machine Creativity Beats Some Modern Art. MIT Tech Review.
SEC Integrates A.I. and M.L. For Risk Assessment. Waters Technology.
Next Generation A.I. Could Develop Its Own Human Intuition. Inverse.
Salesforce Opens Einstein A.I. To Third Party Developers. Techcrunch.
Driverless Tech Startups Are Driving Past a Trillion Dollar Opportunity. IEEE Spectrum.
How Haven Life Uses Machine Learning To Spin New Life Out of Long Tail Data. ZDNet.
The Coming Battle: A.I., Extremism, and The War of Ideas. Flux Magazine.
Over 150 Of the Best Machine Learning, NLP, And Python Tutorials. Unsupervised Methods.
When A.I. Becomes The New Face Of Your Brand. Harvard Business Review.
A Machine Learning Approach To Venture Capital. McKinsey.
Book Review: "The Mathematical Corporation: Where Machine Intelligence + Human Ingenuity Achieve the Impossible". InsideBigData.
Creating a Continuous M.L. IoT Learning Loop. Towards Data Science.
Many thanks to Inside AI's corporate supporters (email email@example.com to become a sponsor)
-- Commentary --
China has stepped up the use of facial recognition machine learning technology, and according to the Wall Street Journal, plans to use it to score citizen behavior. By giving everyone a "social score" based on how much they comply with the law, China could dole out rewards, punishment, and status, but doing so requires some pretty invasive personal surveillance.
We all know that labeled data sets are everything for A.I. This China stuff has me wondering, "what labeled data sets will China have that others will not?" And could those labeled data sets, gathered because they care less about individual freedom, give them a leg up in A.I.?
I spoke to an A.I. CTO friend this morning about it, and he compared it to a scenario where the Nazis' inappropriate medical experiments on humans somehow gave them an advantage in the war. In that situation, are you forced to either match their unethical acts or lose the war and become their slaves? I'm not implying war with China, which I don't think will happen in any foreseeable future. But conceptually, if it becomes clear that violating some of our core principles gives us an edge as a country in the A.I. race, and we believe A.I. is a winner-take-all game... do we do it?
It's unclear whether A.I. will be a winner-take-all race; the physical constraints of winning it (robotics, sensors, power, etc.) could level the field and make multiple winners more likely. But if it is, could a country with different morals around individual freedom win, not through technical prowess, but through its political power and the data it gathers without consent?
I can see China on a path to building things with A.I. that we can't, because we are unwilling to collect the data. Personally, as a Libertarian, I'm willing to take that chance; better to err on the side of freedom. But if we lose the A.I. race, maybe we lose the freedom too. This is just one of the many interesting challenges coming in the A.I. era. Let's hope we navigate it well.
That's all for this week. Thanks again for reading. Please send me any articles you find that you think should be included in future newsletters. I can't respond to everything, but I do read it all and I appreciate the feedback.
-- ABOUT ME --
For new readers, I'm the co-founder and CEO of Talla and an active angel investor in A.I. I live in Boston but spend significant time in the Bay Area and New York.