Happy Sunday and welcome to the latest edition of Inside AI! For those of you who are new, I'm Rob May, CEO of Talla and an active investor in the A.I. space. If you have an early-stage A.I. startup looking for investment, send me a note. Our goal in this newsletter is not to be too technical, but to give you a wide-ranging overview of what is happening in A.I.
If you like the newsletter, I hope you will forward this to a friend so they can subscribe too.
Or if you really want to support us, go Premium and get a special Wednesday edition each week, plus extra analysis and reports. Thank you to Ascent, Kylie and Pillar VC, for being sponsors. (email email@example.com if you want to become a sponsor.)
Also, at Talla this week we announced our Botchain initiative - a blockchain that stores digital certificates of every action taken by autonomous agents. You can read my launch post here.
-- Big Idea --
Matt Turck's excellent blog post on Why AI Companies Can't Be Lean Startups is the most interesting thing I read this week. The SaaS and consumer app waves of tech have so ingrained in us the idea that a startup is 4 people and $500K to get started that many A.I. companies I see are underfunded as a result. Many investors don't understand where A.I. is in its maturity or the challenges of implementing it. I see this across my 30+ A.I. angel investments. It's slower than SaaS was. But I believe the benefits will be bigger, so the slog is worth it. Turck's post is a must-read and makes some excellent points.
--Must Read Links--
‘I’m afraid you won’t be coming to our new headquarters,’ declares Alexa. The Onion
This story about Amazon's personal assistant Alexa taking over the building and doing away with the human executives is satire, but it plays on the fears people outside (and inside) the industry have about Artificial Intelligence turning against humans.
Artificial intelligence pioneer says we need to start over. Axios
Geoffrey Hinton, the professor emeritus who co-wrote the paper that shaped the way modern A.I. works, says we should get rid of the "back propagation" on which it is based and start from scratch.
Artificial Intelligence Just Made Guessing Your Password a Whole Lot Easier. ScienceMag.
An interesting look at using GANs to crack passwords, which highlights how cybersecurity is becoming an AI arms race.
Face Reading AI Will Be Able To Detect Your Politics. Guardian.
Seems like these types of stories come out every couple of weeks now. Here is the problem: I have no doubt that there are some things AI will eventually pick up on that are true, but that we humans will not want to believe. It has happened in every scientific revolution. But you have to be extremely skeptical about these things, and almost all of these sensationalist headlines, once you dig in, are not really what they seem. Most stories like this one misrepresent a study, so the safe default position is to assume they are wrong.
Inside Waymo's Secret World For Training Self-Driving Cars. Atlantic.
A great look inside Alphabet's Carcraft simulation software, which helps self-driving cars learn through virtual miles.
Your next new best friend might be a robot. Nautilus
What happens when the tools to train an Artificial Intelligence chatbot don’t exist? Medium
Apple Puts An AI Chip Into the iPhone X. CNBC.
When Robots Make Us Angry, Humans Pay the Price. Slate.
AI is scary for the right reasons. Hackernoon.
Asia's AI agenda (whitepaper). MIT Tech Review.
Learning to Model Other Minds. OpenAI.
How AI is affecting modern education — will your child’s future teacher be a robot? Medium
The Google Brain Team's approach to research. Google Brain Blog.
Artificial Confusion: the overblown hype around the AI threat. AboveTheLaw.
Everything you need to know about Apple’s AI chip. Quartz
AI Is learning how to develop video games. RollingStone.
Facebook opens a new AI lab in Montreal. Bloomberg.
Jobs of the future: AI Interaction Designer. X.ai
Brain-Machine Interface isn't sci-fi anymore. Wired
AI can tell Republicans from Democrats – but can you? Take our quiz. Guardian
Russia, AI, and the future of war. Axios
From infinity to 8: Translating AI into real numbers. O'Reilly.
-- Commentary --
This week I want to throw some cold water on all the A.I. hype and ask a question that isn't being asked: is it actually possible to be significantly smarter than a human? If intelligence is an emergent physical phenomenon, it is very realistic to believe that intelligence has some physical limit. There is a fastest speed you can go - the speed of light. (Ok yes, there is some question whether quantum entanglement violates this, but that hasn't been proven.) There is a finite lower limit on temperature - absolute zero. We humans make mistakes all the time by assuming that, because we are at a point on a curve that seems linear, the curve is linear forever.
Therefore, I believe it is entirely reasonable to imagine a world where intelligence has some inherent limit, due to the structure of how intelligence works. If we look at other examples of emergent behavior, there isn't much evidence that these systems can keep getting smarter. Adding more ants to an ant colony doesn't make the colony do new things, for example. In fact, it appears that many emergent behaviors may just be binary, appearing or disappearing around some kind of threshold. So perhaps intelligence, once it emerges, is binary within a small linear range.
I fully believe that we can build machines as smart as humans some day. There isn't any scientific reason to think we can't, whatever the arguments about whether that happens in 5 years or 500. But there really isn't any reason to believe that once we have machines as smart as humans, they will explode and go way, way past us. That thinking rests on the assumption that intelligence is just some combination of processing power and knowledge, and that because both can increase linearly, intelligence can too.
Another possible argument along these lines is that general intelligence vs. specific intelligence will require some tradeoffs, and that as such, perhaps we can build super-smart specific AIs but not a super-smart generalist AI. I don't think we understand enough about how intelligence really works to be able to answer that question.
I admit I don't fully buy my own argument here. I am arguing that there is some fundamental limit to "intelligence" because it is an emergent property of physical systems. But, given the wide variation among humans across just a few dozen IQ points, I tend to believe that if we could build something 40 IQ points beyond the smartest humans, we would see pretty amazing progress as a society from that alone. However, I've been in a skeptical mood recently given all the A.I. hype, so I just wanted to show that it is entirely reasonable to believe that intelligence has natural limits that might make a superintelligence impossible. It's important to understand the possible limits of where this might go, so you can place reasonable bets and drive things toward the right types of progress. A moonshot project to find a temperature below absolute zero is a waste of time. A moonshot for a super A.I. just might be as well.
-- Research Links --
Affective Neural Response Generation. Link.
Perspectives for evaluating conversational AI. Link.
AllenNLP (new open sourced project from the Allen Institute). Link.
That's all for this week. Thanks again for reading. Please send me any articles you find that you think should be included in future newsletters. I can't respond to everything, but I do read it all and I appreciate the feedback.
-- ABOUT ME --
For new readers, I'm the co-founder and CEO of Talla. I'm also an active angel investor in A.I. I live in Boston, but spend significant time in the Bay Area, and New York.