
Inside AI (Jul 30th, 2017)

Happy Sunday and welcome to the latest edition of Inside AI! For those of you who are new, I'm Rob May, CEO of Talla, an A.I. tool for internal service teams like I.T. and HR, and an active investor in the A.I. space (sample companies: HydraMythic and Openbionics). If you have an early stage A.I. startup looking for investment, I'd love to hear from you. Our goal here is not to be too technical, but to give you a wide-ranging overview of what is happening in A.I.

This week's issue is awesome.  If you agree, I hope you will forward this to a friend so they can subscribe too.  

Or if you really want to support us, go Premium and get a special Wednesday edition each week, plus extra analysis and reports. Thank you to Ascent, Kylie, and Pillar for being sponsors. (Email if you want to become a sponsor.)

-- Big Idea --

The big idea this week is actually lots of ideas in a single video. Jason interviewed me on This Week in Startups, and we talked about all kinds of stuff, but mostly A.I. You can see me demo Talla, talk about the importance of data annotation pipelines, and discuss fundraising for A.I. companies. It was a fun session, so if you have time, go check it out.

Many thanks to Inside AI's corporate supporters.  Please go check them out!


-- [Premium subscribers only] InsideAI Premium Research covering 8 companies in the Natural Language Processing space --

We are excited to launch our first report, covering 8 companies in the Natural Language Processing space. The report looks at what they consider their competitive advantages, capital raised, and future product directions. This report is only available to premium subscribers. Our goal is to do one of these reports every month, so go ahead and upgrade to premium if you want to receive them.

-- Commentary --

A few weeks ago, a technology glitch caused some tech company stock prices to go haywire and show crazy high or crazy low prices. I've been thinking about that because I had a discussion this week with a researcher who left NLP as a field because of what he called "cascading inaccuracies." Parse something 1% incorrectly at the top level, then pass that to the next system level that is maybe 3% inaccurate... these errors don't stay isolated. They compound, and the end result, after several layers of errors, may be, say, only 60% accurate.
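The compounding the researcher described is easy to sketch as back-of-the-envelope arithmetic. The stage count and per-stage accuracies below are hypothetical, and the multiplication assumes each stage's errors are independent (correlated errors can make things even worse):

```python
# Back-of-the-envelope: how per-stage accuracy compounds through a
# multi-stage NLP pipeline, assuming independent errors at each stage.

def pipeline_accuracy(stage_accuracies):
    """Multiply per-stage accuracies to estimate end-to-end accuracy."""
    result = 1.0
    for acc in stage_accuracies:
        result *= acc
    return result

# Hypothetical pipeline: ten stages, each individually 95% accurate.
stages = [0.95] * 10
print(f"End-to-end accuracy: {pipeline_accuracy(stages):.1%}")
# → End-to-end accuracy: 59.9%
```

Ten stages that each look respectable on their own land right around the 60% figure above, which is why error rates deep in a pipeline matter so much more than any single stage's benchmark number.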

I believe that every new complex technology is subject to the butterfly effect. Look at Amazon Web Services. They've had several major outages over the years because, when you are dealing with new technology and complex systems at a scale never attempted before, you can't fully predict what might happen or where your system may fail. While the outages get further and further apart, I think AWS is always at some risk of something unexpected happening.

We also saw this in blockchain technology with the theft of $55M from the DAO project.  It's hard to design distributed complex systems to be error proof because nobody can really wrap their head around all the complexity.

It's a pretty safe bet that we will see at least a couple of catastrophic A.I.-related technology failures in the next couple of years. Of course, it's impossible to predict what they will be or where they will come from, but such failures are part of early stage technologies that push the limits, and part of the process of gaining mainstream adoption. The question is, what will happen to the A.I. ecosystem once they come? Will A.I. safety engineering gain in popularity? Will legislation come to regulate much of it? Will consumers pause and seek more conventional products and solutions instead?

It partly depends on the type of failure and its impact. A self-driving car A.I. failure vs. a stock market trading A.I. flash crash vs. a robot going haywire and causing destruction will all draw different responses. In some ways, it makes the future of A.I. path dependent. The randomness of where the first major debacle occurs will affect how we move forward, and if you could replay history with different initial debacles, A.I. technology might take different paths and end up in different places as a result. Whatever happens, however bad it may be, I hope we are wise enough to realize it's part of the process, unfortunately, and can move forward past it. But I'm pretty sure such an event will happen.


That's all for this week.  Thanks again for reading.  Please send me any articles you find that you think should be included in future newsletters.  I can't respond to everything, but  I do read it all and I appreciate the feedback.   


-- ABOUT ME --
For new readers, I'm the co-founder and CEO of Talla. I'm also an active angel investor in A.I. I live in Boston but spend significant time in the Bay Area and New York.

Subscribe to Inside AI