Inside AI

Inside AI (Jul 2nd, 2017)

Happy Sunday and welcome to the latest edition of Inside AI! For those of you who are new, I'm Rob May, CEO of Talla and an active investor in the A.I. space. If you have an early-stage A.I. startup looking for investment, I'd love to hear from you.

This week's issue is awesome. If you agree, I hope you will forward this to a friend so they can subscribe too.

Or if you really want to support us, go Premium and get a special Wednesday edition each week, plus extra analysis and reports. Also, welcome to Ascent, our newest corporate sponsor. You can thank them and Kylie for the fact that this newsletter will get better with some paid writers helping. (Email if you want to become a sponsor.)

Also, Inside AI reader Todd Hoff has written a fiction book about the first sentient A.I. Check it out if you like science fiction.

-- Big Idea --

The best thing I read this week was Louis Coppey's post on Winning Strategies For Applied A.I. Companies. The post does a deep dive into the four main categories of A.I. companies and looks at how categorization affects risks, rewards, and fundraising needs. It also has good insights on which companies have a chance to move from one category to another. Definitely a must-read if you are running, or investing in, A.I. startups.

Many thanks to Inside AI's corporate supporters (email to become a sponsor)


-- Commentary --

China has stepped up its use of machine-learning-based facial recognition technology and, according to the Wall Street Journal, plans to use it to score citizen behavior. By giving everyone a "social score" based on how closely they comply with the law, China could dole out rewards, punishments, and status, but doing so requires some pretty invasive personal surveillance.

We all know that labeled data sets are everything for A.I.  This China stuff has me wondering, "what labeled data sets will China have that others will not?"  And could those labeled data sets, gathered because they care less about individual freedom, give them a leg up in A.I.?

I spoke to an A.I. CTO friend this morning about it, and he commented that it's like if the Nazis had done inappropriate medical experiments on humans, but the breakthroughs somehow gave them an advantage in the war. In that situation, are you forced to either match their unethical acts or lose the war and become their slaves? I'm not implying war with China, as I don't think that would happen in any foreseeable future. But conceptually, if it becomes clear that violating some of our core principles gives us an edge as a country in the A.I. race, and we believe A.I. is a winner-take-all game... do we do it?

It's unclear if A.I. will be a winner-take-all race, given that there could be scenarios where the physical constraints on winning it (robotics, sensors, power, etc.) make the race more even and make multiple winners more likely. But if it is, could a country with different morals around individual freedom end up winning because of its political power and the data it gathers without consent, rather than its technical prowess?

I can see China on a path to build things with A.I. that we can't, because we are unwilling to collect the data.  And as a Libertarian personally, I feel like I'm willing to take that chance.  Better to err on the side of freedom.  But, if we lose the A.I. race, maybe we lose the freedom too.  This is just one of the many interesting challenges coming in the A.I. era.  Let's hope we navigate it well.


That's all for this week. Thanks again for reading. Please send me any articles you find that you think should be included in future newsletters. I can't respond to everything, but I do read it all and I appreciate the feedback.


-- ABOUT ME --
For new readers, I'm the co-founder and CEO of Talla. I'm also an active angel investor in A.I. I live in Boston, but spend significant time in the Bay Area and New York.

Subscribe to Inside AI