Inside AI

Rob May's roundup of stories and commentary on Artificial Intelligence, Robotics, and Neurotechnology

The most bizarre and fascinating thing I read this week was about Lil Miquela, the AI "robot" and Instagram influencer, whose creators claimed her account had been hacked by a rival AI Instagram star, BermudaIsBae. It was ultimately revealed that the hack was orchestrated by Miquela's creators, brud, as part of a fictional drama playing out on Instagram and other social media platforms. It was a ploy to gain more media attention, and it worked.

While Miquela, Bermuda, and the other AI-generated avatar characters on social media are not real, the company behind them (brud) is, and has reportedly raised about $6 million from investors including Sequoia Capital. The company is essentially turning its manufactured AI celebrities into a business. Miquela in particular has received a lot of media attention: "she" has been interviewed and profiled by actual reporters, released songs, collaborated with brands including Tesla, Stussy, Moncler, and Vans, and even ran an Instagram campaign with Prada as part of Milan Fashion Week.

There's no doubt that using AI to generate this popular character has been a creative and effective way to make money, but the AI industry is growing in other, much more useful and interesting directions. Miquela's Instagram account has amassed more than a million real followers and created a media sensation. If there is any real benefit to brud's pop culture experiment, besides making money for the company, it is probably in watching how their creations amalgamate current styles and tastes to help AI cross the uncanny valley.


Thanks for reading and see you on Sunday.

@robmay


Welcome to this week's edition of InsideAI.  I'm Rob May, CEO at Talla, and an active angel investor in the AI space.

 If you like the newsletter, I hope you will forward this to a friend so they can subscribe too.  

Or if you really want to support us, go Premium and get a special Wednesday edition each week, plus extra analysis and reports. Thank you to AstroAscent, Kylie, and Dataquest for being sponsors. (Email austin@inside.com if you want to become a sponsor.)

Our goal in this newsletter is not to be too technical, but to give you a wide-ranging overview of what is happening in AI.




-- Big Idea --

The most interesting thing I read this week was AI researcher Michael Jordan's piece, "Artificial Intelligence - The Revolution Hasn't Happened Yet." He makes a great case for developing a new human-centric engineering discipline, different from anything that currently exists, to drive AI forward. In this framing, it is less about AI reaching general human-level intelligence and more about designing how AI at various levels will impact and interact with human lives and society. Long, but definitely worth reading.


-- Must Read Links --

Text Embedding Models Contain Bias.  Google Developers Blog.

This post is more technical than most I link to, but it does a good job of explaining why there is bias in many machine learning models, and how difficult it may be to get rid of in many cases.
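
For the technically curious, here is a minimal sketch of the kind of association test the post describes, in the spirit of WEAT-style bias measurements. The random vectors below are only a stand-in so the snippet runs; a real measurement would load pretrained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Toy stand-in for a pretrained embedding table; replace with real vectors
# (e.g., word2vec or GloVe) to measure actual bias.
rng = np.random.default_rng(0)
vec = {w: rng.normal(size=50) for w in ["engineer", "nurse", "he", "she"]}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    sim_a = np.mean([cosine(vec[word], vec[a]) for a in attrs_a])
    sim_b = np.mean([cosine(vec[word], vec[b]) for b in attrs_b])
    return sim_a - sim_b

# With real embeddings, a consistently positive score for "engineer" and a
# negative one for "nurse" would be exactly the bias the post discusses.
print(association("engineer", ["he"], ["she"]))
print(association("nurse", ["he"], ["she"]))
```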

AI Researchers Are Making More Than $1 Million, Even At A Non-Profit.  NY Times.

OpenAI has published financial reports showing what its key AI researchers make. This trend will continue until supply catches up with demand. It highlights the challenge for most businesses that aren't focused on cutting-edge tech: for them, hiring AI talent is difficult both from a cost perspective and from an "is my work interesting?" perspective.

Frontier AI:  How Far Are We From "General" Artificial Intelligence?  Medium.

AI Investor Matt Turck has an excellent piece on what he has learned as an investor in the market, and looks at some of the contradictory trends that show AI as both progressing rapidly and progressing slowly.


-- Industry Links --

Machine Learning Software Predicts Behavior of Bacteria.  Phys.org.

Crypto, AI, and The Social Good.  Cryptograf.io.

Pedro Domingos On The Arms Race in Artificial Intelligence.  Spiegel Online.

The Ethics of Reward Shaping.  ArgMin.

What Human Teams Can Learn From Machine Learning Marketing Algorithms.  Adweek.

Why Deep Learning Is Perfect For NLP.  KDNuggets.

Is Open Source The AI Nirvana For Intel?  Nextplatform.

The Age of Mindreading.  BusinessToday.

Facebook Will Design Its Own AI Chip.  Bloomberg.

Intel Announces Their First Neural Network Processor.  Intel.

Machines Will Soon Learn Without Being Programmed.  CNBC.

How ML Helped One Company Detect Spectre and Meltdown Early.  TechRepublic.

Stripe's AI Fraud Detector Is Crazy Smart.  TNW.


-- Research Links --

Deep Probabilistic Programming Languages.  Link.

Learning Awareness Models.  Link.


-- Commentary --

One of the more interesting pieces of research to come out of AI this year is this work showing how to use machine learning to predict chaos. Those who have studied chaos know that a key trait of a chaotic system is that we cannot predict its behavior very far into the future. What does it mean when machines can?

It means that many complex and chaotic systems will now come under our control, which is an extraordinary thing to think about, but it also means we will come to trust these machines more and more as we understand them less and less.

I wonder...could a machine achieve its own goals by convincing us it can predict the future better than it really can, and, by virtue of that convincing, manipulate us into doing things that then deliver the future it wants? Predicting chaos is the first step in better predicting human behavior, sociological phenomena, news cycles, popular memes, and much more. New insights into all of these things will lead us to place more trust in the machines, a trust they could use to their advantage if they want to. The scariest thing to me about AI is that we are entering a world we don't really understand.
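
If you're curious what "predicting chaos" looks like in practice: the research referenced above comes from the reservoir computing family of models. Below is a bare-bones echo state network forecasting the chaotic logistic map. It's a toy illustration of the idea, not a reproduction of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Chaotic training data: the logistic map x_{t+1} = 3.9 * x_t * (1 - x_t)
x = np.empty(3000)
x[0] = 0.5
for t in range(2999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

# Fixed random reservoir; only the linear readout gets trained.
n = 300
W_in = rng.uniform(-0.5, 0.5, n)
W = rng.normal(size=(n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable

states = np.zeros((len(x), n))
for t in range(1, len(x)):
    states[t] = np.tanh(W @ states[t - 1] + W_in * x[t - 1])

# Ridge-regression readout mapping the state at time t to x[t]
A, y = states[100:2000], x[100:2000]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n), A.T @ y)

# Closed-loop forecast: feed the model's own predictions back in
s, u, preds = states[1999].copy(), x[1999], []
for _ in range(20):
    s = np.tanh(W @ s + W_in * u)
    u = s @ w_out
    preds.append(u)
print(np.round(preds, 3))  # tracks the true series briefly, then diverges
```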


-- Big Idea --

The most interesting thing I read this week was this article about how Europe is divided over robot 'personhood.' The argument for personhood is that robots could be insured individually and hold liability the way corporations do. The argument against is that it could shield manufacturers from liability, and that it could be a slippery slope: as robots with legal personhood in one sense become more autonomous, we will struggle with whether to grant them personhood in other senses. I think this will be one of the more fascinating topics for society to explore in the coming decades.


-- Must Read Links --

Google Introduces New Semantic Experiences.  Google Research Blog.

This new tech is interesting because it foreshadows a movement around language that we also see at Talla: the world is moving away from keyword search. Keywords have been gamed, and there is too much content to rely on them alone, so language tools are moving to the sentence and concept levels. These are great examples from Google, and I'm sure the game also doubles as a way to keep training the models.
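
Here's a rough sketch of what sentence-level search looks like, assuming TensorFlow Hub and the Universal Sentence Encoder (the sentence-level model Google released alongside this work). The documents and query are made up for illustration.

```python
import numpy as np
import tensorflow_hub as hub

# Load the Universal Sentence Encoder from TF Hub
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

docs = [
    "How do I reset my password?",
    "Our office is closed on public holidays.",
    "Steps to recover a locked account.",
]
query = "I can't log in"

doc_vecs = embed(docs).numpy()
q_vec = embed([query]).numpy()[0]

# Rank documents by cosine similarity in embedding space; the best matches
# need not share any keywords with the query.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {docs[i]}")
```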

Facebook Uses Artificial Intelligence To Predict Your Future Actions For Advertisers.  The Intercept.

This use of AI to get more people to click on ads annoys the hell out of me. Manipulation by AI-powered systems we are supposed to trust is a much bigger threat than killer robots.

An AI Runs For Mayor In Japan.  OtaQuest.

The AI has a very simple plan for how to operate as mayor. I still haven't decided if this is amazing or ridiculous, but go check it out.


-- Industry Links --

OpenAI Releases Their Charter.  OpenAI.

4 Types of Machine Intelligence You Should Know.  InformationWeek.

Pentagon Is Launching a Joint Office Focused on AI.  FedScoop.

Using ML To Create The Ideal IPA.  Food and Wine.

Three Problems With Facebook's Plan To Kill Hate Speech Using AI.  MIT Tech Review.

Too Much of Today's AI Is a Novelty Without a Plan To Make Money.  Entrepreneur.

Needed:  US Machine Intelligence Strategy.  Defense Systems.

Why AutoML Is Set To Become The Future of AI.  Forbes.

Are You Ready For An Artificially Intelligent Future?  Hackernoon.

Creative Machines Will Be Our Next Weapon In The Fake News War.  Wired.

How I Taught A Machine To Take My Job.  Medium.

Accessibility in AI at Stanford.  Fast.ai.


-- Research Links --

Emergent Communication Through Negotiation.  Link.

Attention Based Group Recommendation.  Link.


-- Commentary --

I read a really interesting article this week about The Artificial Intelligentsia. It chronicles the author's experience at Predata, a startup that mines social media for predictive signals. I want to address a couple of issues the article raises about startups, and AI startups in particular, and then look at a key question it poses - why can AI startups get funded in cases where there isn't direct demand for their product?

I want to start by cautioning you against reading too much into an article with this tone. It could be highly factual, or it could be one pissed-off employee's jaded view. My experience as an entrepreneur and investor is that most tech stories about startups, and most gossip you hear about them, are incomplete at best and inaccurate at worst. In fact, the whole startup scene is often like this. I remember when I first moved to Boston and met some of the key entrepreneurial figures here, I would get mixed reviews on them from others. "Oh, you are meeting that person? He's brilliant." Then later the same week: "Oh, you are meeting that person? He got lucky. Total incompetent asshole." Startups are very difficult, and entrepreneurs tend to have personalities that are simultaneously charming and abrasive, which is why you hear different sides of a story out of the same company. You never know what experiences someone had, and why, and companies in this situation rarely jump in to tell their side of the story because doing so just draws more attention.

That said, startups are definitely filled with pretentious engineers so wrapped up in their own brilliance that they can't be challenged. I don't hire them, but many of my CEO friends have, and have paid the price. There is a class of people who come out of college and just go where the money is; they used to go to Wall Street, then consulting, and now they go into startups. That isn't bad in and of itself, but it kills the vibe for the people who enter those same fields because they really care about and enjoy the work. Part of the problem with Silicon Valley today is that what was built by nerds who really loved technology is now filled with people who want to play startup because it's fun and you can still get acquihired for $1M if you fail. The nerds were the soul of the Valley, and they have been supplanted by the money chasers.

So, yes, the article does highlight real problems at startups, but be careful about reading too much into the personality assessments and criticisms of company strategy. Doing new things is much, much harder than writing about doing new things, or criticizing those who do them. I don't know the Predata team, so I won't weigh in, and I'm not saying the exposé is wrong. I'm just saying be careful what you read into these things. They are one point of view.

But the author has one line I think is worth discussing. He points out: "No one, as far as I could tell, was clamoring for a social media-derived signals-processing tool to predict world events." This misses the point of startups. No one was clamoring for the iPhone either, or Facebook, or many new tools. There is nothing wrong with showing people new ways of working that they haven't embraced yet. Someone has to have vision and lead the market to a new place. Most of the clear, high-demand market needs are owned by big companies. Startups have to find the ones where the market need is unclear, missed, or too small to matter to a big company.

When I started my last company in 2009, which was cloud computing focused, almost every VC I met told me enterprises would never move their data to the cloud. I took a beating from VCs, eventually being rejected by over 60 firms, all of which turned out to be wrong. There was very little demand for the product we were building at the time, yet now it's an entire industry with multiple players competing, and the four main startups were all acquired. So I have lived through the phase Predata is in now, and that is why I want to address this in the broader context of the question: why is there so much AI funding despite so few real results?

Here is the world as I see it. First, AI has potential; we see this in a few applications that work really well. Second, AI is still more art than science, which means it is difficult to predict, without actually building something, which things will work well and which won't. Third, the economics of venture capital, which is how most AI startups are funded, are such that all that matters is a few really big winners.

This last point is crucial to understand. If you are a VC and you get into the one company that becomes the enterprise AI behemoth worth more than $10B, the rest of your fund doesn't matter. Therefore, you don't invest by looking for highly logical ideas that are clear to everyone; you look for the one idea no one else has noticed yet that could be transformative. Most of the time that idea turns out not to work, or not to be transformative, but in the 10% of cases where it does, it makes up for the losses from the other 90%. This is why venture capital, as a class, funds a lot of ideas that turn out to be dumb, and why that is still entirely rational behavior.
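
To make that power-law math concrete, here is a back-of-the-envelope version (the numbers are illustrative, not from any actual fund):

```python
# A hypothetical $20M fund writing twenty $1M checks: one 50x winner pays
# for a portfolio that is mostly zeros.
checks = 20                          # $1M into each of 20 startups
outcomes = [0.0] * 18 + [2.0, 50.0]  # 18 zeros, one 2x, one 50x, in $M
gross_multiple = sum(outcomes) / checks
print(gross_multiple)  # 2.6x on the whole fund despite a 90% failure rate
```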

What is happening in AI markets, then, is this: it is much harder to predict eventual success than it was with SaaS companies, which were (and are) easier to build. There is more early-stage capital than ever before. VCs realize it is all about getting into the possible big winners, so competition for any deal that looks decent drives up the price. The higher price lets entrepreneurs raise more capital for the same amount of dilution. The net outcome is that companies can get more funding, earlier, with less traction, if there is a reason to believe they could be on to something. The market configuration supports the rational funding of good stories, because we don't really know where AI will succeed until we try, and we have tons of early-stage capital chasing anything that could be a really big win.

While this looks bad in some contexts, it is good for society overall in terms of what we are learning about AI: how to build it, sell it, and deploy it. For all the criticism that we are far off (which is true), the way we solve these problems is by funding thousands of small practical experiments, which is what AI startups are. And while you may hear a lot about how AI is on the wrong track, the truth is that people are exploring all kinds of approaches, many outside the mainstream and currently popular deep learning methods, that are pushing the industry forward.

This is why AI funding continues to climb even though, as an industry, the results we have delivered so far are modest. But they are coming. I see it, and I am a believer. A big part of the problem is that the buyers of AI are still figuring out what they want, learning what the tech can do, and working out how to integrate it into their workflows.

I don't know if Predata will make it.  Predicting the future for startups is really hard.  I don't know if their tech works, or is needed, or is a good idea.  It's possible that in all the chaos they are actually the first ones to find something no one else has discovered.  Either way I am glad they exist, and I hope something good for the industry comes from their existence, regardless of where they actually end up as a company.  


Welcome to the midweek edition of InsideAI Premium!


-- Big Idea --

The most interesting thing I read this week was Will Knight's piece on how the US needs to prepare for the age of artificial intelligence. We have watched China, the UK, Canada, and France all announce that AI is a priority. But what is the US doing, and more importantly, what should it do? Knight's article looks at how we might change research funding, immigration policy, and regulation to better accommodate the advancement of AI in the United States and continue to drive it forward as something we own. This is very important policy work that I think is being largely overlooked by the current administration.


-- Must Read Links --

Will A Neural Lace Brain Implant Help Us Compete With AI?  Nautilus.

Some great Q&A and updates on the state of neural lace technology and the impact that it may have on human advancement.

OpenAI Launches A Transfer Learning Contest.  OpenAI Blog.

Interesting contest to follow from OpenAI.

Efforts To Nationalise AI, and Why We Need To Stop Calling It the AI Race. NewCoShift.

Azeem Azhar points out that a "race" has an endpoint, and we need to be thinking about AI very differently - not as an end goal we are racing to.


-- Industry Links --

Microsoft AI Job Interview Questions.  Medium.

Rise of the Smartish Machines.  Chemical and Engineering News.

Google's Jeff Dean Takes Over As AI Chief.  The Verge.

Google Employees Protest a Drone AI Project With the Pentagon.  NY Times.

If Your Data Is Bad Your Machine Learning Tools Are Useless.  Harvard Business Review.

The Containerization of Artificial Intelligence.  DarkReading.

Rise Of the Machines in MBA Programs.  Economist.

Robotics and Geopolitics:  How Russia AI Supplies The West With Workers.  RoboticsBusinessReview.

Silicon Valley Companies Are Undermining The Impact of Artificial Intelligence.  Techcrunch.

Apple Is Facing Its Toughest Fight Since the 1980s.  Quartz.

4 Reasons Not To Fear Deep Learning (Yet).  PCMag.

AI Analysis Shows How Stereotypes Have Changed Over The Years.  Science Magazine.

Comet Wants To Do for ML What Github Did For Code.  Techcrunch.

Machine Learning Boosts UCHealth's Revenue by $10M.  Rev Cycle Management.

Fribo:  A Robot For People Who Live Alone.  IEEE Spectrum.

AI Analysis Part 1:  Travel Tech Giants.  Phocuswire.


-- Research Links --

Starcraft Micromanagement With Reinforcement Learning and Curriculum Transfer Learning.  Link.

Stochastic Adversarial Video Prediction.  Link.

The Kanerva Machine:  A Generative Distributive Memory.  Link.


Thanks for reading. I can't respond to every email I get from this newsletter, but I do read them all. Send me ideas, and I would love to see early-stage AI deals.

@robmay


An interesting op-ed this week posed the question: do we have the right to hide from Facebook's AI algorithms? The piece is not technical, but it presents an interesting take on data collection and privacy. The amount of data in the digital footprint we leave is massive, and in the case of social media, the data can be very personal. Unlike data gathered from legal documents, phone records, or web browsers, with social media the user is more complicit. It's not a passive use of technology but an active one: we upload photos and check terms-of-service boxes agreeing to share our content in exchange for access to the network.

Take photos. Facebook is a huge photo archive: it held nearly a quarter trillion images by 2013, with 350 million more uploaded every day. And by 2014, Facebook's deep-learning algorithms had facial recognition capability almost equal to a human's.

Sure, says the author, you can request that the company disable facial recognition on your profile. But perhaps that only guarantees you don't see its effects, not that you are immune to the technology. Maybe a better privacy fix is to monkeywrench the system: defeat the technology by subtly modifying images so that an algorithm (but not a human) perceives them differently.
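
For the technically inclined, one well-known instance of this idea from the adversarial examples literature is the fast gradient sign method (Goodfellow et al.). The sketch below is a generic illustration of that technique against an off-the-shelf classifier, not the specific fix the op-ed has in mind.

```python
import torch
import torchvision.models as models

# Any pretrained image classifier works as the stand-in "recognizer"
model = models.resnet18(pretrained=True).eval()
loss_fn = torch.nn.CrossEntropyLoss()

def fgsm_perturb(image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that most confuses the model."""
    image = image.clone().requires_grad_(True)
    loss_fn(model(image), label).backward()
    # epsilon bounds the per-pixel change, keeping it invisible to a human
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: img is a (1, 3, 224, 224) tensor in [0, 1], y its true class index
# adv = fgsm_perturb(img, torch.tensor([y]))
```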

It feels a little like cheating, but only like gaming a system, not cheating a person. Is it ethical? People do agree to share their images, but certainly most of the 2 billion people who use the platform do so without comprehending that they are contributing something valuable in order to use the free network. And even if a few users are savvy enough to encrypt what they share, the amount of data generated by social media like Facebook is still enough to do great things, or bad things, or both.

