
[ Inside AI ]

Rob May's roundup of stories and commentary on Artificial Intelligence, Robotics, and Neurotechnology

Hello and welcome to the midweek edition of Inside AI. Thanks for subscribing. 


--Links--

A new report created by 20+ researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns about possible scenarios where AI could be used maliciously. 

Ben Lamm of the chatbot company Conversable has founded a new startup called Hypergiant that will work with large brands and companies to discover how AI can best be utilized for their business. One example: Flanagan, an AI-powered "bartender" for TGI Friday's restaurants.

A Boston-based company is crowdfunding its new Mercury smart thermal jacket that syncs with Alexa and uses machine learning to customize its temperature. 

Startup Vectra raised $36 million in a Series D funding round to support Cognito, its AI-based cybersecurity system.

One of the most difficult things for 3D animators to reproduce is animal fur because of the way the fibers interact with light. UC researchers trained a neural network to use the subsurface scattering principle and apply it to various animal models, making the process more efficient and the fur more realistic. 
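
To make the idea concrete, here is a purely illustrative sketch of the general technique: train a small neural network to approximate an expensive scattering computation offline, then query the cheap network at render time. The "simulator" below is a made-up stand-in formula, not the actual fur-scattering model from the paper.

```python
# Purely illustrative: approximate an expensive scattering computation
# with a small neural network. The "simulator" is a made-up stand-in,
# not the paper's actual fur-scattering model.
import numpy as np

rng = np.random.default_rng(1)

def slow_scattering_sim(x):
    """Stand-in for an expensive subsurface-scattering computation.
    Columns of x: incident angle, fiber radius, absorption (all in [0, 1])."""
    return np.sin(3 * x[:, 0]) * np.exp(-2 * x[:, 2]) / (1.0 + x[:, 1])

# Sample the slow simulator once, offline, to build a training set.
X = rng.uniform(0, 1, size=(2000, 3))
y = slow_scattering_sim(X)

# Tiny two-layer MLP trained with full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for step in range(3000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (h @ W2 + b2).ravel()        # network's cheap approximation
    grad = 2 * (pred - y) / len(y)      # d(MSE)/d(pred)
    dW2 = h.T @ grad[:, None]
    db2 = grad.sum(keepdims=True)
    dh = grad[:, None] @ W2.T
    dz = dh * (1 - h ** 2)              # tanh derivative
    W1 -= lr * (X.T @ dz); b1 -= lr * dz.sum(axis=0)
    W2 -= lr * dW2;        b2 -= lr * db2

# At render time, the trained network replaces the slow simulation.
test = rng.uniform(0, 1, size=(3, 3))
print("network:  ", (np.tanh(test @ W1 + b1) @ W2 + b2).ravel())
print("simulator:", slow_scattering_sim(test))
```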

NatWest banks are testing a very lifelike AI-powered bot called "Cora" to answer customers' verbal questions.

DeepMind scientist Thore Graepel, who helped create AlphaGo, is suggesting that a cooperative effort by AI agents would be more powerful than competition. 


There was a really interesting story in Wired this week about a computer that uses light instead of electricity to train machine learning software. Fathom Computing, co-founded in 2014 by brothers William and Michael Andregg, is taking a unique optical computing approach at a time when most tech giants are investing in powerful computer chips or creating their own specialized machine learning chips.

We already use light instead of electricity to move data, via fiber-optic telecommunications cables. Electric signals encounter resistance and produce waste heat, so optical circuits could be more efficient. Optical processors were used by the military in the 1960s, but they were replaced by electronic chips that grew exponentially more powerful. Some researchers now suggest that this rapid increase in computing power may be slowing down.

Fathom Computing's prototype computer is still in the nascent stage, but other startups are also starting to see the light, including LightOn, Lightmatter, and Lightelligence. This will be an interesting trend to follow in the industry.




--Big Idea--

The most interesting thing I read this week was this post on "Inverse Reinforcement Learning."  It's the process of trying to figure out an agent's rewards by observing the agent's behavior.  Why would we want to do this?  One reason is that if we could watch humans playing a game and use IRL to determine the motives of the human, that may help us better train RL systems without so much manual tuning of RL algorithms.  This post is more technical than what I would usually link to in this section but the concept is interesting enough, and new to me, that I thought many of you would enjoy it.
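
For readers who want a feel for the mechanics, here is a minimal, hand-rolled sketch of the idea on toy data: assume the reward is linear in state features, then nudge a weight vector until observed "expert" trajectories outscore random ones.  Real IRL methods (max-entropy IRL, for instance) are considerably more involved; everything below is synthetic and invented for illustration.

```python
# Toy inverse reinforcement learning sketch: recover the direction of a
# hidden linear reward function from trajectories that score well under it.
# All data here is synthetic; real IRL methods are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
true_w = np.array([1.0, -0.5, 0.0, 2.0])  # hidden reward weights

def trajectory_features(n_steps=10):
    """Summed feature vector phi(s) over a random trajectory's states."""
    return rng.normal(size=(n_steps, n_features)).sum(axis=0)

# "Expert" demonstrations: random trajectories filtered to score well
# under the true (hidden) reward. A real expert would act near-optimally.
pool = [trajectory_features() for _ in range(200)]
cutoff = np.median([f @ true_w for f in pool])
expert = [f for f in pool if f @ true_w > cutoff]
random_trajs = [trajectory_features() for _ in range(len(expert))]

# Perceptron-style updates: expert feature counts should outscore random ones.
w = np.zeros(n_features)
for _ in range(100):
    for fe, fr in zip(expert, random_trajs):
        if fe @ w <= fr @ w:
            w += 0.01 * (fe - fr)

print("recovered direction:", np.round(w / np.linalg.norm(w), 2))
print("true direction:     ", np.round(true_w / np.linalg.norm(true_w), 2))
```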


--Must Read Links--

ARM Announces Project Trillium.  AnandTech.

The ARM announcement that machine learning is coming to their chips is interesting on two fronts.  First, from an ecosystem perspective, more and more competition in A.I. hardware will change the dynamic of the A.I. arms race, which to date has played out mostly in software.  Second, the initial application, putting visual recognition of most common objects right in your pocket, is going to open up a whole host of new applications and innovation.

The Terrifying Future of Fake News.  Buzzfeed.

The best quote from this article is "What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?"  It's a scary future, and it probably means pictures and videos cease to matter as evidence of anything.

Addressing Tech's Ethical Dark Side.  NY Times.

This makes me happy.  Harvard, MIT, and Stanford are starting to offer ethics courses that address many of the issues tech has created.  Training the next generation to think about this is a great start.


--Industry Links--

Artificial Intelligence is now fighting fake porn. Wired.

Neural networks everywhere. MIT News.

Deep Reinforcement Learning Doesn't Work Yet.  Sorta Insightful.

Missing Data Hinder Replication of AI Studies.  Science Mag.

Google Trains AI To Write Wikipedia Articles.  The Register.

Building a Robotic Colleague With Personality.  MIT Sloan Review.

As China Marches Forward on A.I., the White House Is Silent. NY Times.

How To Spot a Machine Learning Opportunity.  Enterprisers Project.

'Humans Not Invited' is a CAPTCHA Test That Welcomes Bots, Filters Out Humans. Motherboard.

With Strategic Zaps To the Brain, Scientists Boost Memory.  Nautilus.

Miso Scores $10M To Bring Its Hamburger Flipping Robot to More Restaurants.  Techcrunch.

The Birth of AI and the First AI Hype Cycle. KDnuggets.

Amazon is becoming an AI chipmaker, speeding Alexa responses. The Information.

Google boots up Tensor processors on its cloud. The Next Platform.

Modeling uncertainty helps MIT's drone zip around obstacles. IEEE Spectrum.


That's all for this week.  See you Sunday.

@robmay


--Research Links--

Crowd Ideation of Supervised Learning Problems.  Link.

Reinforcement Learning From Imperfect Demonstrations.  Link.


--Commentary--

I had the pleasure of meeting Alison Darcy, the CEO and founder of Woebot, at an A.I. event a few weeks ago, and was really impressed with the Woebot story.  Alison was gracious enough to answer some questions for the newsletter this week, so I hope you enjoy her story and insights.

1.  Before we talk about chatbots, some of the press on Woebot mentions "other approaches."  Did you try A.I. approaches that weren't chat based?

We started out building psychologically-themed video games, though they were mostly prototypes with no AI. The problem we were trying to solve was not efficacy (because we already knew we could do that) but engagement. This is actually the biggest challenge in digital mental health.  We knew that video games would be super engaging, and we built dozens of game prototypes before we built Woebot. With hindsight, building games is so hard, and most of them were either interactive fiction or had a lot of dialogue, all of which I believe fed into Woebot's design as a character.

2.  The press about Woebot has focused on how useful it is.  Multiple authors seem to have enjoyed it.  Do you have any data you can share about success rates, and how you measure them?

Yes, we partnered with former colleagues at Stanford to run a study last year. We randomized 70 people either to talk to Woebot for 2 weeks or to an information control group (they got an ebook published by the National Institutes of Health). After 2 weeks we saw significant reductions in symptoms of depression among those who talked to Woebot compared to the control group, and both groups significantly reduced anxiety. We also saw high engagement - an average of 12 conversations in 14 days. But what was really incredible was the way that participants oriented toward Woebot, and this helped us realize the potential we have to help people.  Since we're a bunch of recovering academics, we still measure success in terms of best-practice outcome measures (can't help it), and that's what we optimize for.  One thing that David Lim, a brilliant Stanford fellow, studied over the summer was working alliance, which is a standard measure of the therapeutic relationship used in studies of traditional therapy.  We're just writing up the findings now, but he discovered that people do indeed form a bond with Woebot, and it is slightly stronger than the bond they have with their therapists, even though both yield similar levels of overall satisfaction. The study has some limitations that mean the results should be regarded with caution, but still, we think that's very interesting.

3.  What is the biggest technical problem Woebot faces that you wish someone would solve?

Mostly, what we're constantly dealing with is improving the bot's NLU (natural language understanding). I don't think there has ever been a better time to develop something like Woebot, but we're still at the very beginning in terms of this tech.

4.  At the A.I. dinner where you and I first met, there was a lot of discussion about ethical implications for bot interactions.  What are some of the ethical issues you faced building Woebot, and how did you handle them?

There are several ethical considerations in this space, and I believe we should all be talking about them.  Since we spent almost 2 decades in academic medicine as practitioner-scientists, this stuff seeps into everything we do.  As a company we want to set a higher standard in digital mental health care, so that's what we're aiming for all the time. The most pressing ethical issue we face is making sure people do not mistake Woebot for an entity that is capable of intervening.  We want to protect our users' anonymity, so we do not know who anyone is, and therefore can't intervene even if we wanted to, should someone divulge something that would be considered high risk. So we deal with that by being completely transparent.  The second ethical consideration is around data privacy. Again, we anchor on full transparency, so for example, our users on Facebook have to acknowledge that they understand that while they're anonymous to us, they are still subject to Facebook's data policy.  From our perspective, Woebot must establish trust with his users; after all, that's the foundation of any good relationship.  Transparency is crucial for that.

5.  What is the most surprising or interesting thing that happened as you built and launched Woebot?

I think it's just how personal this technology is and, paradoxically, its potential to make us more human.  When was the last time you shared something personal with someone without any thought whatsoever to how you would be perceived? This is virtually impossible when we're talking to another person.  It's a very human trait, showing how much we value connection, that we worry about how we're coming across and what the other person is thinking. With Woebot, this is completely removed, so it's a rare opportunity for people to explore their own minds with themselves and nobody else.

6.  How has the healthcare community received Woebot, and where do you think chatbots as healthcare tools will go over the next 5 years?

We've been encouraged by the positive reception we've received from the healthcare community.  For the most part, we've been well regarded by our colleagues, especially once people understand the role that Woebot is assuming. Woebot is no therapist, and he's not trying to be, but he can offer something at 3am when there are no other reasonable options.  At the end of the day, most of our colleagues know that they will never be able to see enough people and that so many people are suffering in silence.  On a more practical level, many therapists encourage their clients to use Woebot to augment the care they are already receiving. We almost always encourage our patients to practice new skills, like challenging negative thinking, in between sessions, as data shows this leads to better outcomes.  Finally, I have never met a therapist who feels good about having a long wait list. So all in all, yes, I think automated guides like Woebot have a huge role in democratizing health systems that have traditionally been difficult to access.  I think we can expect to see them pop up in health systems as a means of delivering better continuity of care.

7.  What's next?  Are you focused on going deeper on Woebot, or bringing more similar bots to market?

We are focused on helping Woebot become more sophisticated.  We see him developing in 3 key ways: first, he will be available on more platforms (Android app coming soon); second, his conversational ability will improve; and finally, he's going to include a much broader repertoire of skills that he can draw from and guide people through. That's the thing about chatbots - their memory is great. Woebot already does CBT pretty well, but soon he's going to learn new skills like DBT (Dialectical Behavioral Therapy) and also be helpful for a lot of the other day-to-day problems that our users are facing.

8.  Entrepreneurship is one of the most mentally and emotionally challenging jobs given the ups and downs of building a company, but you have a PhD in Psychology.  Are you weathering the startup storm better than the rest of us as a result?  

Oh how I wish that were true!  You know the way the Japanese have a word for the simultaneous experience of beauty and pain? That's how I feel about building a company: I have never been so stressed but at the same time so stimulated in all my life (that I can remember).  My trick is to keep reminding myself of something Woebot is oriented around too — it is precisely this struggle that leads to growth.


That's all for this week. Thanks again for reading. Please send me any articles you find that you think should be included in future newsletters. I can't respond to everything, but I do read it all and I appreciate the feedback.   

@robmay



The most interesting thing I read this week was about DeepMind creating cognitive tests for AI. DeepMind built PsychLab, a virtual 3D lab that administers tests to study the behavior of artificial agents in a controlled environment. PsychLab is open-source and has a series of tasks that include visual search, continuous recognition, arbitrary visuomotor mapping, change detection, visual acuity and contrast sensitivity, glass pattern detection, random dot motion discrimination, and multiple object tracking. The developers say that the API is easy to learn and will enable the creation of new tasks. 

Essentially, AIs are being given the same tests as humans. Researchers have already learned things about their agents by putting them through the PsychLab tests, and it will be fascinating to see what else can be learned about the way AIs "think" when researchers are able to draw on 150 years of human psychology research.
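
To make the pattern concrete, here is a tiny sketch of what a battery-style evaluation loop can look like. The task and agent interfaces below are hypothetical stand-ins invented for this newsletter, not the actual PsychLab API.

```python
# Hypothetical sketch of a psychology-style test battery for agents.
# The Task and agent interfaces are invented for illustration; they are
# NOT the real PsychLab API.
import random

class ChangeDetectionTask:
    """Toy trial: show a pattern twice, possibly altered the second time,
    and ask whether anything changed."""
    def trial(self):
        first = [random.randrange(4) for _ in range(6)]
        changed = random.random() < 0.5
        second = list(first)
        if changed:
            i = random.randrange(6)
            second[i] = (second[i] + 1) % 4   # guaranteed to differ
        return (first, second), changed

def random_agent(observation):
    # A real agent would inspect the stimuli; this one just guesses.
    return random.random() < 0.5

battery = {"change_detection": ChangeDetectionTask()}
for name, task in battery.items():
    trials = 1000
    correct = sum(random_agent(obs) == answer
                  for obs, answer in (task.trial() for _ in range(trials)))
    print(f"{name}: {correct / trials:.1%} correct (chance is 50%)")
```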


-- Big Idea --

The big idea this week doesn't come from anything I read, but rather something I heard at a dinner with several A.I. types.  I will call this idea "Vigoda's Law" since I credit Ben Vigoda with first bringing it to my attention.  Basically, the idea is that there is a Moore's-Law-like curve of some kind for the decreasing amount of training data needed to get a certain level of functionality out of a model when starting from scratch.  This has to do with improved algorithms, techniques, and sometimes better pre-trained lower model layers.  It's definitely a more complicated relationship (if it exists) than Moore's Law.  The best way to explain Vigoda's Law might be to say that "the importance of data relative to other A.I. elements in solving a specific problem is declining at an accelerating rate."  I have no idea if that is true, but I've been chewing on it since I heard it the other night.  It's a big idea if it is true.  If you have an opinion, I'd love to hear it.
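
As a back-of-the-envelope illustration only: if Vigoda's Law held with, say, a one-year halving period, the data-requirement curve would look like the sketch below.  The starting point and halving period are made-up numbers, not measurements.

```python
# Hypothetical "Vigoda's Law" curve: labeled examples needed to reach a
# fixed performance target, if that requirement halved every year.
# Both parameters are invented for illustration.
def examples_needed(year, n0=1_000_000, halving_years=1.0):
    return n0 * 0.5 ** (year / halving_years)

for year in range(6):
    print(f"year {year}: ~{examples_needed(year):,.0f} examples")
```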


-- Must Read Links --

Just How Shallow Is The Artificial Intelligence Talent Pool?  Bloomberg.

A look at one of the most critical topics in A.I. at the moment - that there aren't enough people to work on it.

Can Computers Learn Like Humans?  NPR.

A good general piece about the challenges of deep learning.

After Settling With Uber, Waymo Faces Bigger Challenges.  NY Times.

A look at Waymo post-settlement and what it takes to be successful going forward.


-- Industry Links --

Disney Is Populating Parks With Autonomous Robots.  Techcrunch.

Reddit Bans AI-Generated Porn.  The Verge.

Experts Say A.I. Isn't Fire or Wheel Yet.  Axios.

Why Ethical Robots Might Not Be Such a Good Idea.  IEEE Spectrum.

Can We Keep Our Biases From Creeping Into A.I.?  Harvard Business Review.

Why Deep Learning Needs Standards For Industrialization.  Intuition Machine.

How to Use Machine Learning To Predict The Quality of Wines.  FreeCodeCamp.

Google's Vision to Mainstream Machine Learning.  NextPlatform.

How Artificial Intelligence Is Unleashing A New Type of Cybercrime.  TechRepublic.

Building a Deep Neural Net in Google Sheets.  Medium.

Robot Soldiers Can't Replace Human Soldiers.  Washington Examiner.

New Quantum Linear System Algorithm Could Speed Up Machine Learning.  OpenGov.


-- Research Links --

Learning Role Based Graph Embeddings.  Link.

Coordinated Exploration in Concurrent Reinforcement Learning.  Link.

Recent Advancements in Neural Program Synthesis.  Link.


-- Commentary --

George Soros is a big proponent of a concept called “reflexivity.” It deals with circular relationships between cause and effect, and I think it is a very important concept in the development of new products in new markets. The early attempts at products and the various technologies that rise to the top in an early market influence how potential customers, entrepreneurs, and investors see that market going forward.

I see too many entrepreneurs look at problems linearly, as if there is a static issue that impacts the world, and they solve it, then build a company around it. It never works that way. Buyer behavior changes all the time, along with UI expectations, technology options, pricing expectations, and pretty much everything else. Rule #1 in business is that the world is dynamic.

A Reflexivity Example

I will give you a very personal example from my last startup. We sold to enterprise customers, and SOC2 was a relatively new standard. Our main competitor announced their SOC2 certification, and at our next executive management meeting we discussed it and said “this is dumb, no customer is asking for this, the few customers we spoke to said it wasn’t that important, our competitor wasted their time, we are not wasting money on a SOC2 audit.”

Then that competitor proceeded to kick our ass for 6 months until we got one. Why? Because on every sales call they now educated prospects on the value and importance of SOC2, and so customers suddenly began asking us for something they weren’t asking for before. In other words, our main competitor didn’t beat us by “listening to customers” better or “responding to customer requests” better or whatever other pithy aphorism you hear about how to build a company. They predicted the future demands of the customer better than we did. They added things customers weren’t asking for, but would in the future. They positioned themselves as leaders, and their actions influenced customer behavior and customer requests. This is a form of reflexivity.

How This Applies to A.I.

A.I. is a field where early products have sometimes been difficult to build, and everyone has been unsure of what the “killer apps” will be. As a result, we’ve seen lots of platforms, which entrepreneurs built in hopes other entrepreneurs could figure out the real use cases, and we’ve seen lots of marginal products (existing products adding machine learning to make them slightly better). And we’ve seen a few real use cases like self-driving cars and better predictive analytics. But it still feels like something is missing.

I think what is missing is clear market demand for “intelligence” built into everything we use. What I mean is, everyone can nod their heads and say they want smarter software and appliances and whatever, but when push comes to shove no one agrees on exactly what that should look like. In many markets you can determine customer needs by simply talking to customers, but as we build intelligence into things, it’s different.

To be successful in these markets, entrepreneurs need to embrace product reflexivity. They need to accept the idea that customer development in brand new markets is a circular, partially self-referential process. It starts with understanding some potential needs of some potential customers, then showing them ideas to solve those needs, but also suggesting other applications of the same technology set. Unfortunately it’s also a more ambiguous and uncertain process than more direct forms of market entry.

I was in graduate school during web bubble 1.0, so I didn’t work directly in the space, but it seems to me the web 1.0 space went through a similar process. What is possible? What is useful? What is actually likely? The difference this time around is that A.I. as an industry has a very different set of properties and structure. The A.I. industry is driven as much by new data sets as it is by new technologies. Plus there is a flywheel effect around data acquisition, learning, and algorithm performance, where they strengthen and reinforce each other in ways that build defensibility. Your success isn’t just a product of your approach to the problem you are solving; it’s also a product of the data you have access to and the new things it enables.

But all of this leads to a conclusion that is possibly counterintuitive for entrepreneurs and investors, which is that your reflexivity process should circle around the data sets you have more than anything else. People always ask “what problem are you solving?” And that’s important to answer eventually. But new problems are arising all the time. You have to reason from first principles and move where the market is going.

So if you are starting an A.I. company, you have to show customers vision just as much as you ask them about their problems. Customers don’t understand yet what these new technologies are capable of. And the process is reflexive because what you (and other early startups) do impacts how customers perceive the early market, and thus how they see the problems they have and the potential solutions A.I. can provide. In other words — it’s more complicated than before, but the payoffs could be bigger, so it is still worth pursuing.

The most common prospect we get at Talla says something like “my boss said we need to use more A.I., so I am trying to figure out where.” These prospects aren’t looking for our particular product. They have no particular problem to solve. They are looking to understand what A.I. can do and where it can have an impact. That is a very different market dynamic than what we saw in the SaaS cycle of startups.

But the good news is, if you are an early A.I. entrepreneur, you can use reflexivity to your advantage. You can educate customers in new ways that highlight problems they didn’t know they had, new problems they are about to have, and new things that A.I. enables that they haven’t thought of.

I have 40+ early stage A.I. investments. If you are an entrepreneur who isn’t solving an existing problem, but instead is looking at a future to enable something amazing that wasn’t possible before, I’d love to chat.



--Commentary--

I read an interesting story on Medium this week in which the author suggests that it is possible to learn deep learning in six months. He does say that there are some prerequisites: candidates need to be versed in math and have some programming skills (although he says it's possible to pick up Python and cloud tools along the way). And of course, they need to have computer and internet access and commit to spending 10-20 hours a week.

He goes on to recommend basic and then advanced online courses, but his main point is that people learn how to drive by driving, and that the same top-down approach is a good way to jump into the industry. I think this is a good attitude to have at a time when, by all accounts, there is high demand for AI programmers. Even as more universities start teaching data science, it is going to take some time before all of that talent reaches the labor pool and helps alleviate the demand.  It will be interesting to see if more companies start training candidates themselves at work.


-- Big Idea --

The most interesting thing I read this week was the Wired Guide to Artificial Intelligence.  The magazine has a strong focus on A.I. this month which is worth reading through.  I particularly enjoyed the article Greedy, Brittle, Opaque, and Shallow:  The Downsides of Deep Learning, which shows it may not be the A.I. panacea some think it is.


-- Must Read Links --

Oracle Places Huge Bets On A.I. and Machine Learning.  Forbes.

Good old Oracle is never the first one to the party, so by the time they arrive, you know the trend is powerful enough that you should pay attention.

Google's A.I. Push Comes With Plenty of People Problems.  NY Times.

This is a great reminder that advancing A.I. isn't just about technology.

Are Autonomous Cars Really Safer Than Human Drivers?  Scientific American.

An argument that we haven't really done a fair comparison between the two.  Most self-driving car stats come from good roads in good weather.  Will they hold up as autonomous cars move to do everything humans do?

MIT Aims For Moonshots With Intelligence Quest.  Techcrunch.

A new two-part initiative at MIT has received funding to reverse-engineer human intelligence.  It's a very big and exciting idea.


-- Industry Links --

More Efficient Machine Learning Could Upend the A.I. Paradigm.  MIT Tech Review.

In An Era of Fake News, Advancing Face Swap Apps Blur More Lines.  NPR.

Shake-up at Facebook Highlights Tensions In The Race for A.I.  Washington Post.

How To Make Life Easy For A Newly Hired Data Scientist.  KDNuggets.

Skills That Help Accounting Professionals Succeed Alongside A.I.  Journal of Accountancy.

Is There a Tradeoff Between Immediate and Longer Term A.I. Safety Efforts?  Future of Life.

Andrew Ng Launches a $175M A.I.-Focused Fund.  Business Insider.

Google's Vision For Mainstreaming Machine Learning.  NextPlatform.

GM Takes An Unexpected Lead In The Race For Autonomous Vehicles.  Economist.

How Deep Learning and Video Analytics Could Automate The Future of Player Scouting.  Sport Techie.

Google Flights Uses AI to Predict Delays.  ZDNet.

Ray:  A Distributed System for A.I.  Berkeley A.I. Blog.

