Inside AI

Rob May's roundup of stories and commentary on Artificial Intelligence, Robotics, and Neurotechnology

Happy Wednesday, and welcome to the midweek edition of Inside AI. 

Thanks for reading and see you on Sunday.


--Big Idea--

The most interesting thing I read this week was a Businessweek article on how The World's Dominant Crypto Mining Company Wants To Own AI.  Bitmain has become a chip powerhouse via ASICs for mining.  As hardware starts to matter more and more for AI (see the OpenAI blog post in the industry links section), Bitmain finds itself very well positioned to compete.  If 40 percent of its revenue really comes from AI chips in a few years, the company will definitely be a player, and that could shake up the traditional chip oligopoly as the market for AI chips enters its growth phase.


89% of companies still use multiple video conferencing platforms.

How does your company stack up vs the industry? The State of Video Conferencing 2018 report showcases the different video conferencing options and their ideal setups, so you can assess whether your communication technology needs to change.

How does your company compare? Check out the report to find out.

--Must Read Links--

Do we have to choose between AI and privacy? Medium.

This is a good look at the effective and ineffective ways of protecting privacy when gathering data.

How The Enlightenment ends. The Atlantic.

The last technological advance to alter the course of human history was the invention of the printing press in the 15th century, which ultimately spawned The Age of Reason or The Enlightenment. The advancement of AI stands to effect change at the same level. Are we prepared?

Chatbots Are Saints.  Rough Type.

Nicholas Carr claims that natural language processing isn't about making computers smarter but instead about dumbing them down enough so that they can work with us.

--Industry Links--

Employers are monitoring computers, toilet breaks – even emotions. Is your boss watching you? The Guardian.

This Man Is the Godfather the AI Community Wants to Forget. Bloomberg.

Machine Learning Is Stuck on Asking Why.  Atlantic.

AI and Compute. OpenAI blog.

Why is the human brain so efficient? Nautilus.

Baidu's Top AI Exec Is Stepping Down.  TechCrunch.

Crossbar pushes resistive RAM into embedded AI. IEEE Spectrum. 

AI Is Harder Than You Think.  NY Times.

The 'father of A.I.' urges humans not to fear the technology. CNBC.

How Artificial Intelligence Is making chatbots better for businesses. Forbes.

The road to artificial intelligence is paved with calculus. William & Mary.

The world’s dominant crypto-mining company wants to own AI. Bloomberg.

Artificial intelligence will both disrupt and benefit the workplace, Stanford scholar says. Stanford News. 

--Research Links--

Evolutionary RL for Container Loading.  Link.

Generalized Structural Causal Models.  Link.


This week I had the pleasure of interviewing Ben Vigoda from Gamalon.  The company recently closed a Series A financing and has been one of the most forward thinking AI companies around.  Ben was the guy who introduced me to the concept of probabilistic programming when we met a couple of years ago, and Gamalon is one of the few startups taking such an approach. 

1.  Describe Gamalon.  What's the pitch?

We are creating a new AI-fueled super-power: imagine if one person could have a one-on-one personal conversation, like they might have texting with a friend, but with thousands or even millions of other people simultaneously. Our vision of the future is not about people chatting with bots.  It is about people communicating with other people, massively augmented and leveraged by AI.

In the enterprise, there is a burning need for this, because thousands or millions of customers communicate with companies digitally across a broad array of channels. They send in natural language and raw text of all kinds, including surveys, trouble tickets, receipts, forms, chats, call transcripts, speech-to-text from voice appliances, inquiries, and more.

Before, if companies wanted to process incoming natural language and raw text and get it into a structured database or spreadsheet, they needed to contract large teams to do slow, inaccurate, expensive data entry.  Then it took a long time to get back to customers as well.  It’s bureaucracy, plain and simple.

We transform months to milliseconds with much higher accuracy than manual processing, which is such a big change that it is qualitative not just quantitative. It means that a company can know instantly what every natural language input from a customer is saying, and flow that right into their systems in real-time, reply immediately, and better serve customers, even when customers start saying new, unanticipated things. As this becomes a hub for structuring and merging multiple raw text streams, bureaucracy becomes a fluent friendly conversation with customers. It’s a new super-power.

2.  What are the benefits of probabilistic programming over DNNs?

It’s not about one or the other, probabilistic programming versus deep neural networks, or whether one type of machine learning is better; it’s about designing the best AI. In my view, there are currently four foundational axioms emerging in AI, and all four are critical for future progress:

A. models are programs; a fixed arrangement of neurons and synapses is a special but limited case of a program

B. variables carry uncertainty (we like quantifying uncertainty as probability since it is normalized)

C. users interact directly with “hidden layers” in the model

D. we will be talking about this fourth axiom in the future

In 1969, the book “Perceptrons” decimated the field of 1-layer neural networks (called perceptrons) by proving that they cannot learn an XOR function.  It was not until 1986 that Rumelhart, Hinton and Williams showed experimentally that having multiple hidden layers in a neural network trained using back-propagation could overcome this kind of problem.  Today, there are lots of new examples of things that even very deep neural networks cannot learn.  Here’s one: DNNs cannot find clouds on an exoplanet 1,000 light years away without training data.  My friend at MIT did exactly that using a (Bayesian) model of the physics.  There are lots of structured signals hiding in noise that a neural network has no hope of finding.  Actually, most signals are like this.  Someone talking to you is like that.  To parse these signals you need to be able to specify models with more precision and complexity than static arrangements of neurons and synapses.

A very good way to think about machine learning is as a communications system.  A hidden system transmits (noisy) data to us, and we need to guess the state and configuration of the system that sent us the data.  At the relatively easy extreme are things like WiFi and cell phones, where the transmitter was designed by the IEEE and the urban wireless channel was exhaustively modeled by armies of PhDs armed with antenna trucks.  We have relatively little to do in order to infer the state and structure of the transmitter, but still there is a lot of machine learning technology there, and this is what wireless receivers spend their days computing.  The other extreme is to act as if we know absolutely nothing about the transmitter system and use a universal function approximator to infer its structure from supervised data.  That’s what DNNs do.  But golly, there’s a heck of a lot of room in between.
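The 1969 perceptron limitation and the 1986 multi-layer fix that Vigoda mentions are easy to see concretely. A minimal sketch in Python: a single linear threshold unit cannot compute XOR, but a fixed two-layer network with step activations can (the weights below are hand-picked for illustration, not learned):

```python
def step(x):
    """Heaviside step activation: 1 if x > 0, else 0."""
    return 1 if x > 0 else 0

def perceptron(x1, x2, w1, w2, b):
    """A single linear threshold unit (a 1-layer perceptron)."""
    return step(w1 * x1 + w2 * x2 + b)

def two_layer_xor(x1, x2):
    """Two hand-picked hidden units (OR and AND) feed one output unit:
    out = OR(x1, x2) AND NOT AND(x1, x2), which equals XOR(x1, x2)."""
    h_or = perceptron(x1, x2, 1.0, 1.0, -0.5)    # fires if x1 OR x2
    h_and = perceptron(x1, x2, 1.0, 1.0, -1.5)   # fires if x1 AND x2
    return step(h_or - h_and - 0.5)

# The two-layer network reproduces the XOR truth table exactly,
# which no choice of (w1, w2, b) in a single unit can do.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert two_layer_xor(x1, x2) == (x1 ^ x2)
```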
When models are programs, you can easily design a receiver to model anything that we do know about the transmitter, and even have the receiver model reconfigure itself on the fly to track changes in the configuration of the transmitter system. This is especially important when the transmitter is a person talking. People sure are interesting - our minds were not designed by the IEEE, and cannot be modeled as simple state machines. We need more complex models if we want machines to understand what people are thinking when they talk to machines. Using Turing complete programs as our modeling language gives us the most precise and powerful way to say what we mean in a model.  And since lots of people already know how to program, it also provides us with a great existing “install base” of engineers.
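As a toy illustration of axioms A and B in the transmitter/receiver framing above (not Gamalon's actual system), here is a generative model written as an ordinary program, with Bayesian inference over its hidden state done by exhaustive enumeration:

```python
def channel_likelihood(sent_bit, received_bits, flip_prob=0.1):
    """Model-as-program (axiom A): the probability that a noisy channel
    produced `received_bits` given the transmitter repeatedly sent
    `sent_bit`, flipping each bit independently with `flip_prob`."""
    prob = 1.0
    for r in received_bits:
        prob *= flip_prob if r != sent_bit else 1.0 - flip_prob
    return prob

def posterior(received_bits, prior_one=0.5, flip_prob=0.1):
    """Bayes' rule by enumeration over the hidden state (axiom B:
    the variable `sent_bit` carries normalized uncertainty)."""
    unnorm = {
        0: (1.0 - prior_one) * channel_likelihood(0, received_bits, flip_prob),
        1: prior_one * channel_likelihood(1, received_bits, flip_prob),
    }
    z = unnorm[0] + unnorm[1]
    return {bit: p / z for bit, p in unnorm.items()}

post = posterior([1, 1, 0, 1])  # three 1s and one 0 received
# The receiver is now quite confident the hidden transmitter sent a 1,
# but the residual uncertainty is explicit rather than thrown away.
assert post[1] > 0.95
```

Real probabilistic programming systems replace the brute-force enumeration here with general-purpose inference engines, but the shape of the idea, a program as the model plus a posterior over its hidden variables, is the same.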

There are a lot of benefits of variables carrying uncertainty. For starters, the system can ask clarifying questions when it doesn’t understand what someone is saying or what someone is teaching it.  It knows what it didn’t understand!  Gamalon has a demo showing this for ordering a pizza that we think is pretty neat-o.  The immediate impact is that a system can approach 100% accuracy by asking for clarification as it learns and predicts.  More generally, uncertainty is also the foundation for curiosity.  When a system reads something that it doesn’t understand, it needs to be able to pinpoint precisely what it does not understand in order to be able to autonomously pursue learning about it further.

When people arrive in a new job, a new conversation, etc. they get context specific information from the other people when they join. This helps them configure their model for that situation. What ideas am I going to need for this conversation? How would an AI do the same thing?  You could program a new model for every new situation, but that’s only really good for developers. Users want to configure a model for a given situation without needing to program. In regular software, UI/UX was invented to solve this problem.  User interfaces enable non-programmers to configure and interact with a programmed system. Gamalon has created the very first UI/UX for AI that lets users directly interact with “hidden layers”, the ideas inside an AI model. I think that ten years from now, we will look back at this first product from Gamalon, with its easy-to-use UI/UX, and it will have been the template for how a great number of subsequent AI-based software products work.  Stay tuned!

3.  Are there any customer use case examples of Gamalon you can share?

In the Global 2000, people in Digital Transformation, Product, Marketing and IT/Data Science roles are deluged by natural language and raw text from their customers.  It comes in as surveys, chat transcripts and trouble tickets, among others. One of our clients, a leading global automotive manufacturer, receives millions of customer messages per year. Manually processing just a hundred thousand of them (a small sample of the total) through an outsourcing firm took months and cost far too much money. They had people file each message into one of about a hundred high-level categories and, even then, the accuracy was too low at 65%. That accuracy rate is pretty typical, by the way. They were always stuck with the same high-level categories, as changing them was too costly in time and money, so they couldn’t learn anything new from the data. Even if there were new customer messages that didn’t fit into any of the categories, it didn’t matter - that message was going into one of the existing categories. Anything new was getting washed away.

Now that they’ve implemented Gamalon, months have become milliseconds and 65% accuracy has become over 95% accuracy, all for a fraction of the cost of doing it manually. Plus, they’ve gained the ability to drill into the data to explore and discover new trends. All this means they can react and respond faster to improve customer retention and, ultimately, revenue.

4.  AI is easy to market because it's a hot item.  Selling AI is a bit more challenging.  What have you learned about selling AI to enterprises?

We haven’t found it to be all that challenging for Gamalon selling in the enterprise, actually. I think that is because we did a ton of initial work at Gamalon, like any SaaS company must do in the early stages, to identify product-market fit - to make sure that we solve a specific burning need in the Global 2000.

The fact is, folks in most enterprises are just getting started with AI.  Only about 4% of enterprises have adopted AI so far. But they know it is important. I just think that enterprises don’t really want complex data science IT consulting projects. What they really want are SaaS solutions fueled by AI. They want to instantly see remarkable ROI with 1-click software that is easy to use but does something that was radically impossible before.

At Gamalon, our focus every day is on making our solution ever easier and faster to get to ROI, and I think this is starting to really pay off.

5.  What AI-related problem do you wish someone would solve that isn't something Gamalon is working on?

Video-conferencing and remote whiteboarding that just works. Uses AI to get all the participants quickly and seamlessly connected, stays connected, takes notes, captures hand-drawn figures and re-draws them beautifully, and sends everything to everyone in a nice summary afterwards.

A system that listens to your weekly sales war room, emails, and customer visits and auto-populates your CRM with all of the relevant information.

An IDE that pair codes with you.

There’s a million of them. At Gamalon we had 93 ideas when we started. We narrowed them down by interviewing customers to find their burning need, and a problem where we felt passionate about applying our technology.

6.  What needs to happen for probabilistic programming to go mainstream?

It’s not mainstream yet. But some of the best minds in machine learning in the world are working in this new direction of axioms A and B.  (I don’t know of others besides Gamalon working on C & D.) For me that is actually a more interesting moment than when it is ultimately mainstream.

The hotbeds in industry are currently Google DeepMind, Uber AI Labs, and, humbly, Gamalon.  Gamalon is the first to have produced a commercial product using this technology.  I don’t think anyone else will have done that for at least another couple of years.

Some of the early leaders in academia who are my heroes include Noah Goodman at Stanford, Josh Tenenbaum at MIT, David Blei and Andrew Gelman at Columbia, Stuart Russell at Berkeley, Zoubin Ghahramani at Uber/Cambridge University, Ryan Adams at Princeton, … I surely cannot list them all.  Oh, and by the way, Yann LeCun had a blog post in January where he said “Deep Learning est mort. Vive Differentiable Programming.”  So he is onto axiom A now, he just needs B, C, and D.

As I said, it’s not about probabilistic programming. It’s about these (at least) four new axioms for AI.  How long will it take for most systems to incorporate all of these axioms?  This could take a long time. These are complex concepts, so the speed of uptake from person to person is slow. AI is essentially the same kind of field as theoretical physics: there is a lot to learn and understand. I have been working very hard for my entire life to improve my understanding of machine learning, and there is always so much more to learn.

The expertise could therefore end up being very unevenly distributed.  Some teams may have walking and talking AI disrupting an industry while other teams are still developing preliminary expertise. I don’t mean to be bleak.  I hope that by articulating this possibility, we encourage more teams to open their minds to the amount there is to learn, and apply themselves to ramping their organizations in this area.  Teams that want to fundamentally innovate in machine learning will require expertise across signal processing, communications and information theory, probabilistic models, statistics, numerical methods, a wide range of solver methods, statistical physics, and convex optimization, as well as specific machine learning verticals such as speech, natural language, vision, and control theory, plus compilers, high performance computing and cloud architectures, and data pipelines. Deep learning is just one part of a larger mix.

7.  Elon Musk and Mark Zuckerberg have an ongoing debate about whether killer AI is going to take over the world or not.  Do you have an opinion on the Musk/Zuck debate?

Imagine if you could go into your teenager’s brain and delete all their ideas and thoughts about dating? Of course we would never do this to people! But that seems like the kind of capability we need for our AI-based products in order for them to be safe and controllable. Gamalon’s product is already like that: a business analyst can use our UI/UX to design and edit the ideas that our system learns. This took a ton of work to get the system to be like this, but we made the investment not only because it’s the right path forward for AI, but because it is what our enterprise customers want.

Because of what we have been able to do in our product, from my current vantage point, I believe that AI can be made to be pretty safe, but we have to work at it, just like we need to work hard on designs for cars, rocket ships, and social networks in order for them to be safe.

Elon Musk is an expert in electromechanical systems, and as far as I can tell he is working hard to make his cars and rockets safe. Mark Zuckerberg is an expert programmer and an expert in social networks, and I deeply hope that he is working hard to make his social network safe for democracy, although we would all like to see more evidence and positive progress - AI could help a lot by reading massive numbers of natural language posts to look for fake news, for example.

I welcome Elon’s contribution to funding OpenAI, I think that was a nice thing to do for science.  But does making something open necessarily make it safe? Are openly available nuclear weapons designs a good thing for nuclear safety?

In my opinion, the way to make AI safe is to make the models understandable and editable, so that we can audit and control what the system is learning.  We should also focus on the real issues, not the science fiction ones. Weaponized AI for cyber attacks is worth worrying about a lot more than the singularity.  Relatively rapid job displacement due to AI is worth attention.

We can choose to address these real issues.  At Gamalon we have focused our product direction on augmenting human capabilities, giving people new super powers, rather than displacing people in the workforce. For example, we think that the future of customer support is workers with high quality jobs, who have time to understand the product and the customer, providing a highly empathetic and on-brand experience. We can use AI to help them support a lot more people at once, so that they can also be highly cost competitive.

And there’s a whole lot more potential if we are creative. Take Gamalon’s product for example - not just enterprises, we are all deluged by tons of messages every day. Our time is our most valuable commodity, and every new message divides our attention span into smaller and smaller chunks, effectively making us dumber than we are. 

By condensing millions of messages down to their core ideas, removing duplicate ideas, and organizing the ideas into an “Idea Tree” where you can drill down on just the things that are important to you, Gamalon’s technology could help people communicate. Local governments might be able to coordinate better with their constituents around issues like opiate addiction or foster care. It might give people back some of their attention span to focus on important issues that are undermining our democracy - things like gerrymandering.  Or imagine AI powered grass roots organizing. If everyone had their own personal AI that could help you find and collate meaningful ideas no matter where or how they are posted, everyone wouldn’t have to go to one place like Facebook and rely on their centralized recommendation system. This could help us get back the web we wanted, level the playing field, and diminish the importance of big money campaign spending on ads.

Right now in AI I feel that we should be focusing our efforts on addressing the real issues, with real expertise, and finding creative ways to have a positive impact using this fast improving technology.

"Don't be evil" has long been the credo for the tech giant Google, but apparently some of its employees and researchers believe that the company's AI project has a dark side. Google's Project Maven is a customized AI surveillance engine that uses data captured by U.S. drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense.

The idea of AI being weaponized by the U.S. military has made a lot of people in the industry worried. About a dozen Google employees resigned in protest of the project, and more than 4,000 employees signed a letter condemning Google's involvement with the military and warning that it will damage public trust in the company. Additionally, 90 academics, researchers, and scholars signed a petition asking that all tech companies sign an international treaty to ban autonomous weapons systems.

According to the petition, just because the defense industry has historically driven advances in computing R&D doesn't mean that it has to be the future of technology. The petition makes a great point: Google and other companies, including Microsoft and Amazon, are violating the public trust by using personal data for military purposes. While those companies might benefit financially from having DoD contracts, they are doing so by exploiting the consumers that have put them in a position to develop this technology.

It will be interesting to see how Google and other tech companies respond to these actions. 

--Big Idea--

No surprise that the big idea this week is the Google Duplex announcement.  If you haven't listened to it yet, please do.  The examples are incredible.  But what is really interesting is the discussion threads around this.  I think Jeff Schneider's tweet summed it up well.  And Bloomberg has a summary of the concerns about the service.  This is part of why we pioneered Botchain actually, because AIs are going to need an identity in this coming world, and humans (bots too) will want to ask the entities they interact with if they are humans or bots.  This whole thing is a really impressive advance in AI from both a technology perspective, and a sociological perspective.  If you haven't read up on it, go check out the links above.  Google Duplex is a big idea.

Join the Cognitive Revolution

The Cognitive Revolution Symposium explores the future & ethics of brain-computer interfaces on Sat, May 19.

With the Geneva Center for Security Policy and ETH Zurich’s Health Ethics, join experts from BCI research, AI, neuroscience, international security, social sciences, human rights and design. 

Register your interest 

--Must Read Links--

The Trump Administration Plays Catchup On AI.  Wired.

Finally the U.S. government is paying attention.  I don't know who started it in the administration but I am glad they did.  I hope they take this seriously.

How Cambridge Analytica Turned Clicks Into Votes.  Guardian.

If you are curious about the tech behind this big story, this article covers some of it.

Carnegie Mellon Launches Undergraduate Degree In Artificial Intelligence.  CMU News.

Hopefully this starts to address the dramatic shortage of AI talent.

--Industry Links--

AI Redefines Performance Requirements At the Edge.  NextPlatform.

Where Bank of America Uses AI, and Where Its Worries Lie.  American Banker.

How Long Until A Robot Cries?  Nautilus.

8 Useful Advices For Aspiring Data Scientists.  KDNuggets.

Artificial Neural Networks Grow Brainlike Navigation Cells.  Quanta Magazine.

Hidden Audio Attacks In Alexa and Siri.  NYTimes.

How Artificial Intelligence Is Shaping Religion in the 21st Century.  CNBC.

The White House Says A New AI Task Force Will Protect Workers and Keep America First.  MIT Tech Review.

A Canadian Startup Applies Machine Learning To Corporate Bond Issuance.  Economist.

How AI Is Taking Over The Economy (video).  Bloomberg.

The AI Problem of Product-Marketing Fit.  (my post)  Medium.

The New Google News:  AI Meets Human Intelligence.  Google blog.

7 Things Lawyers Should Know About Artificial Intelligence.  AboveTheLaw.

Data Science For Startups.  Towards Data Science.

Google's New Conversational AI Could Eventually Undermine Our Sense of Identity.  Verdict.

Andy Rubin Says Everything Is About To Get Robot Legs.  CNET.

--Research Links--

Inference Attacks Against Collaborative Learning.  Link.

OK Google.  What Is Your Ontology?  Link.


AI products are often hard to sell because the customers don’t know what they really want.  Talk to any head of sales at an AI company and she will complain about the tire kickers who come in and “want to put AI in their business.”  But they don’t know where, or how.

If you run an AI company and face this problem with constant customer misperceptions, I have a book for you - The Challenger Sale.  The book discusses a study of the key salesperson profiles and finds that the most successful is that of the “challenger.”  The challenger understands the customer’s business very well, and is willing to push and challenge the customer’s perceptions. The challenger sale is about teaching the customer that sometimes they are wrong in what they want because they have misunderstandings.

Most complex sales today are focused on relationship selling.  You build a strong relationship with the customer, understand what they want, and try to nurture the relationship.  But in an AI world, misunderstandings and misperceptions abound.  If you challenge your customers and educate them on what is real and what isn’t, then you can earn their respect, their trust, and eventually the sale.

If you are in an AI company, check out the book, and start pushing back on so many of the false narratives that exist about artificial intelligence.  

Did Google's new AI assistant just beat the Turing test? The company showed off its Google Duplex technology at this week's I/O developers conference; Duplex scheduled a haircut appointment and made a reservation for dinner at a Chinese restaurant over the phone, and in neither case did the person on the other end of the line realize they were talking to an AI. 

The Duplex AI system performs a narrowly defined task, and grew out of earlier deep learning projects such as WaveNet. The company says its initial application will be for automated customer service centers. The demonstration was impressive, and it seems like the technology has come a long way from its chatbot predecessors.  Even though AI with narrow uses is proving to be safe and reliable, the more capable it becomes, the more the media and some critics seem to fear artificial general intelligence.

In the meantime we can enjoy how well the new technology works — including the new, realistic sounding John Legend voice available for Google Assistant.

-- Big Idea --

The most interesting thing I read this week was an EE Times article about how AI is reviving an interest in "in-memory" processors.  What I love about this is that hardware has been so conceptually stale for so long (at the applied level, not the academic level) and these new ways of doing things, as they spread, will unlock other opportunities no one has thought of yet.  The interplay between hardware and software is important and with the hardware landscape changing so rapidly because of AI, I think by 2020 we will see a ton of new software frameworks and ideas that we didn't previously consider. 

-- Must Read Links --

Trust Me, I'm A Bot.  Medium.

This piece by Beerud Sheth from Gupshup is the first piece of thought leadership from a Botchain ecosystem partner, and I love it because it highlights the really important issues Botchain can solve.  As we move towards a world where we turn more of our lives over to AI, these kinds of control systems will become increasingly important.

The Lapsing of Finland's UBI Trial.  Economist.

Universal Basic Income is a hot topic, in part due to AI and automation.  Finland's experiment has ended but the results haven't been released.  Yet, they decided not to extend it.  This is a good summary of where thoughts on UBI stand.

How Can We Be Sure AI Will Behave?  Perhaps By Watching It Argue With Itself.  MIT Tech Review.

Make an AI debate another AI in natural language with a human judge?  Sounds crazy, but interesting.  

Has Artificial Intelligence Become Alchemy?  ScienceMag.

A given AI model isn't a black box - the entire field is.  This article highlights the challenges of really understanding why AI practices are what they are.  It isn't nearly as well understood as most other technical disciplines, and the examples clearly highlight why this is a problem.

-- Industry Links --

Bookies Use AI To Keep Gamblers Hooked.  Guardian.

The Politics of Machine Learning Algorithms.  Project-Syndicate.

What It Will Take To Be a Designer In The Era of AI.  Medium.

How 5 Robots Replaced 7 Employees At a Swiss Bank.  Bloomberg.

AI Could Generate The Next Big Fashion Trends.  Smithsonian.

Intelligent AI:  The Robots Are Coming To Make Your Job Easier.  Evening Standard.

New AI Is Mostly Being Used To Solve Old Problems.  NextPlatform.

How Adobe Moves ML and AI Through Their Product Pipeline.  ZDNet.

Terrorists Are Going to Use Artificial Intelligence.  DefenseOne.

AI Researchers Are Boycotting a New Machine Intelligence Journal.  Motherboard.

AI Is Cracking Open The Vatican's Secret Archives.  Atlantic.

How AI Can and Can't Fix Facebook.  Wired.

Robot Chefs:  Hype or Industry Change?  WTOP.

Singapore Airport Will Use Facial Recognition To Find Late Passengers.  NY Times.

This Rehab Robot Will Challenge You To Tic-Tac-Toe.  IEEE Spectrum.

-- Research Links --

Learning Conceptual Space Representations of Interrelated Concepts.  Link.

AGI Safety Literature Review.  Link.
