Inside AI

Rob May's roundup of stories and commentary on Artificial Intelligence, Robotics, and Neurotechnology

--Big Idea--

The most interesting article I read this week was The (Holy) Ghost In The Machine:  Catholic Thinkers Take On The Ethics of AI.  For some people, building smart machines raises more questions about humans than about the machines themselves.  The Catholic Church is the only formal religious body I have seen referenced that is starting to think through these issues, but I suspect that, as AIs permeate more of our lives and more questions arise, other religions will take them up as well.


Learn how this growing startup upgraded their meetings

Pendo’s previous meeting room tech was super painful. It was confusing to use, took too much time to set up, and did a terrible job capturing the people in the room on video.

Pendo made the call to put Meeting Owls in every conference room after their CEO learned about it at a networking dinner. They chose it because it was simple to use, and the 360° view was a seriously upgraded experience for the folks outside the room.

Want to try it? Grab one on Amazon.


--Must Read Links--

StitchFix:  The Amazing Use Case of AI in Retail.  Forbes.

StitchFix was started in an era when most AI startups were building infrastructure and platforms.  But it has been one of the most successful applied use cases, and the company pioneered some of the coolest applied AI techniques I have seen.

Facebook Is Designing Its Own Chips To Filter Live Video.  Bloomberg.

It is interesting that AI is finding more use cases like this, specific enough yet large enough to warrant a custom chip design.  The intellectual spillover from these designs is going to be incredible in how it changes the way we think about hardware and software.

Brookings Survey Looks At What Worries The Public About AI.  Brookings Institution.

The American public is worried about AI-related job loss and about whether China will pass the US in AI innovation.  The number of "I'm not sure" responses when people are asked whether they feel positive or negative about AI is surprising.



Happy Wednesday, and welcome to the midweek edition of Inside AI. 


Thanks for reading and see you on Sunday.

@robmay


--Industry Links--

AI ‘part of the answer’ to fake news, Facebook scientist says.  The Star.

Some Say AI Needs To Learn Like a Child.  Science.

Navigating the risks of artificial intelligence and machine learning in low-income countries.  TechCrunch.

Jennifer Sun, MD: Why Ophthalmology Needs Machine Learning.  Healthcare Analytics News.

Why AI Will Bring an Explosion of New Jobs.  Hackernoon.

Tearing apart Google's TPU 3.0 AI coprocessor.  The Next Platform.

Gym Retro.  OpenAI Blog.

Intel AI Lab open-sources library for deep learning-driven NLP.  VentureBeat.

So, Umm, Google Duplex’s Chatter Is Not Quite Human.  Scientific American.

Growing up with AI: How can families play and learn with their new smart toys and companions?  Medium.

Eric Schmidt Thinks Elon Musk Is "Exactly Wrong" on Artificial Intelligence.  Neowin.

AI Is Developing Fast - Can the FDA Keep Up?  StatNews.


--Research Links--

Hyperbolic Attention Networks.  Link.

Neuromorphic Spiking Array Processors.  Link.


--Commentary--

The idea that "the rich get richer" is definitely going to hold in AI.  In fact, a new McKinsey report points out that companies that adopt AI earlier will be at an advantage.  This dynamic is often called the "Matthew Effect," after a verse in the book of Matthew, and it has been shown to affect things like reading in children.  It turns out children who read more earlier are better readers at any given age, which encourages them to keep reading, and thus they end up knowing more and being better readers than their peers.

The same thing is going to happen to companies.  It amazes me that some companies are slow to adopt AI because they aren't sure where to apply it or how well it will work.  I think they don't understand that, by the time those things get figured out, they will be way behind.  Every company of any meaningful size should have some sort of "training wheels" AI program, spending a small budget to experiment and learn.

The Matthew Effect in AI will mean that companies that adopt AI earlier not only better understand where and how to apply it, but also understand what data to collect and how to build on what they already have for better performance.  This is like building a corporate website in 1998.  You didn't do it because it was a major source of leads.  You did it because you wanted to understand the web: the tools, the techniques, the workflows, the talent, and everything else about it.  If you think AI will matter, don't wait until it is all figured out.  Now is the time to lay the groundwork for future applications and get your organization AI-ready.


Thanks for reading.  Please email me with anything you would like to see in the newsletter.

Cheers

@robmay


--Big Idea--

The most interesting thing I read this week was a Businessweek article on how The World's Dominant Crypto Mining Company Wants To Own AI.  Bitmain has become a chip powerhouse via ASICs for mining.  As hardware starts to matter more and more for AI (see the OpenAI blog post in the industry links section), Bitmain finds itself very well positioned to compete.  If 40 percent of its revenue really does come from AI chips in a few years, the company will definitely be a player, and that could shake up the traditional chip oligopoly as the market for AI chips enters its growth phase.


89% of companies still use multiple video conferencing platforms.

How does your company stack up vs the industry? The State of Video Conferencing 2018 report showcases the different video conferencing options and their ideal setups, so you can assess whether your communication technology needs to change.

How does your company compare? Check out the report to find out.


--Must Read Links--

Do we have to choose between AI and privacy? Medium.

This is a good look at the effective and ineffective ways of protecting privacy when gathering data.

How The Enlightenment ends. The Atlantic.

The last technological advance to alter the course of human history was the invention of the printing press in the 15th century, which ultimately spawned the Age of Reason, or the Enlightenment.  The advance of AI stands to effect change on the same scale.  Are we prepared?

Chatbots Are Saints.  Roughtype.

Nicholas Carr claims that natural language processing isn't about making computers smarter but instead about dumbing them down enough so that they can work with us.


--Industry Links--

Employers are monitoring computers, toilet breaks – even emotions. Is your boss watching you? The Guardian.

This man is the godfather the AI community wants to forget. Bloomberg.

Machine Learning Is Stuck on Asking Why.  Atlantic.

AI and Compute. OpenAI Blog.

Why is the human brain so efficient? Nautilus.

Baidu's Top AI Exec Is Stepping Down.  TechCrunch.

Crossbar pushes resistive RAM into embedded AI. IEEE Spectrum. 

AI Is Harder Than You Think.  NY Times.

The 'father of A.I.' urges humans not to fear the technology. CNBC.

How Artificial Intelligence Is making chatbots better for businesses. Forbes.

The road to artificial intelligence is paved with calculus. William & Mary.

The world’s dominant crypto-mining company wants to own AI. Bloomberg.

Artificial intelligence will both disrupt and benefit the workplace, Stanford scholar says. Stanford News. 


--Research Links--

Evolutionary RL for Container Loading.  Link.

Generalized Structural Causal Models.  Link.


--Commentary--

This week I had the pleasure of interviewing Ben Vigoda from Gamalon.  The company recently closed a Series A financing and has been one of the most forward-thinking AI companies around.  Ben was the guy who introduced me to the concept of probabilistic programming when we met a couple of years ago, and Gamalon is one of the few startups taking such an approach.

1.  Describe Gamalon.  What's the pitch?

We are creating a new AI-fueled super-power: imagine if one person could have a one-on-one personal conversation, like they might have texting with a friend, but with thousands or even millions of other people simultaneously. Our vision of the future is not about people chatting with bots.  It is about people communicating with other people, massively augmented and leveraged by AI.

In the enterprise, there is a burning need for this, because thousands or millions of customers communicate with companies digitally across a broad array of channels. They send in natural language and raw text of all kinds, including surveys, trouble tickets, receipts, forms, chats, call transcripts, speech-to-text from appliances, inquiries, and more.

Before, if companies wanted to process incoming natural language and raw text and get it into a structured database or spreadsheet, they needed to contract large teams to do slow, inaccurate, expensive data entry.  Then it took a long time to get back to customers as well.  It’s bureaucracy, plain and simple.

We transform months to milliseconds with much higher accuracy than manual processing, which is such a big change that it is qualitative, not just quantitative. It means that a company can know instantly what every natural language input from a customer is saying, flow that right into their systems in real time, reply immediately, and better serve customers, even when customers start saying new, unanticipated things. As this becomes a hub for structuring and merging multiple raw text streams, bureaucracy becomes a fluent, friendly conversation with customers. It’s a new super-power.

2.  What are the benefits of probabilistic programming over DNNs?

It’s not about one or the other, probabilistic programming versus deep neural networks, or about whether one type of machine learning is better; it’s about designing the best AI. In my view, there are currently four foundational axioms emerging in AI, and all four are critical for future progress:

A. Models are programs; a fixed arrangement of neurons and synapses is a special but limited case of a program.

B. Variables carry uncertainty (we like quantifying uncertainty as probability since it is normalized); see the sketch after this list.

C. Users interact directly with “hidden layers” in the model.

D. We will be talking about this fourth axiom in the future.
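
To make axioms A and B concrete, here is a tiny self-contained sketch of a model-as-program whose hidden variable carries uncertainty. It is my illustration, not Gamalon’s engine or API: the model is literally a Python function, the unknown coin bias is the uncertain variable, and rejection sampling stands in for real inference only because it fits in a few lines.

    # Toy model-as-program (illustration only, not Gamalon's system).
    # Axiom A: the model is a program.  Axiom B: its hidden variables
    # carry uncertainty, represented here by posterior samples.
    import random

    def model():
        """Generative program for a hidden 'transmitter': a coin of unknown bias."""
        bias = random.random()                          # uniform prior over the bias
        flips = [random.random() < bias for _ in range(5)]
        return bias, flips

    observed = [True, True, True, True, False]          # the noisy data we received

    # Rejection sampling: keep only the program runs that reproduce the data.
    samples = [bias for bias, flips in (model() for _ in range(200_000))
               if flips == observed]

    print(f"accepted {len(samples)} runs")
    print(f"P(bias > 0.5 | data) ~ {sum(b > 0.5 for b in samples) / len(samples):.2f}")

Anything expressible as a program can be a model this way, and every hidden variable comes back with a degree of belief attached; real systems just replace the brute-force sampler with much better inference.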

In 1969, the book “Perceptrons” decimated the field of one-layer neural networks (called perceptrons) by proving that they cannot learn an XOR function.  It was not until 1986 that Rumelhart, Hinton, and Williams showed experimentally that a neural network with multiple hidden layers, trained using back-propagation, could overcome this kind of problem.  Today, there are lots of new examples of things that even very deep neural networks cannot learn.  Here’s one: DNNs cannot find clouds on an exoplanet 1,000 light years away without training data.  My friend at MIT did exactly that using a (Bayesian) model of the physics.  There are lots of structured signals hiding in noise that a neural network has no hope of finding.  Actually, most signals are like this.  Someone talking to you is like that.  To parse these signals you need to be able to specify models with more precision and complexity than static arrangements of neurons and synapses.

A very good way to think about machine learning is as a communications system.  A hidden system transmits (noisy) data to us, and we need to guess the state and configuration of the system that sent us the data.  At the relatively easy extreme are things like WiFi and cell phones, where the transmitter was designed by the IEEE and the urban wireless channel was exhaustively modeled by armies of PhDs armed with antenna trucks.  We have relatively little to do in order to infer the state and structure of the transmitter, but still there is a lot of machine learning technology there, and this is what wireless receivers spend their days computing.  The other end of the extreme is to act as if we know absolutely nothing about the transmitter system and use a universal function approximator to infer the structure of the transmitter from supervised data.  That’s what DNNs do.  But golly, there’s a heck of a lot of room in between.

When models are programs, you can easily design a receiver to model anything that we do know about the transmitter, and the receiver model can even reconfigure itself on the fly to track changes in the configuration of the transmitter system.  This is especially important when the transmitter is a person talking.  People sure are interesting: our minds were not designed by the IEEE and cannot be modeled as simple state machines.  We need more complex models if we want machines to understand what people are thinking when they talk to machines.  Using Turing-complete programs as our modeling language gives us the most precise and powerful way to say what we mean in a model.  And since lots of people already know how to program, it also provides us with a great existing “install base” of engineers.
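
The Perceptrons/XOR episode is easy to reproduce.  Below is a minimal numpy sketch (my own, with illustrative hyperparameters): a single linear layer stalls at chance on XOR, while one hidden layer trained with back-propagation fits it.

    # Sketch of the classic "Perceptrons" limitation, in plain numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR truth table

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One-layer perceptron: a single linear decision boundary.
    w, b = rng.normal(size=(2, 1)), np.zeros(1)
    for _ in range(5000):
        p = sigmoid(X @ w + b)
        w -= X.T @ (p - y) / len(X)      # cross-entropy gradient step
        b -= np.mean(p - y)
    print("one layer:", sigmoid(X @ w + b).ravel().round(2))  # stuck near 0.5

    # One hidden layer, trained with back-propagation, bends the boundary.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    lr = 0.5
    for _ in range(20000):
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        d2 = (p - y) / len(X)            # output-layer error
        d1 = (d2 @ W2.T) * (1.0 - h**2)  # error back-propagated through tanh
        W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
    print("two layers:", p.ravel().round(2))           # close to [0, 1, 1, 0]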

There are a lot of benefits of variables carrying uncertainty. For starters, the system can ask clarifying questions when it doesn’t understand what someone is saying or what someone is teaching it.  It knows what it didn’t understand!  Gamalon has a demo showing this for ordering a pizza that we think is pretty neat-o: https://gamalon.vids.io/videos/4c9adeb61611e0c6c4/ben-vigoda-mp4.  The immediate impact is that a system can approach 100% accuracy by asking for clarification as it learns and predicts.  More generally, uncertainty is also the foundation for curiosity.  When a system reads something that it doesn’t understand, it needs to be able to pinpoint precisely what it does not understand in order to autonomously pursue learning about it further.
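
The pattern described here reduces to a simple routing rule: act when the model’s confidence clears a bar, ask a clarifying question when it does not.  A minimal sketch follows; the names and threshold are hypothetical, not Gamalon’s API, and it assumes the underlying model exposes calibrated probabilities.

    # Hypothetical sketch: route on confidence instead of always guessing.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        label: str
        confidence: float   # assumed to be a calibrated probability

    def handle(pred: Prediction, threshold: float = 0.9) -> str:
        if pred.confidence >= threshold:
            return f"filed under '{pred.label}'"
        # Below the bar, the system knows what it didn't understand,
        # so it asks instead of silently mis-filing the message.
        return f"clarify: did you mean '{pred.label}'? ({pred.confidence:.0%} sure)"

    print(handle(Prediction("order pizza", 0.97)))   # confident: act
    print(handle(Prediction("order pizza", 0.55)))   # uncertain: ask

Each answered clarification becomes a new labeled example, which is how accuracy can keep climbing toward 100% as the system learns.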

When people arrive in a new job, a new conversation, etc., they get context-specific information from the other people when they join. This helps them configure their model for that situation: what ideas am I going to need for this conversation? How would an AI do the same thing?  You could program a new model for every new situation, but that’s only really good for developers. Users want to configure a model for a given situation without needing to program. In regular software, UI/UX was invented to solve this problem.  User interfaces enable non-programmers to configure and interact with a programmed system. Gamalon has created the very first UI/UX for AI that lets users directly interact with “hidden layers,” the ideas inside an AI model. I think that ten years from now, we will look back at this first product from Gamalon, with its easy-to-use UI/UX, and it will have been the template for how a great number of subsequent AI-based software products work.  Stay tuned!

3.  Are there any customer use case examples of Gamalon you can share?

In the Global 2000, people in Digital Transformation, Product, Marketing, and IT/Data Science roles are deluged by natural language and raw text from their customers.  It comes in as surveys, chat transcripts, and trouble tickets, among other channels. One of our clients, a leading global automotive manufacturer, receives millions of customer messages per year. Manually processing just a hundred thousand of them through an outsourcing firm took months, covered only a small sample of the total surveys, and cost way too much money. They had people file each message into one of about a hundred high-level categories and, even then, the accuracy was too low at 65%. That accuracy rate is pretty typical, by the way. They were always stuck with the same high-level categories, as changing them was too costly in time and money, so they couldn’t learn anything new from the data. Even if there were new customer messages that didn’t fit into any of the categories, it didn’t matter: that message was going into one of the existing categories. Anything new was getting washed away.

Now that they’ve implemented Gamalon, months have gone to milliseconds and 65% accuracy has become over 95% accuracy, all for a fraction of the cost of doing it manually. Plus, they’ve gained the ability to drill into the data to explore and discover new trends. All this means they can react and respond faster to improve customer retention and, ultimately, revenue.
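
The underlying task here is multi-class text categorization.  For a sense of the problem’s shape, here is the simplest possible baseline in scikit-learn, with invented toy data; it is emphatically not Gamalon’s probabilistic-program approach, and it also shows where the “stuck categories” failure mode comes from, since the label set is frozen at training time.

    # Baseline sketch (not Gamalon's method): file customer messages into
    # fixed categories using TF-IDF features and logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy data; a real deployment would have ~100 categories
    # and millions of messages.
    messages = [
        "my invoice is wrong", "I was charged twice this month",
        "the app crashes on startup", "the login button does nothing",
    ]
    labels = ["billing", "billing", "bug", "bug"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(messages, labels)

    new = ["why was I billed twice?"]
    print(clf.predict(new)[0])            # -> billing
    print(clf.predict_proba(new).max())   # confidence in that filing
    # Note: anything outside {billing, bug} is still forced into one of
    # them; that is the "washed away" problem described above.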

4.  AI is easy to market because it's a hot item.  Selling AI is a bit more challenging.  What have you learned about selling AI to enterprises?

We haven’t found it to be all that challenging for Gamalon selling in the enterprise, actually. I think that is because we did a ton of initial work at Gamalon, like any SaaS company must do in the early stages, to identify product-market fit - to make sure that we solve a specific burning need in the Global 2000.

The fact is, folks in most enterprises are just getting started with AI.  Only about 4% of enterprises have adopted AI so far. But they know it is important. I just think that enterprises don’t really want complex data science IT consulting projects. What they really want are SaaS solutions fueled by AI. They want to instantly see remarkable ROI with 1-click software that is easy to use but does something that was radically impossible before.

At Gamalon, our focus every day is on making our solution ever easier and faster to get to ROI, and I think this is starting to really pay off.

5.  What AI-related problem do you wish someone would solve that isn't something Gamalon is working on?

Video-conferencing and remote whiteboarding that just works. It would use AI to get all the participants quickly and seamlessly connected, stay connected, take notes, capture hand-drawn figures and re-draw them beautifully, and send everything to everyone in a nice summary afterwards.

A system that listens to your weekly sales war room, emails, and customer visits and auto-populates your CRM with all of the relevant information.

An IDE that pair codes with you.

There’s a million of them. At Gamalon we had 93 ideas when we started. We narrowed them down by interviewing customers to find their burning need, and a problem where we felt passionate about applying our technology.

6.  What needs to happen for probabilistic programming to go mainstream?

It’s not mainstream. But some of the best minds in machine learning in the world are working in this new direction of axioms A and B.  (I don’t know of others besides Gamalon working on C & D.) For me that is actually a more interesting moment than when it is ultimately mainstream.

The hotbeds in industry are currently Google DeepMind, Uber AI Labs, and, humbly, Gamalon. Gamalon is the first to have produced a commercial product using this technology.  I don’t think anyone else will have done that for at least another couple of years.

Some of the early leaders in academia who are my heroes include Noah Goodman at Stanford, Josh Tenenbaum at MIT, David Blei and Andrew Gelman at Columbia, Stuart Russell at Berkeley, Zoubin Ghahramani at Uber/Cambridge University, Ryan Adams at Princeton… I surely cannot list them all.  Oh, and by the way, Yann LeCun had a blog post in January where he said “Deep Learning est mort. Vive Differentiable Programming.” (“Deep learning is dead. Long live differentiable programming.”)  So he is onto axiom A now; he just needs B, C, and D.

As I said, it’s not about probabilistic programming. It’s about these (at least) four new axioms for AI.  How long will it take for most systems to incorporate all of them?  It could take a long time. These are complex concepts, so the speed of uptake from person to person is slow. AI is essentially the same kind of field as theoretical physics; there is a lot to learn and understand. I have been working very hard for my entire life to improve my understanding of machine learning, and there is always so much more to learn.

The expertise could therefore end up being very unevenly distributed.  Some teams may have walking and talking AI disrupting an industry while other teams are still developing preliminary expertise. I don’t mean to be bleak.  I hope that by articulating this possibility, we encourage more teams to open their minds to the amount there is to learn, and to apply themselves to ramping up their organizations in this area.  Teams that want to fundamentally innovate in machine learning will require expertise across signal processing, communications and information theory, probabilistic models, statistics, numerical methods, a wide range of solver methods, statistical physics, and convex optimization, as well as specific machine learning verticals such as speech, natural language, vision, and control theory, plus compilers, high-performance computing and cloud architectures, and data pipelines. Deep learning is just one part of a larger mix.

7.  Elon Musk and Mark Zuckerberg have an ongoing debate about whether killer AI is going to take over the world or not.  Do you have an opinion on the Musk/Zuck debate?

Imagine if you could go into your teenager’s brain and delete all their ideas and thoughts about dating. Of course we would never do this to people! But that seems like the kind of capability we need from our AI-based products in order for them to be safe and controllable. Gamalon’s product is already like that: a business analyst can use our UI/UX to design and edit the ideas that our system learns. It took a ton of work to get the system to be like this, but we made the investment not only because it’s the right path forward for AI, but because it is what our enterprise customers want.

Because of what we have been able to do in our product, from my current vantage point, I believe that AI can be made to be pretty safe, but we have to work at it, just like we need to work hard on designs for cars, rocket ships, and social networks in order for them to be safe.

Elon Musk is an expert in electromechanical systems, and as far as I can tell he is working hard to make his cars and rockets safe. Mark Zuckerberg is an expert programmer and an expert in social networks, and I deeply hope that he is working hard to make his social network safe for democracy, although we would all like to see more evidence and positive progress. AI could help a lot here, for example by reading massive numbers of natural language posts to look for fake news.

I welcome Elon’s contribution to funding OpenAI; I think that was a nice thing to do for science.  But does making something open necessarily make it safe? Are openly available nuclear weapons designs a good thing for nuclear safety?

In my opinion, the way to make AI safe is to make the models understandable and editable, so that we can audit and control what the system is learning.  We should also focus on the real issues, not the science fiction ones. Weaponized AI for cyber attacks is worth worrying about a lot more than the singularity.  Relatively rapid job displacement due to AI is also worth attention.

We can choose to address these real issues.  At Gamalon we have focused our product direction on augmenting human capabilities, giving people new super powers, rather than displacing people in the workforce. For example, we think that the future of customer support is workers with high quality jobs, who have time to understand the product and the customer, providing a highly empathetic and on-brand experience. We can use AI to help them support a lot more people at once, so that they can also be highly cost competitive.

And there’s a whole lot more potential if we are creative. Take Gamalon’s product, for example: it is not just enterprises; we are all deluged by tons of messages every day. Our time is our most valuable commodity, and every new message divides our attention span into smaller and smaller chunks, effectively making us dumber than we are.

By condensing millions of messages down to their core ideas, removing duplicate ideas, and organizing the ideas into an “Idea Tree” where you can drill down on just the things that are important to you, Gamalon’s technology could help people communicate. Local governments might be able to coordinate better with their constituents around issues like opiate addiction or foster care. It might give people back some of their attention span to focus on important issues that are undermining our democracy, things like gerrymandering.  Or imagine AI-powered grassroots organizing. If everyone had a personal AI that could find and collate meaningful ideas no matter where or how they are posted, we wouldn’t all have to go to one place like Facebook and rely on its centralized recommendation system. This could help us get back the web we wanted, level the playing field, and diminish the importance of big-money campaign spending on ads.

Right now in AI I feel that we should be focusing our efforts on addressing the real issues, with real expertise, and finding creative ways to have a positive impact using this fast improving technology.


"Don't be evil" has long been the credo for the tech giant Google, but apparently some of its employees and researchers believe that the company's AI project has a dark side. Google's Project Maven is a customized AI surveillance engine that uses data captured by U.S. drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense.

The idea of AI being weaponized by the U.S. military has made a lot of people in the industry worried. About a dozen Google employees resigned in protest of the project, and more than 4,000 employees signed a letter condemning Google's involvement with the military and warning that it will damage public trust in the company. Additionally, 90 academics, researchers, and scholars signed a petition asking that all tech companies sign an international treaty to ban autonomous weapons systems.

According to the petition, just because the defense industry has historically driven advances in computing R&D doesn't mean that it has to be the future of technology. The petition makes a great point: Google and other companies, including Microsoft and Amazon, are violating the public trust by using personal data for military purposes. While those companies might benefit financially from having DoD contracts, they are doing so by exploiting the consumers who put them in the position to develop this technology.

It will be interesting to see how Google and other tech companies respond to these actions. 


--Big Idea--

No surprise that the big idea this week is the Google Duplex announcement.  If you haven't listened to it yet, please do.  The examples are incredible.  But what is really interesting is the discussion threads around this.  I think Jeff Schneider's tweet summed it up well.  And Bloomberg has a summary of the concerns about the service.  This is actually part of why we pioneered Botchain: AIs are going to need an identity in this coming world, and humans (and bots too) will want to ask the entities they interact with whether they are humans or bots.  This whole thing is a really impressive advance in AI from both a technology perspective and a sociological perspective.  If you haven't read up on it, go check out the links above.  Google Duplex is a big idea.


Join the Cognitive Revolution

The Cognitive Revolution Symposium explores the future & ethics of brain-computer interfaces on Sat, May 19.

With the Geneva Center for Security Policy and ETH Zurich’s Health Ethics, join experts from BCI research, AI, neuroscience, international security, social sciences, human rights and design. 

Register your interest 


--Must Read Links--

The Trump Administration Plays Catchup On AI.  Wired.

Finally, the U.S. government is paying attention.  I don't know who started it in the administration, but I am glad they did.  I hope they take this seriously.

How Cambridge Analytica Turned Clicks Into Votes.  Guardian.

If you are curious about the tech behind this big story, this article covers some of it.

Carnegie Mellon Launches Undergraduate Degree In Artificial Intelligence.  CMU News.

Hopefully this starts to address the dramatic shortage of AI talent.


--Industry Links--

AI Redefines Performance Requirements At the Edge.  NextPlatform.

Where Bank of America Uses AI, and Where Its Worries Lie.  American Banker.

How Long Until A Robot Cries?  Nautilus.

8 Useful Advices For Aspiring Data Scientists.  KDNuggets.

Artificial Neural Networks Grow Brainlike Navigation Cells.  Quanta Magazine.

Hidden Audio Attacks In Alexa and Siri.  NYTimes.

How Artificial Intelligence Is Shaping Religion in the 21st Century.  CNBC.

The White House Says A New AI Task Force Will Protect Workers and Keep America First.  MIT Tech Review.

A Canadian Startup Applies Machine Learning To Corporate Bond Issuance.  Economist.

How AI Is Taking Over The Economy (video).  Bloomberg.

The AI Problem of Product-Marketing Fit.  (my post)  Medium.

The New Google News:  AI Meets Human Intelligence.  Google blog.

7 Things Lawyers Should Know About Artificial Intelligence.  AboveTheLaw.

Data Science For Startups.  Towards Data Science.

Google's New Conversational AI Could Eventually Undermine Our Sense of Identity.  Verdict.

Andy Rubin Says Everything Is About To Get Robot Legs.  CNET.

