
Inside AI (Dec 1st, 2019)

Happy Sunday, and Happy Thanksgiving weekend to U.S. readers.  This week I'm skipping the usual summary of articles because it was a light week, and I want to focus on one key question: when we look back on AI in 50 years, will we be thankful for the technology?

AI technology provides a lot of promise, but like many technologies, comes with some dark sides.  For every good use case of AI, like decreasing credit card fraud, there is a bad use case like DeepFakes.  When we look back on AI in 50 years, will we judge it as more good, or more bad?

Technologists tend to lean positive on the ethics of technology, saying that the tech itself is neutral and ethics only comes into play in how it is used.  In the early days of tech I believe that was true, but as tech becomes more ingrained in our lives, it impacts us on ever deeper levels.  Part of the problem with evaluating whether a technology was "good" or "bad" in retrospect is that we each mean different things when we say "good."

Tech aids more and more of our decisions, which is complicated by the fact that our choices in general are increasingly architected for us by media, government, and corporations.  I wrote about this a few years ago from a political perspective in Why Libertarianism Is Based On a Lie, But I'm Still a Libertarian, but it applies to tech as well.  I'm not sure it matters which constituency controls AI; it could still turn out badly.

In fact, let's look at two major technology trends of the last 20 years and how they turned out in retrospect.  I was in college in the mid-'90s when the Internet was taking off, and everyone talked about how great it was going to be - the freedom and openness.  It was going to create more educated and enlightened citizens and improve democracy.  Yet here we are 20 years later with walled gardens and unhealthy monopolies, debating what to do about Facebook, Google, and Amazon.  Lots of people are now talking about how the World Wide Web didn't end up the way we thought.

The second technology is social media.  I was a very early blogger, and I remember that in 2003-2005 we believed the world would be so great once those stupid elitist gatekeepers stopped force-feeding us ideas and the democracy of ideas could flourish.  Anyone could blog, and it was going to be an explosion of the brilliance of the common man.  Except what we learned is that many of those common men are idiots who say stupid things and enable information cascades that undermine clear thought and rationality.  Lowering the barrier to creating media meant that yes, we found some brilliant diamonds in the rough, but at the expense of lots of stupid noise from everyone else.

Now with social media, blogging, and a world where anyone can publish, we are begging for the very gatekeepers and curators we hated 15 years ago to return and save us from the flood of online garbage.  We are 0-for-2 predicting that those technologies would be mostly positive.

It isn't a stretch to believe that AI will follow a similar path - some great things, but lots of garbage as an outcome.  AI is more dangerous because it is often an aid to decision making at a much deeper level, and so it could be choice architecture on steroids, whether it is controlled by government, corporations, or someone else.  I believe that when we look at AI in 50 years, the results will be very mixed, and could look very bad.

It's kind of like Yuval Harari's thesis in Sapiens: humans had it good, then started making tradeoffs that seemed smart at the time but were damaging in retrospect.  Farming?  Great idea - we can work less and stay in one spot.  Except that now we are subject to the feast-and-famine cycles of our crops.  That's just one example; the book has many more.

But let me give you a reason to be hopeful.  All of the bad behavior we are witnessing in tech is influencing a generation that will grow up and have to deal with these ethical issues in AI.  Previous generations thought of tech as value-neutral, because it largely was.  This generation understands how easy it is to misapply it and use it for harm.  They won't grow up with that neutral view of tech, which means that as they start to run the world and AI really hits its sweet spot, I hope they will be better prepared to manage it appropriately.  That is our best chance to end up in a good spot.

This Thanksgiving, I'm thankful that we have hope that our children may do a better job than we did on some of these decisions, because the decisions are going to get harder.  

Thanks for reading.

@robmay
