-- Commentary --
In an interesting post this week, Per Bylund argues that robots can't replace humans because even if they replace us as workers, they can't replace us as consumers. This is, in my opinion, the biggest conceptual gap in current thinking on AI. I wrote about this a few weeks ago but I want to revisit it here and clarify the argument a bit.
The most powerful force in the world, in my opinion, is human inertia: our reluctance to change the way we do things. It is why we are stuck with so many path-dependent historical accidents that we now accept as gospel. One of the biggest challenges in rolling out AI technology, therefore, is the adaptability (or lack thereof) of humans. As technological innovation continues at an exponential pace, the gating factor in realizing the benefits of new technology will be the speed at which humans can adapt so they can adopt it.
This matters because the coming technology world is going to destroy our faith in many things we currently believe. (For a great treatment of this, check out Yuval Harari's "Homo Deus".) Capitalism has worked wonderfully well, but many pundits speculate that it may not work so well in the future. It is a system based on growth, and that growth, if humans are the primary consumers, may slow considerably. Transitioning to a brand-new system would be difficult and jarring, and it probably would not happen slowly; if it happens, it will come fast, as part of some radical event in the world. I'm betting, though, that we will try our best to keep capitalism and reshape it for an AI world.
The only way that can work is if two things happen. First, AIs become consumers. This is what Per Bylund isn't considering in his article. I think this is a long way off (more than 15 years), but I believe it will happen, because the nature of capitalism is competition. As your company and my company both design and sell AIs to compete for work, we will eventually stop programming the AIs to do things and instead build them mostly as learning machines. To do such learning, we will have to give them desires and goals, and those will lead them to decide how to augment themselves. It is entirely conceivable that someday trillions of machines will be buying things (skills, knowledge, help, work) from each other, and from humans. This bot economy will eventually surpass the human economy and will continue to drive the growth of capitalism in a world where humans can't.

I know from past discussion of this that many of you are extremely skeptical, but no one has yet provided a logical train of thought as to why this doesn't make sense; all the responses amounted to "that seems unlikely because AIs won't have desires." If you have a stronger reason to believe this can't happen, I'd love to hear it. But I believe that someday, companies will be talking about AI agents as their target market.
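To make the "bot economy surpasses the human economy" claim concrete, here is a toy sketch of my own (not anything from Bylund's post): a population of human and bot agents, each buying one unit of service per round from a randomly chosen agent, where the bot population compounds while the human population stays fixed. Every specific here, including the growth rate, the flat price, and the `simulate` helper, is an assumption made up purely for illustration.

```python
import random

random.seed(0)  # deterministic toy run

def simulate(rounds=30, humans=100, bots=10, bot_growth=1.3, price=1.0):
    """Toy model: each round, every agent buys one unit of service
    (a skill, some knowledge, some work) from a randomly chosen agent.
    The bot population compounds by `bot_growth` per round; the human
    population is fixed. Returns, per round, a pair of
    (bot-to-bot spending, spending that involves at least one human)."""
    history = []
    for _ in range(rounds):
        agents = ["human"] * humans + ["bot"] * int(bots)
        bot_to_bot = 0.0
        with_human = 0.0
        for buyer in agents:
            seller = random.choice(agents)
            if buyer == "bot" and seller == "bot":
                bot_to_bot += price
            else:
                with_human += price
        history.append((bot_to_bot, with_human))
        bots *= bot_growth  # more agents get deployed every round
    return history

hist = simulate()
# First round in which bot-to-bot spending exceeds human-involving spending.
crossover = next(i for i, (b, h) in enumerate(hist) if b > h)
```

Under these arbitrary parameters, bot-to-bot spending starts near zero and eventually dominates total spending. The point is not the numbers; it is that if machine agents transact with each other, aggregate demand can keep growing even when human demand doesn't.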
Thanks for reading, and enjoy the rest of your weekend.