Historically, tech people have been neutral on the ethical value of the things created by the tech industry. Take something like end-to-end encryption. It has no inherent morality built into it; it can be used for good or bad purposes. This framing removes any ethical responsibility from the maker and places it on the user.
But as tech has advanced, we have moved from performing tasks to making decisions, and decisions often carry more inputs and more moral consequences. It starts with something as simple as the decision to show an ad to someone. You could look at this through a few lenses. One is to say the algorithm has all this data, knows what this person wants, and is simply fulfilling that need; it's making the person happier. Sometimes, though, our wants are bad for us, or at least bad for us at some level of scale. Consider showing a cookie ad to a person trying to diet, a Las Vegas travel ad to someone with a gambling problem who is trying to quit, or a get-rich-quick ad to someone who is trying to save money and whom the algorithms have identified as more on the gullible side of the spectrum. These decisions aren't as neutral.
With AI, this is getting ramped up to a new level of intensity. As algorithms start to delve into reasoning, logic, and higher-level decision making, this can become a real problem. The programmers who code the algorithm, and the users who feed it data through their behavior, can influence the moral components of its output. It isn't just the cases you hear about, like what a self-driving car should do when it must choose between hitting a pedestrian and crashing in a way that could kill the driver. That's actually a very rare use case.
I'm more concerned about AI algorithms making decisions about, say, who gets promoted at work. Or, what about an intelligent chatbot that is trying to sell you something? How far will it go to manipulate you? How much does the chatbot believe what you are buying is good for you?
Humans may need the ability to toggle their AI interactions to one of three settings: Better Human, Status Quo, or Indulgent. Think about an algorithm that helps you decide what to eat. Do you want it to recommend the Better Human meal of broccoli and salmon, the Status Quo option of whatever type of meal you normally eat, or the Indulgent option of your favorite past meal with heavy sauce?
What about an online algorithm that, in the Better Human setting, shows you more "boring" articles that do more to educate you but less to enrage you to vote for or against a political candidate? That setting may lead to fewer clicks and less time spent in online media, while rage can send you down a rathole that benefits the media companies and their performance metrics.
Maybe an AI model controls the toggling between these options so you get a balanced life: mostly becoming a better human, but occasionally indulging. Morality sliding scales, or personal morality models, that can be toggled as inputs to these decisions may be the best solution.
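To make the idea concrete, here is a minimal sketch of what such a toggle could look like in code. Everything here is hypothetical: the mode names come from the essay, but the meal options, the scoring dimensions, and the scores are invented for illustration, not drawn from any real recommendation system.

```python
from enum import Enum

class Mode(Enum):
    BETTER_HUMAN = "better_human"
    STATUS_QUO = "status_quo"
    INDULGENT = "indulgent"

# Hypothetical meal options, each scored on three made-up dimensions.
MEALS = {
    "broccoli and salmon":          {"health": 0.9, "familiar": 0.3, "pleasure": 0.4},
    "your usual weeknight dinner":  {"health": 0.5, "familiar": 0.9, "pleasure": 0.6},
    "favorite meal with heavy sauce": {"health": 0.2, "familiar": 0.6, "pleasure": 0.9},
}

def recommend(mode: Mode) -> str:
    """Return the meal that maximizes the dimension the chosen mode prioritizes."""
    priority = {
        Mode.BETTER_HUMAN: "health",
        Mode.STATUS_QUO: "familiar",
        Mode.INDULGENT: "pleasure",
    }[mode]
    return max(MEALS, key=lambda meal: MEALS[meal][priority])
```

The interesting design question is who sets the mode: the user directly, or a supervising model that balances the settings over time, as suggested above.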
The point of all this, though, is that tech can no longer hide behind the idea that the work of the tech industry is morally neutral. But it's also dangerous for tech to take a stand on certain moral topics, particularly those on which different countries and societies are divided. Can you imagine an online psychiatrist AI that is generally helpful to people and shows good results, but has to counsel a pregnant teenager on her decision of whether or not to have an abortion? Some group of people will be enraged no matter the outcome.
How tech thinks about the tech industry needs to change. And that change needs to be thoughtful and responsible, or else we will end up with solutions that are knee-jerk reactions to problems as they arise. Now is the time to think through how we handle this, before the problem becomes obvious to everyone.
Thanks for reading.