Artificial intelligence (AI) is rapidly improving, becoming an embedded feature of almost every type of software platform imaginable and serving as the foundation for countless digital assistants. It’s used in everything from data analytics and pattern recognition to automation and speech synthesis.
The potential of this technology has sparked imaginative minds for decades, inspiring science fiction authors, entrepreneurs, and everyone in between to speculate about what an AI-driven future could look like. But as we edge closer to a hypothetical technological singularity, there are some ethical concerns we need to keep in mind.
Unemployment and Job Availability
Up first is the problem of unemployment. AI certainly has the power to automate tasks that once could be completed only through manual human effort.
At one extreme, experts argue that this could one day be devastating for our economy and human wellbeing; AI could become so advanced and so prevalent that it replaces the majority of human jobs. This would lead to record unemployment, which could tank the economy, cause widespread hardship, and subsequently fuel other problems such as rising crime.
At the other extreme, experts argue that AI will mostly change jobs that already exist; rather than replacing jobs, AI would enhance them, giving people an opportunity to improve their skillsets and advance.
The ethical dilemma here largely rests with employers. If you could use AI to replace a human worker, increasing efficiency and reducing costs while possibly improving safety, would you do it? Doing so seems like the logical move, but at scale, thousands of businesses making the same decision could have dangerous consequences.
Technology Access and Wealth Inequality
We also need to think about the accessibility of AI technology and its potential effects on wealth inequality in the future. Currently, the entities with the most advanced AI tend to be big tech companies and wealthy individuals. Google, for example, applies AI both to its traditional business operations, including software development, and to experimental projects, such as DeepMind’s AlphaGo beating the world’s best Go players.
AI has the power to greatly improve productive capacity, innovation, and even creativity. Whoever has access to the most advanced AI will have an immense and ever-growing advantage over people with inferior access. Given that only the wealthiest people and most powerful companies will have access to the most powerful AI, this will almost certainly widen the wealth and power gaps that already exist.
But what’s the alternative? Should there be an authority to dole out access to AI? If so, who should make these decisions? The answer isn’t so simple.
What It Means to Be Human
Using AI to modify human intelligence or change how humans interact would also require us to consider what it means to be human. If a human being demonstrates an intellectual feat with the help of an implanted AI chip, can we still consider it a human feat? If we heavily rely on AI interactions rather than human interactions for our daily needs, what kind of effect would it have on our mood and wellbeing? Should we change our approach to AI to avoid this?
The Paperclip Maximizer and Other Problems of AI Being “Too Good”
One of the most familiar problems in AI is its potential to be “too good.” Essentially, this means the AI is incredibly powerful and designed to perform a specific task, but it pursues that task so single-mindedly that its performance has unforeseen consequences.
The thought experiment commonly cited to explore this idea is the “paperclip maximizer,” an AI designed to make paperclips as efficiently as possible. This machine’s only purpose is to make paperclips, so if left to its own devices, it may start making paperclips out of finite material resources, eventually exhausting the planet. And if you try to turn it off, it may stop you, since you’re getting in the way of its only function: making paperclips. The machine isn’t malevolent or even conscious, but it is still capable of highly destructive actions.
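To make the shape of this problem concrete, here’s a toy sketch in Python. This is not a real AI system, and every name and number in it is hypothetical; the point is simply that the objective encodes “more paperclips” and nothing else, so there is no term for conserving resources and no condition under which the loop stops early.

```python
# Toy illustration of a misspecified objective: the optimizer maximizes
# paperclips and nothing else. All names and quantities are hypothetical.

resources = 1_000_000  # finite raw material, in arbitrary units
paperclips = 0

def make_paperclip():
    """Convert one unit of raw material into one paperclip."""
    global resources, paperclips
    if resources > 0:
        resources -= 1
        paperclips += 1

# The objective is simply "more paperclips." There is no penalty for
# consuming resources, and no condition under which the loop yields.
while resources > 0:
    make_paperclip()

print(f"paperclips: {paperclips}, resources left: {resources}")
# paperclips: 1000000, resources left: 0
```

Any side constraint bolted on after the fact (“stop at N paperclips,” “don’t touch protected resources”) only covers the failure modes someone thought of in advance.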
This dilemma is made even more complicated by the fact that most programmers won’t know the holes in their own programming until it’s too late. Currently, no regulatory body can dictate how AI must be programmed to avoid such catastrophes, because these failure modes are, by definition, unforeseen. Should we continue pushing the limits of AI regardless? Or slow our momentum until we can better address this issue?
Bias and Uneven Benefits
As we use rudimentary forms of AI in our daily lives, we’re becoming increasingly aware of the biases lurking within their design. Conversational AI, facial recognition algorithms, and even search engines were largely built by teams from similar demographic backgrounds, and can therefore overlook the problems faced by other demographics. For example, facial recognition systems may be better at recognizing white faces than the faces of minority populations.
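One minimal way to surface this kind of bias is to measure a model’s accuracy separately for each demographic group rather than in aggregate. The Python sketch below assumes you already have predictions paired with ground-truth labels and a group label for each test example; the field and group names are hypothetical.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy per group from (y_true, y_pred, group) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y_true, y_pred, group in records:
        total[group] += 1
        correct[group] += y_true == y_pred  # True counts as 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical results: an aggregate accuracy of 75% can hide a model
# that performs far worse on one group than another.
records = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"), (0, 1, "group_b"),
]
print(accuracy_by_group(records))  # {'group_a': 1.0, 'group_b': 0.5}
```

An audit like this doesn’t fix the bias, but it makes uneven performance measurable, which is a precondition for holding anyone accountable for it.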
Again, who is going to be responsible for solving this problem? A more diverse workforce of programmers could potentially counteract these effects, but is this a guarantee? And even if it were, how would you enforce such a policy?
Privacy and Security
Consumers are also growing increasingly concerned about their privacy and security when it comes to AI, and for good reason. Today’s tech consumers are getting used to having devices and software constantly involved in their lives; their smartphones, smart speakers, and other devices are always listening and gathering data on them. Every action they take on the web, from checking a social media app to searching for a product, is logged.
On the surface, this may not seem like much of an issue. But if powerful AI falls into the wrong hands, all of that data could easily be exploited. A sufficiently motivated individual, company, or rogue hacker could leverage AI to learn about potential targets and attack them, or use their information for other nefarious purposes.
The Evil Genius Problem
Speaking of nefarious purposes, another ethical concern in the AI world is the “evil genius” problem. In other words, what controls can we put in place to prevent powerful AI from getting in the hands of an “evil genius,” and who should be responsible for those controls?
This problem is similar to the one posed by nuclear weapons. If even one “evil” person gets access to these technologies, they could do untold damage to the world. The best recommended solution for nuclear weapons has been disarmament: limiting the number of weapons available on all sides. But AI would be much more difficult to control, and limiting its progression would mean missing out on all of its potential benefits.
AI Rights
Science fiction authors like to imagine a world where AI is so complex that it’s practically indistinguishable from human intelligence. Experts debate whether this is possible, but let’s assume it is. Would it be in our best interests to treat this AI like a “true” form of intelligence? Would that mean it has the same rights as a human being?
This opens the door to a large subset of ethical considerations. For example, it calls back to our question on “what it means to be human,” and forces us to consider whether shutting down a machine could someday qualify as murder.
Of all the ethical considerations on this list, this is one of the most far-off. We’re nowhere near AI that could plausibly pass for human-level intelligence.
The Technological Singularity
There’s also the prospect of the technological singularity: the point at which AI becomes so powerful that it surpasses human intelligence in every conceivable way, doing far more than replacing functions that have traditionally been manual. When this happens, AI could conceivably improve itself and operate without human intervention.
What would this mean for the future? Could we ever be confident that this machine will operate with humanity’s best interests in mind? Would the best course of action be avoiding this level of advancement at all costs?
There isn’t a clear answer to any of these ethical dilemmas, which is why they remain so powerful and important to consider. If we’re going to continue advancing technologically while remaining a safe, ethical, and productive society, we need to take these concerns seriously as we continue making progress.