
Forget Elon Musk’s ban — let’s put our energy into building safe AI

Elon Musk recently commented on the need to regulate AI, citing it as an existential risk to humanity. As with any human creation, the increasing leverage technology affords us can certainly be used for good or evil, but the premise that we need to fear AI and regulate it this early in its development is not well founded. The first question we might consider is whether what we fear is the apathy or the malevolence that an AI might develop.

I bring this up because Musk himself has previously referred to the development of AI as “summoning the demon,” associating the imagery of evil with it. Any honest assessment of the history of mankind shows us that the most shockingly malevolent intent can arise from human hearts and minds.

See also: Elon Musk calls on government to begin regulating AI

History also shows, however, that technology overwhelmingly advances our shared human experience for good. From the printing press to the Internet, there have always been naysayers who evangelize fear of new technology. Yet, when channeled by leaders for the collective good, these technologies, although disruptive to the known way of life, create a positive evolution in our human experience. AI is no different.

Technology is always neutral by itself

In the hands of responsible, moral leaders, the technology promises to augment human capacities in a manner that could unlock unimagined human potential. AI, like any technology, is neutral. The morality of the technology is a reflection of our collective morality, determined by how we choose to use it.

Imagine any one of history’s dictators with a large nuclear arsenal. If their vengeance weapons had been nuclear-tipped and could reach all points of the earth, how would they have shaped the rest of history? Consider what Vlad the Impaler, Ivan the Terrible, or Genghis Khan would have done, for example. Not only were these men malevolent; they actually rose to be leaders and kings of men. Has technology already developed to a point where a madman can lay waste to the planet? With nuclear, biological, and chemical weapons, the answer is, sadly, yes. We already live with the existential risk that comes from our own malevolence and the multiplicative effect of technology. We don’t need AI for that.

Falling prey to fear at this stage will harm constructive AI development. It has been argued that technology drives history, and that if there is a human purpose, it is to be found in learning, evolving, progressing, and building: exercising our creative potential to free ourselves from the resource limitations that plague us and the scarcity that brings out the worst in us. In this way, Artificial Intelligence, a technology that may mimic the most wondrous human quality, the quality of thought, can be a liberating force and our ultimate achievement. There is far more to gain from AI at this stage.

If that weren’t enough, take a minute to ponder the irreversibility of innovation. No meaningful technology has been developed and then put back in the bottle, so to speak. When the world was fragmented and disconnected, some knowledge was lost from time to time, but it was almost always rediscovered in a distant corner of the globe by some independent thinker with no connection to the original discovery. That is the nature of technology and knowledge: it yearns to be discovered. If we think that regulation and controls will prevent the development of Artificial Intelligence, we are mistaken. What they might do is prevent those with good intentions from developing it. They will not stop the rest.

How would a ban work?

When contemplating bans, it is important to consider whether they can be enforced, and how all parties affected by the ban will actually behave. Game theory, a branch of mathematics concerned with decision-making under conditions of conflict and cooperation, poses a famous problem called the Prisoner’s Dilemma.

The dilemma goes something like this: two members of a gang, A and B, are arrested and locked up separately. If they both betray each other, each serves two years in prison. If A betrays B, but B doesn’t implicate his comrade, A goes free but B serves three years. And if both of them stay silent, they serve a year each. While it would seem that the “honorable” thing to do is to stay silent and serve a year so that the punishment is equal and minimal, neither party can trust that the other will take this honorable course. The reason is that betraying the other offers the dishonorable actor the potential gain of going scot-free. Both A and B must consider that the other might take the course most suitable for his own situation, and in that case, the betrayed party would suffer maximum damage (i.e., three years in prison). Therefore, the rational course of action for both parties is to betray each other and “settle” for two years in prison.
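
To make the logic concrete, here is a minimal sketch in Python (the language, the payoff table, and names like YEARS and is_equilibrium are mine, not the article’s; the sentence lengths come from the setup above). It checks every outcome for a unilateral improvement, and only mutual betrayal survives the check:

    # Prisoner's Dilemma payoffs from the text: years in prison,
    # so lower is better for each prisoner.
    from itertools import product

    ACTIONS = ("silent", "betray")

    # YEARS[(A's action, B's action)] = (years for A, years for B)
    YEARS = {
        ("silent", "silent"): (1, 1),
        ("silent", "betray"): (3, 0),
        ("betray", "silent"): (0, 3),
        ("betray", "betray"): (2, 2),
    }

    def is_equilibrium(a, b):
        """True if neither prisoner can shorten his own sentence
        by unilaterally switching actions."""
        a_years, b_years = YEARS[(a, b)]
        a_improves = any(YEARS[(alt, b)][0] < a_years for alt in ACTIONS)
        b_improves = any(YEARS[(a, alt)][1] < b_years for alt in ACTIONS)
        return not (a_improves or b_improves)

    for a, b in product(ACTIONS, repeat=2):
        mark = "  <- stable outcome" if is_equilibrium(a, b) else ""
        print(f"A {a:>6}, B {b:>6} -> years {YEARS[(a, b)]}{mark}")

Running it marks only (betray, betray) as stable: from every other outcome, at least one prisoner can do better by switching.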

Let’s extend this framework and see how it applies to an AI ban. AI is clearly a technology with a transformative impact in every field of endeavor, from medicine, manufacturing, and energy to defense and government. If AI were banned in military endeavors, there would be multiple parties (countries, in this case) that would begin to think like A and B in our Prisoner’s Dilemma. If they honor the ban but others “betray” it by surreptitiously continuing the development of weaponized AI, the advantage for the defectors is maximized, while the downside for those who honor the ban is tremendous.

If all parties voluntarily give up such developments and honor the ban, then we have a best-case scenario. But there is no assurance that this will be the case; much like the prisoners, these countries are making decisions behind closed doors with imperfect knowledge of what the others might be up to. And lastly, if all parties develop such technology, the scenario is less rosy than honoring the ban (risks exist), but all parties are at least aware that they will face resistance if any one of them decides to use AI weapons; that is, there is a deterrent in place.
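
The same best-response check makes the instability of a voluntary ban explicit. In the sketch below (again Python; the utility numbers are hypothetical, chosen only to preserve the ordering the paragraphs above describe, and the honor/develop labels are mine), developing is the better reply no matter what the rival does:

    # Illustrative utilities (higher is better) for two countries, X and Y.
    # The values are assumptions encoding only the ordering in the text:
    # defecting against an honoring rival pays most, mutual honoring beats
    # mutual development, and honoring against a defector is worst.
    ACTIONS = ("honor", "develop")

    UTILITY = {  # (X's action, Y's action) -> (X's payoff, Y's payoff)
        ("honor", "honor"): (3, 3),      # best case: the ban holds
        ("honor", "develop"): (0, 4),    # honoring party maximally exposed
        ("develop", "honor"): (4, 0),
        ("develop", "develop"): (1, 1),  # riskier, but mutual deterrence
    }

    # Whatever the rival does, "develop" is X's better reply, so the
    # ban unravels exactly as in the prisoners' case.
    for rival in ACTIONS:
        best = max(ACTIONS, key=lambda mine: UTILITY[(mine, rival)][0])
        print(f"rival chooses {rival:>7} -> X's best response: {best}")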

Should we hope that AI is used for good? To heal rather than to harm? Should we commit ourselves to this goal and work towards it? Of course. But not at the cost of deluding ourselves into thinking that we can simply ban our problems away. AI is here, and it is here to stay. It will keep getting smarter and more capable. Knowledge wishes to be discovered, and no ban can keep an innovation from surging forth when its time has arrived. Rather than going down the path of diktats and bans, we need to redouble our investment in even more rapid AI advances in areas such as Explainable AI, ethical systems, and AI safety. These are, and can further develop into, real technologies, capabilities, and algorithms that will enable the safe handling of accidents and counter deliberate misuse. Our own work at SparkCognition focuses on making AI systems explainable, so that decisions don’t just pop out of a black box with no rationale, but come with evidence and explanation.

Beyond our labs, a tremendous amount of work is being done in the broader community, including at the University of Texas at Austin, in thinking through various aspects of safety in AI systems. Let’s move past “ban thinking,” roll up our sleeves, and commit to the hard work of developing the frameworks that will allow humanity to positively leverage AI, our greatest invention, and enable our children to reap unimaginable rewards.

The author is a serial entrepreneur and inventor based in Austin, Texas. He is the founder and CEO of SparkCognition, Inc., an award-winning machine learning/AI-driven cognitive analytics company; a member of the Board of Advisors for IBM Watson; a member of the Forbes Technology Council; and a member of the Board of Advisors for the University of Texas at Austin Department of Computer Science.
