
Well, AI, How Do You Explain That?!

AI explainability is one of the most significant barriers to adoption, but revealing the logic behind a model's decisions can take the human-machine relationship to the next level.

The AI boom is upon us, yet a closer look at specific use cases reveals a major barrier to adoption. In every vertical, businesses are struggling to make the most of AI’s promise. The biggest pain point? AI explainability.

The term refers to the fact that artificial intelligence and machine learning systems are infamously opaque. Most advanced AI models are black boxes that cannot explain how they reached a specific decision: data goes in, results come out, and attempts to reverse engineer the decision-making process typically fail.

AI explainability is an issue of trust.

A lack of AI explainability can be a deal-breaker for adoption. Businesses often must justify, whether legally, morally, or practically, how these models arrive at their decisions. Consider a law enforcement agency using a facial recognition platform: if the AI system tags a person as a suspect, the process that led it to that individual must stand up to scrutiny.

But the more common reason a lack of explainability blocks adoption is trust. Simply put, many people find it difficult to embrace results when they lack insight into the factors that determined them. At times, it can feel as though human users are simply conceding to the whims of the machine.

The AI trust issue also affects the business-customer relationship. According to a recent Accenture survey, “72% of executives report that their organizations seek to gain customer trust and confidence by being transparent in their AI-based decisions and actions.” That, of course, is only possible if the AI is transparent to the business users themselves.

Revealing the rationale.

In its report, “The Top 10 Data and Analytics Technology Trends for 2019,” Gartner puts explainable AI at number 4. It notes that “To build trust with users and stakeholders, application leaders must make [AI] models more interpretable and explainable.” For people to collaborate with machines to solve complex problems, AI systems need to reveal the “why.”

The makers of advanced AI systems understand this communication imperative, and some are already implementing transparency mechanisms. Per Gartner, “Explainable AI in data science and ML platforms, for example, auto-generates an explanation of models in terms of accuracy, attributes, model statistics and features in natural language.”
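To make that concrete, here is a minimal sketch of what such auto-generated, natural-language explanations can look like. It assumes a scikit-learn model; the feature names and data are hypothetical stand-ins, not a reference to any specific vendor's platform.

```python
# A minimal sketch of auto-generated model explanations in natural language.
# Assumes scikit-learn; the feature names and data are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["income", "credit_history", "debt_ratio", "employment_years"]

# Synthetic data standing in for a real loan-approval dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Translate model internals (accuracy, feature importances) into plain language.
accuracy = model.score(X, y)
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)

print(f"Model accuracy on this data: {accuracy:.0%}")
for name, weight in ranked:
    print(f"- '{name}' accounted for {weight:.0%} of the model's decision weight")
```

Real platforms go much further, but even this level of summary turns an opaque model into something a business user can interrogate.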

AI engineers have long treated chess as a proving ground for AI development because of its rule-based properties. But while chess-playing AI surpassed the best human players by the late 1990s, these systems could not explain their playing strategies until recently. The best explanations were merely statistical: a system might justify a recommendation by noting that the move improved the player’s winning odds by a fraction of a percent.

Explaining the next generation of AI.

Next-generation systems model human thinking: they provide rationales in much the same way a person would. A cutting-edge chess engine, for instance, might explain a move like this: it “removes the queen from an unsupported square; defends the two unsupported rooks; and enables the threat of winning the knight.” This explanation ticks all the boxes Accenture suggests for optimal explainable AI. It is comprehensible, succinct, actionable, reusable, accurate, and complete.
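As a toy illustration of how such human-style rationales can be generated, the sketch below uses the open-source python-chess library to translate board facts into plain-language reasons for a move. The explain_move helper and its wording are hypothetical; a real engine's evaluation logic is out of scope here.

```python
# A toy sketch: turn the board facts created by a move into plain-language
# reasons, in the spirit of the rationale quoted above. Requires python-chess.
import chess

def explain_move(board: chess.Board, move: chess.Move) -> list[str]:
    """List the threats and defenses a move creates, in plain language."""
    mover = board.turn  # side making the move, captured before pushing
    board.push(move)
    reasons = []
    for square in chess.SQUARES:
        piece = board.piece_at(square)
        if piece is None or square == move.to_square:
            continue
        # Does the moved piece now bear on this square?
        if move.to_square in board.attackers(mover, square):
            verb = "defends" if piece.color == mover else "attacks"
            reasons.append(f"{verb} the {chess.piece_name(piece.piece_type)} "
                           f"on {chess.square_name(square)}")
    board.pop()  # leave the caller's board unchanged
    return reasons

board = chess.Board()
board.push_san("e4"); board.push_san("e5")
print(explain_move(board, chess.Move.from_uci("g1f3")))
# e.g. ['defends the king on e1', 'defends the pawn on d2', ...]
```

A production engine would rank and filter these facts by relevance, but the principle is the same: the explanation is built from the game's own vocabulary, not from raw win probabilities.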

Explainable AI systems better serve both the user and the technology. When users understand the AI, they can make their own judgment calls, and that exchange creates a feedback loop that can improve the AI’s functionality. Explainability also increases engagement with AI, which gives systems more opportunities to learn and improve.

It’s the human’s call.

The success of future AI platforms will depend on transparency. These systems must give users the confidence to rely on the machine’s decisions. When we invest in AI systems, we want to know how they arrived at their judgments. Already, platforms are enhancing their AI engines with a conversational layer that improves explainability, giving the user complete transparency into the variables the system considered and its reasoning.

The value here is twofold. First, users feel more confident using AI and are thus much more inclined to rely on the technology. Even more importantly, they can apply their own judgment to the machine’s call and, as in the chess example, learn from its decision-making process to improve their own game.

Ultimately, that’s the real goal of AI: not to subordinate people to the machine’s decisions, but to enable professionals to augment and expand their knowledge and performance. Explainable AI is the key to that relationship.


