
RPA Get Smarter – Ethics and Transparency Should be Top of Mind

The early incarnations of Robotic Process Automation (or RPA) technologies followed simple, fixed rules.  These systems were akin to user interface testing tools: instead of a human operator clicking on areas of the screen, software (or a ‘robot’, as it came to be known) would do so instead.  This freed up time users spent on exceedingly low-level tasks such as scraping content from the screen, copying and pasting, and so on.

Whilst basic in functionality, these early implementations of RPA brought clear speed and efficiency advantages.  In the following years, the tools evolved to encompass basic workflow automation, but the process remained rigid, with limited applicability across an enterprise.

Shortly after 2000, automation companies such as UiPath, Automation Anywhere, and Blue Prism were founded (albeit some under different names at their initial incarnation).  With a clear focus on automation, these companies started making significant inroads into the enterprise market.

RPA gets smarter

Over the years, the functionality of RPA systems has grown significantly.  No longer are they the rigid tools of their early incarnations; instead, they offer much smarter process automation.  UiPath, for example, lists 20 automation products on its website across groups such as Discover, Build, Manage, Run, and Engage.  Its competitors also have comprehensive offerings.

Use cases for Robotic Process Automation are now wide and varied.  For example, with smart technology built-in, rather than just clicking on-screen regions, systems may now automatically extract content from invoices (or other customer-submitted data) and convert this into a structured database format.  These smart features may well be powered by forms of Artificial Intelligence, albeit hidden under the hood of the RPA application itself.  Automation Anywhere has a good example of this exact use case.
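To make the unstructured-to-structured step concrete, here is a minimal, hypothetical sketch in Python.  It uses simple regular expressions on already-digitized invoice text; the commercial tools mentioned above rely on trained document-understanding models rather than hand-written patterns, so treat this only as an illustration of the inputs and outputs involved.

```python
import re

def extract_invoice_fields(text: str) -> dict:
    """Pull a few common fields out of raw invoice text.

    Illustrative only: real RPA document-extraction features use
    trained models, not brittle regex patterns like these.
    """
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*:?\s*(\S+)",
        "date": r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})",
        "total": r"Total\s*:?\s*\$?([\d,]+\.\d{2})",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text, re.IGNORECASE)
        record[field] = match.group(1) if match else None
    return record

sample = "Invoice No: INV-1042\nDate: 2021-03-15\nTotal: $1,250.00"
print(extract_invoice_fields(sample))
# {'invoice_number': 'INV-1042', 'date': '2021-03-15', 'total': '1,250.00'}
```

The output of a step like this is exactly what downstream automation needs: a structured record that can be written to a database row rather than a screen region.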

Given the breadth of use cases now addressed by RPA technologies across enterprise organizations, it is hard to see a development and product expansion route that does not add more AI functionality to the RPA tools themselves.  Whilst still being delivered in the package of Robotic Process Automation software, it is likely that this functionality will move from being hidden under the hood and powering specific use cases (such as content extraction) to being a capability in its own right, directly accessible to the user.

The blurring of AI & RPA

The RPA vendors will compete with the AI vendors that sell automated machine learning software to the enterprise.  Termed AutoML, these tools enable users with little or no data science experience (often termed citizen data scientists) to build custom AI models with their data.  These models are not restricted to specifically defined use cases but can be anything the business users wish to (and have the supporting data to) build.

With our example above, once the data has been extracted from the invoices, why not let the customer build a custom AI model to classify those invoices by priority, without bringing in or connecting to an additional third-party AI tool?  This is the logical next step in the RPA marketplace, and some leaders in the space already have some of this functionality in place.
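As an illustration of what such a citizen-built priority model amounts to conceptually, the sketch below trains a deliberately tiny nearest-centroid classifier on made-up historical invoice data (amount and days overdue).  This is not how any particular AutoML product works internally; it is only a stand-in for the train-then-classify flow a business user would get.

```python
from statistics import mean

def train_centroids(examples):
    """Learn one centroid per label from historical labelled invoices.

    Each example is ((amount, days_overdue), label).  A nearest-centroid
    rule is a toy stand-in for an AutoML-built model.
    """
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {
        label: tuple(mean(dim) for dim in zip(*feats))
        for label, feats in by_label.items()
    }

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared distance)."""
    return min(
        centroids,
        key=lambda lbl: sum((f - c) ** 2 for f, c in zip(features, centroids[lbl])),
    )

# Hypothetical historical data: (amount, days overdue) -> priority label.
history = [
    ((12000, 45), "high"), ((9500, 60), "high"),
    ((300, 5), "low"), ((450, 2), "low"),
]
model = train_centroids(history)
print(classify(model, (11000, 50)))  # high
```

The important point is not the algorithm but the workflow: the model is trained on the business's own historical data, inside the automation platform, by someone who is not a data scientist.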

This blurring of the lines between Robotic Process Automation and Artificial Intelligence is particularly topical right now because, alongside the specialized RPA vendors, established technology companies such as Microsoft are releasing their own low-code RPA solutions to the market.  Taking Microsoft as an example, it has a long history with Artificial Intelligence.  Via Azure, it offers many different AI tools, including tools to build custom AI models and a dedicated AutoML solution.  Most relevant is the push to combine its products into unique value propositions.  In our context here, that means it is likely that low-code RPA and Azure’s AI technologies will be closely aligned.

The evolving discussion of AI ethics

Evolving at the same time as RPA and AI technologies are the discussions, and in some jurisdictions regulations, on the ethics of AI systems.  Valid concerns are being raised about the ethics of AI and the diversity of organizations that build AI.

In general, these discussions and regulations aim to ensure that AI systems are built, deployed, and used in a fair, transparent and responsible manner.  There are critical organizational and ethical reasons to ensure your AI systems behave ethically.

When systems are built that operate on data that represents people (such as in HR, Finance, Healthcare, Insurance, etc.), the systems must be transparent and unbiased; even beyond use cases built with people’s data, organizations are now demanding transparency in their AI so that they can effectively assess the operational risks of deploying that AI in their business.

A typical approach is to define the business’s ethical principles, create or adopt an ethical AI framework, and continually evaluate the AI systems against that framework and those principles.

As with RPA, the development of AI models may be outsourced to third-party companies, so evaluating the transparency and ethics of these systems becomes even more important given the lack of insight into the build process.

However, most public and organizational discussions of ethics are usually only in the context of Artificial Intelligence (where the headlines in the media are typically focused).  For this reason, developers and users of RPA systems could feel that these ethical concerns may not apply to them as they are ‘only’ working with process automation software.

Automation can impact people’s lives

If we return to our invoice-processing example, we saw the potential for a custom AI model within the RPA software to automatically prioritize invoices for payment.  The technology shift required to change this use case to one in healthcare, prioritizing health insurance claims instead of invoices, would be minor.

The RPA technology could still extract data from claims documents automatically and translate this into a structured format.  The business could then train a custom classification model (using historical claims data) to prioritize payments, or conversely, flag payments to be put on hold pending review.

However, here the ethical concerns should now be very apparent.  The decision made by this model, held within the RPA software, will directly affect individuals’ health and finances.

As seen in this example, what may seem like relatively benign automation software is actually evolving to either reduce (or potentially completely remove) the human in the loop from critical decisions that impact people’s lives.  The technology may or may not be explicitly labeled and sold as Artificial Intelligence; however, the notions of ethics should still very much be top of mind.

We need a different lens

It may be better to view these ethical concerns not through a lens of AI, but through one focussed on automated algorithmic decisioning.

The reality is that it is not just AI technology making decisions that should be of concern, but any automated approach that lacks sufficient human oversight, whether powered by a rules-based system, Robotic Process Automation, shallow machine learning, or complex deep learning.
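One concrete form that sufficient human oversight can take is a confidence gate in front of the automated decision.  The sketch below is hypothetical: it auto-approves a claim only when a model's score clears a high threshold and routes everything else, including confident rejections, to a human reviewer.  The threshold value is illustrative; a real deployment would set and audit it per use case.

```python
def route_claim(model_score: float, auto_pay_threshold: float = 0.95) -> str:
    """Gate an automated claim decision behind human oversight.

    Only very confident 'routine' scores are paid automatically;
    everything else goes to a person.  The threshold here is an
    illustrative assumption, not a recommendation.
    """
    if model_score >= auto_pay_threshold:
        return "auto_pay"
    return "human_review"

print(route_claim(0.99))  # auto_pay
print(route_claim(0.40))  # human_review
```

Note the asymmetry in the design: the system never rejects on its own, so the automated path can only speed up good outcomes, while contested or uncertain cases keep a human in the loop.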

Indeed if you look to the UK’s recently announced Ethics, Transparency and Accountability Framework, which is targeted at the public sector, you will see that it is focussed on ‘Automated Decision-Making.’  From the guidance document, “Automated decision-making refers to both solely automated decisions (no human judgment) and automated assisted decision-making (assisting human judgment).”

Similarly, the GDPR has been in force in the European Union for some time now, making clear provisions for individuals’ rights concerning automated individual decision-making.  The European Commission gives the following definition: “Decision-making based solely on automated means happens when decisions are taken about you by technological means and without any human involvement.”

Finally, the state of California proposed in 2020 the Automated Decision Systems Accountability Act, with similar goals and definitions.  Within this Act, Artificial Intelligence (but not Robotic Process Automation explicitly) is called out: “‘Automated decision system’ or ‘ADS’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts persons”, with systems to be assessed for accuracy, fairness, bias, discrimination, privacy, and security.  It is therefore clear that this more general lens is already recognized in public policymaking.

Enterprises should apply governance to RPA too

As organizations are putting in place teams, processes, and technologies to govern the development and use of AI within their organization, these must be extended to include all automated decisioning systems.  To reduce the burden and facilitate operation at scale within large organizations, there should not be one set of processes and tools for RPA and one for AI (or indeed, for each AI model).

This would result in a huge manual process to gather the relevant information, make this information comparable, and map it to the chosen process framework.  Instead, a unified approach should allow for a common set of controls that lead to informed decision-making and approvals.

Nor should this be seen as at odds with the adoption of RPA or AI; clear guidelines and approvals enable teams to go ahead with implementation, knowing the bounds within which they can operate.  When using the more general lens, rather than one targeted only at AI, the implication becomes clear: ethics should be top of mind for developers and users of all automated decisioning systems, not just AI, and that includes Robotic Process Automation.

Image Credit: pixabay; pexels; thank you!


Stuart Battersby
Chief Technology Officer @ Chatterbox Labs

Dr Stuart Battersby is a technology leader and CTO of Chatterbox Labs. With a PhD in Cognitive Science from Queen Mary, University of London, Stuart now leads all research and technical development for Chatterbox's ethical AI platform AIMI.
