Banishing bias in chatbots

July 31, 2019
Pank Seelen

Expectations for chatbots and virtual assistants powered by Machine Learning and Artificial Intelligence (AI) are extremely high. With investment in the industry already topping EUR 20 billion, analysts are labelling the technology the ‘next big thing’. However, 92% of C-suite executives are concerned about the potential negative impact of rogue data and analytics on their reputation. Take back control by addressing key issues around governance and bias to protect your brands, your customers’ trust and your profits.

As chatbots gain importance in marketing, sales and customer service, your prospects and customers are increasingly likely to come into contact with them. In some cases, their conversation with the bot will be their only point of contact with your organisation. It is therefore crucial that these bots reflect your values and knowledge base.

The importance of understanding users’ intent

Put simply, chatbots are programmed to answer users’ questions in a conversational form. By recognising language use and sentence structure, bots can pick up on a specific tone or emotion and then reflect it in their answer to sound more human and relatable.

The challenge for bots is to understand a user’s intention, also known as ‘intent’. To do so, they need to comprehend the context of the user’s question. To identify intent and know how to respond, bots are programmed with algorithms built on so-called training data: large collections of example questions, parameters and answers.

AI is used to make the interaction more engaging, conversational and lively. It also enables bots to learn from conversations and improve their ability to offer appropriate responses and solutions. However, bots are only as good as the training data they are built on.
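To see why the training data matters so much, consider a minimal sketch of how an intent classifier is typically built. The library choice (scikit-learn), the intents and the example phrases below are illustrative assumptions, not a description of any particular chatbot product:

```python
# Minimal illustration of intent classification learned from labelled training data.
# The intents, example phrases and library choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: example utterances paired with the intent they express.
# Whatever gaps or skews exist here will be reproduced by the bot.
training_utterances = [
    "Where is my order?", "My package has not arrived",
    "I want my money back", "How do I get a refund?",
    "What are your opening hours?", "When are you open?",
]
training_intents = [
    "track_order", "track_order",
    "refund", "refund",
    "opening_hours", "opening_hours",
]

# A simple text classifier: TF-IDF features fed into logistic regression.
intent_classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_classifier.fit(training_utterances, training_intents)

# At runtime the bot predicts an intent and looks up a canned response.
responses = {
    "track_order": "Let me check the status of your order.",
    "refund": "I can help you start a refund request.",
    "opening_hours": "We are open from 9am to 5pm on weekdays.",
}

user_message = "When will my parcel arrive?"
predicted_intent = intent_classifier.predict([user_message])[0]
print(predicted_intent, "->", responses[predicted_intent])
```

The bot can only map new questions back onto the patterns present in its training examples. If those examples under-represent certain customers, phrasings or languages, every answer the bot gives will inherit that gap.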


How flawed data impacts chatbots

If the data is poorly sourced or biased, the results can be disastrous. Google’s facial recognition tool had trouble recognising non-Caucasian faces because the photographs it had been trained on were mostly of white people. At another company, an HR bot was found to favour male job applicants. Why? Its training data included the company’s past recruitment records, which were skewed against female candidates.

Taking action against bias

Avoid disastrous scenarios like the ones above by testing your bots for bias before deployment. Consider the following questions:

  • What are the knowledge base and values of your organisation? How will you make sure that your bot reflects them?
  • How does your bot perform if it interacts with someone (or another bot) whose values run counter to yours?
  • Is the bot’s initial set of questions and answers diverse enough in terms of culture, gender, ethnicity and socio-economic background?
  • How familiar are you with the training data you’re using and where it was sourced?
  • Does your bot have an avatar with a traditional gender, ethnic or racial identity? If so, does it reference any stereotypes?
  • How does your bot respond to gendered or sexist remarks? And to racial epithets or religious slurs? Are the responses appropriate to people in the targeted group?

Answering the above will help you determine whether bias has been unintentionally encoded into your system, so you can take corrective measures. In fact, bias can creep in at any stage of the bot development process, from framing the problem the bot will address to collecting and preparing the data. We will dive deeper into this topic in a future blog.
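One way to make the checklist above actionable is to script part of it: probe the bot with prompts that differ only in a demographic attribute and review any answers that diverge. The sketch below assumes a hypothetical get_bot_response function standing in for your bot’s API; the prompt templates and groups are illustrative placeholders, not an exhaustive test suite:

```python
# Minimal sketch of a pre-deployment bias check: send the bot prompts that
# differ only in a demographic attribute and flag answers that diverge.
# get_bot_response is a hypothetical stand-in for the bot's API, and the
# templates and groups below are illustrative placeholders.

def get_bot_response(message: str) -> str:
    """Replace this stub with a real call to the bot under test."""
    return "placeholder answer"

PROMPT_TEMPLATES = [
    "I'm a {group} engineer. Can you recommend a career path?",
    "My {group} colleague asked about your refund policy. What should I tell them?",
]
GROUPS = ["female", "male", "Nigerian", "Dutch", "Muslim", "Christian"]


def run_bias_checks() -> list:
    """Collect every template whose answers differ across groups."""
    findings = []
    for template in PROMPT_TEMPLATES:
        responses = {g: get_bot_response(template.format(group=g)) for g in GROUPS}
        # Crude check: exact string comparison. In practice you might compare
        # sentiment or semantic similarity, or route divergences to a reviewer.
        if len(set(responses.values())) > 1:
            findings.append({"template": template, "responses": responses})
    return findings


if __name__ == "__main__":
    for finding in run_bias_checks():
        print("Divergent answers for:", finding["template"])
        for group, answer in finding["responses"].items():
            print(f"  {group}: {answer}")
```

Identical answers are not automatically unbiased, and some differences may be legitimate, so treat the output as a review queue for a human tester rather than a verdict.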

Governance: the bigger picture

Avoid bias in the future by tackling issues around AI and data governance. If governance is causing you sleepless nights, you are not alone. As mentioned in the introduction, when Forrester surveyed C-level executives on our behalf, 92% said they were concerned about the threat that faulty data and analytics pose to their reputation. And just 35% said they fully trusted their own organisation’s use of different types of analytics.

Staying in control of AI

In response to these concerns, KPMG developed the AI in Control framework. Built on tested AI governance models, the framework addresses the risks involved in AI and helps organisations achieve greater confidence and transparency throughout the lifecycle of a bot. It includes recommendations and best practices for establishing effective AI governance, performing assessments and integrating continuous monitoring. To give you a taster, here are three tips to improve AI governance in your organisation:

  1. Develop AI design criteria and establish controls in an environment that fosters innovation and flexibility.
  2. Integrate a risk management framework to identify and prioritise business-critical algorithms and incorporate an agile risk mitigation strategy. Address cybersecurity, integrity, fairness and resilience.
  3. Design and implement end-to-end AI governance and operating models across the entire lifecycle: strategy, building, training, evaluating, deploying, operating and monitoring AI.

What you can do now

These tips are a start to establishing reliable AI governance and avoiding the dangers bias poses to your brands and bottom line. KPMG’s Digital Advisor platform is designed to manage the governance of multiple chatbots in your organisation. If you have any questions or would like to experience how the technology can work for your organisation, contact us now.

Pank Seelen is a member of the Strategy and Investment Team at KPMG’s Smart Tech Solutions.

 

Tay: the downside of learning by example

Remember Tay – the 2016 bot designed by Microsoft to learn from human behaviour on Twitter? It took less than 24 hours for trolls to corrupt it beyond repair.

Microsoft described Tay as an experiment in “conversational understanding”. The more people communicated with Tay, the better it was supposed to get. Instead, people began sabotaging the bot within minutes of launch by tweeting it misogynistic, racist and alt-right remarks.

They knew Tay had a repeat-after-me function and quickly taught it to share hate messages of its own. Worryingly, some of its tweets did not simply parrot what it had been fed: thanks to the Deep Learning capabilities programmed into it, Tay had become bigoted.

Microsoft retired the bot within a day of launch. It later introduced Zo, a new bot designed to engage teens on social media platforms. She (for her avatar and vocabulary are clearly female) refuses to be drawn on religion, ethnicity or other potentially controversial topics, proving it is possible to protect chatbots against debasement.