Expectations for chatbots and virtual assistants powered by Machine Learning and Artificial Intelligence (AI) are extremely high. With investment in the industry already topping EUR 20 billion, analysts are labelling the technology the ‘next big thing’. However, 92% of C-suite executives are concerned about the potential negative impact of rogue data and analytics on their reputation. Take back control by addressing key issues around governance and bias to protect your brands, trust and profits.
As chatbots gain in importance in marketing, sales and customer service, your prospects and customers are more and more likely to come into contact with them. In some cases, their conversation with the bot will be the single point of contact with your organisation. Therefore, it is crucial that these bots reflect your values and knowledge base.
Put simply, chatbots are programmed to answer users’ questions in a conversational form. By recognising language use and sentence structure, bots can pick up on a specific tone or emotion and then reflect it in their answer to sound more human and relatable.
The challenge for bots is to understand a user’s intention, also known as ‘intent’. In order to do so, they need to comprehend the context of the user’s question. To identify intent and know how to respond, bots are programmed with algorithms based on so-called training data, which is made up of masses of information, parameters and answers.
AI is used to make the interaction more engaging, conversational and lively. It also enables bots to learn from conversations and improve their ability to offer appropriate responses and solutions. However, bots are only as good as the training data they are built on.
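To make the idea of intent recognition concrete, here is a deliberately simplified sketch. Real chatbot platforms use far more sophisticated language models, and all the intents and phrases below are invented for illustration, but the principle is the same: the bot can only map a question to an answer because its training data contained similar examples.

```python
# A toy intent classifier: it matches a user's question to the training
# example with the most overlapping words. The intents and phrases are
# hypothetical examples, not a real product's training data.

def tokens(text):
    """Lowercase a sentence and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

# Hypothetical training data: each example pairs a phrase with an intent.
TRAINING_DATA = [
    ("Where is my order", "order_status"),
    ("Has my package shipped", "order_status"),
    ("I want my money back", "refund"),
    ("How do I return an item", "refund"),
    ("What are your opening hours", "store_info"),
]

def classify(question):
    """Return the intent whose training phrase shares the most words
    with the user's question, or None if nothing overlaps."""
    best_intent, best_overlap = None, 0
    q = tokens(question)
    for phrase, intent in TRAINING_DATA:
        overlap = len(q & tokens(phrase))
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

print(classify("Where is my package"))  # order_status
```

Notice that a question about a topic absent from the training data simply cannot be understood, which is exactly why the quality and coverage of that data matter so much.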
If the data is poorly sourced or biased, the results can be disastrous. Google’s facial recognition tool had trouble recognising non-Caucasians because the photographs of faces it had been trained on were predominantly of white people. At another company, an HR bot was discovered to favour male job applicants. Why? Its training data was drawn from the company’s historical recruitment records, which carried an anti-female bias.
Avoid disastrous scenarios like the ones above by testing your bots for bias before deployment. Consider the following questions:
Answering the above will help determine whether bias has been unintentionally encoded into your system so you can take corrective measures. In fact, bias can creep in at any stage of the bot development process, including framing the problem the bot will address, and data collection and preparation. We will dive deeper into this topic in a future blog.
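One simple pre-deployment check is to look at whether the training data itself over-represents one group, as in the HR example above. The sketch below is a minimal illustration of that idea; the records and field names are hypothetical, and a real audit would go well beyond counting.

```python
# A minimal sketch of one bias check: is the training data skewed
# towards one group? The records below are hypothetical.
from collections import Counter

def group_balance(records, field):
    """Return each group's share of the training records for `field`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical historical recruitment data used to train an HR bot.
records = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
]

print(group_balance(records, "gender"))  # {'male': 0.75, 'female': 0.25}
```

A skew like this does not prove the bot will discriminate, but it is a red flag worth investigating before the system ever talks to a candidate.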
Avoid bias in future by tackling issues around AI and data governance. If governance is causing you sleepless nights, you are not alone. As mentioned in the introduction, when Forrester surveyed C-level executives on our behalf, 92% said they were concerned about the threat that faulty data and analytics posed to their reputation. And just 35% said they fully trusted their own organisation’s use of analytics.
In response to these concerns, KPMG developed the AI in Control framework. Using tested AI governance models, the framework addresses risks involved in AI, and helps organisations achieve greater confidence and transparency throughout the lifecycle of a bot. It includes recommendations and best practices for establishing effective AI governance, performing assessments, and integrating continuous monitoring. To give you a taster, here are three tips to improve AI governance in your organisation:
These tips are a start to establishing reliable AI governance and avoiding the dangers bias poses to your brands and bottom line. KPMG’s Digital Advisor platform is designed to manage the governance of multiple chatbots in your organisation. If you have any questions or would like to experience how the technology can work for your organisation, contact us now.
Pank Seelen is a member of the Strategy and Investment Team at KPMG’s Smart Tech Solutions.
Remember Tay – the 2016 bot designed by Microsoft to learn from human behaviour on Twitter? It took less than 24 hours for trolls to corrupt it beyond repair.
Microsoft described Tay as an experiment in “conversational understanding”. The more people communicated with Tay, the better it was supposed to get. Instead, people began sabotaging the bot within minutes of launch by tweeting it misogynistic, racist and alt-right remarks.
They knew Tay had a repeat-after-me function and quickly taught it to share hate messages of its own. Worryingly, some of the tweets did not simply parrot what it had been fed: thanks to the Deep Learning capabilities programmed into it, Tay had become bigoted.
Microsoft retired the bot within a day of launch. Later in 2016, it introduced Zo, a new bot designed to engage teens on social media platforms. She (for her avatar and vocabulary are clearly female) refuses to be drawn on religion, ethnicity or other potentially controversial topics, showing it is possible to protect chatbots against debasement.
Through Smart Tech Solutions, KPMG unleashes its worldwide knowledge and experience in the areas of growth, performance, risk and regulation. With the help of data and technology, our smart tech solutions create insights into performance, opportunities and threats.