Michael McTear is an Emeritus Professor at Ulster University. He has been researching spoken dialogue systems for more than 20 years and is the author of several books, including, most recently, Conversational AI: Dialogue Systems, Conversational Agents, and Chatbots (Springer, 2021). Michael has delivered keynote addresses and tutorials at many academic conferences and workshops. He is currently involved in several research and development projects investigating the use of conversational agents in mental health support and in the home monitoring of older adults.
What initially attracted you to machine learning?
Until recently I have worked with rule-based approaches to conversational AI, particularly in the area of dialogue management where the basic idea is to develop rules that determine the agent’s strategies in a dialogue. However, with recent advances in machine learning, first in reinforcement learning and then in deep learning, I have found that these approaches can potentially address some of the issues faced by rule-based methods, such as the problem of scaling and the need to write multiple rules to cater for more complex dialogue flows.
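To make the rule-based idea concrete, here is a minimal sketch (illustrative only, not from Michael's systems) of a dialogue manager in which hand-written rules map the current dialogue state and the user's intent to the agent's next move; every state and intent name below is invented.

```python
# Toy rule-based dialogue manager: (state, intent) -> (agent response, next state).
# All names are hypothetical; a real system would have far more rules and states,
# which is exactly the scaling problem mentioned above.

RULES = {
    ("start", "greet"): ("Hello! Would you like to book an appointment?", "offer_booking"),
    ("offer_booking", "affirm"): ("Great. What day suits you?", "ask_day"),
    ("offer_booking", "deny"): ("No problem. Is there anything else I can help with?", "start"),
    ("ask_day", "inform_day"): ("Thanks, I have noted that day.", "start"),
}

def next_move(state: str, intent: str) -> tuple[str, str]:
    """Return (agent_response, next_state); fall back when no rule matches."""
    return RULES.get((state, intent), ("Sorry, I didn't understand that.", state))

if __name__ == "__main__":
    state = "start"
    for intent in ["greet", "affirm", "inform_day"]:
        response, state = next_move(state, intent)
        print(f"user intent: {intent:12s} -> agent: {response}")
```

Each new dialogue path requires new rules of this kind, which is why learned approaches become attractive as the flows grow more complex.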
You’ve been working on voice and conversational AI for over 20 years. What made you focus on this field?
I have been interested in how conversation works for a long time. In my PhD thesis I studied the development of conversational competence in young children, and this was the topic of my first book. Later I became intrigued by the idea that computers could engage in conversation in a human-like way, and I have followed developments in this area ever since. At first it was very much a minor area within AI, but within the past few years it has become very important, as the large tech companies have adopted it as an area of strategic interest.
One of your most recent projects is ChatPal, a chatbot to promote mental well-being in rural areas. Could you discuss the challenges behind building a chatbot for users who may not be tech savvy or familiar with the concept of chatbots?
Many people are familiar with voice assistants on their smartphones and on smart speakers such as Amazon Alexa. Young people use their phones mainly to text, so they are familiar with the idea of interacting with a text-based chatbot. However, when it comes to interacting with a chatbot that is specialised for a particular domain, as in our ChatPal project, which was developed to promote mental well-being in rural areas, we found that some users had expectations of the technology, based on their experiences with Alexa and similar assistants, that exceeded what we were able to offer with more limited resources. We tried to address the needs of users who were not tech savvy or familiar with chatbots through initial living lab sessions, as well as by making sure that interactions with the chatbot were natural and intuitive.
What are some of the challenges behind building chatbots that are focused on mental health?
There is the danger that some users might expect more from the chatbot than is possible with current technology. We did not want to become involved in any diagnosis, as we felt this was too risky, and there have been reports of chatbot responses in such situations that could be considered harmful or even dangerous. We were guided by the requirements of various ethics committees as well as by recognised standards for the design and development of chatbots. Another issue is that users differed in how they used the chatbot. Some gave up quickly when they experienced technical issues, whereas others were prepared to persist. There was also an age-related difference: younger users were happy to interact with our text-based chatbot, whereas older users were less comfortable with this sort of interface.
Some of the apps that you’ve worked on offer action plans for users. How do you effectively generate user motivation in an app?
To do this it is necessary to create and maintain a profile for each user that reflects things like their upcoming appointments, medications, general preferences, and what they have discussed in previous conversations with the chatbot. Users often say that they want the chatbot to be aware of their individual needs and to keep track of what has been discussed before, rather than providing a more generic and less adaptive interaction.
Set against this, there are issues of data privacy, and users also express concerns about the use of their private information. There is a delicate balance to be struck here, and of course there is an ever-increasing amount of legislation to control the ethical use of AI in public and private life.
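As a rough illustration of the kind of per-user profile described above, here is a minimal sketch (not taken from the ChatPal project; all field names are invented) of a record the chatbot could consult to personalise its replies across sessions.

```python
# Hypothetical user profile for a personalised chatbot: appointments,
# medications, preferences, and topics discussed in earlier conversations.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    appointments: list[str] = field(default_factory=list)   # e.g. "GP visit on Friday"
    medications: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)
    discussed_topics: list[str] = field(default_factory=list)

    def remember(self, topic: str) -> None:
        """Record a topic so later sessions can refer back to it."""
        if topic not in self.discussed_topics:
            self.discussed_topics.append(topic)

profile = UserProfile("u123", preferences={"channel": "text"})
profile.remember("sleep problems")
print(profile.discussed_topics)  # ['sleep problems']
```

Storing such data is exactly where the privacy trade-off arises: the more the chatbot remembers, the more personal data it must protect.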
What are some ethical considerations behind building chatbots?
One of the main ethical considerations when building chatbots is whether they reinforce gender stereotypes. Traditionally, females have taken assistant-type roles in the workplace while males have assumed leadership roles. Implementing a chatbot with a female persona could reinforce such stereotypes.
Another important ethical issue is whether chatbots embrace human values and behave in a way that inspires trust. This is known as the alignment problem. Chatbots should be designed so that they avoid breaching human rights and creating bias, and their decisions should be transparent to human users.
Also, as mentioned earlier, chatbots should respect user privacy and data protection laws. There is a lot of research and effort being devoted currently to these ethical considerations.
In a world that is focused on English-speaking chatbots, what are some challenges behind designing multilingual and international chatbots?
It all depends on the availability of language resources such as language models and, for voice-based systems, speech recognition and speech synthesis engines. This is not a problem for widely spoken languages, but it is difficult for languages with limited resources that may nevertheless be spoken by a large number of people and where there is a definite need for a chatbot's services. One possible solution is to use transfer learning: take a model pretrained on a language such as English and fine-tune it with data from the low-resource language.
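A minimal sketch of that transfer-learning idea, using the Hugging Face Transformers library; the model name, the tiny intent dataset, and the label set are all placeholders rather than anything from Michael's projects, and a real fine-tuning run would of course need far more target-language data.

```python
# Sketch: fine-tune a multilingual pretrained model on a small intent-classification
# dataset in a low-resource target language. All data below is illustrative.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)

# Tiny made-up training set: target-language utterances with intent labels 0..2.
data = Dataset.from_dict({
    "text": ["<greeting in target language>",
             "<appointment request in target language>",
             "<goodbye in target language>"],
    "label": [0, 1, 2],
}).map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=32))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```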
Most of the apps that you’ve designed use open source software. What are some of the best open source tools out there?
Using open source software was a requirement of the agencies that funded our projects.
We have used Rasa in our projects as it is open source and also very powerful, making use of the latest developments in conversational AI technologies. As well as Rasa, there are several excellent open source conversational AI products, including Botpress, Microsoft Bot Framework, OpenDialog, and DeepPavlov, to mention just a few.
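For readers unfamiliar with Rasa, a custom action like the sketch below is one of the ways it can be extended with project-specific behaviour; the action name and slot here are hypothetical examples, not taken from Michael's projects.

```python
# A minimal Rasa custom action (requires the rasa-sdk package and a matching
# entry in the bot's domain file). Slot and action names are illustrative.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionCheckIn(Action):
    """Send a daily well-being check-in message, personalised with a slot value."""

    def name(self) -> Text:
        return "action_check_in"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        name = tracker.get_slot("user_name") or "there"
        dispatcher.utter_message(text=f"Hi {name}, how are you feeling today?")
        return []
```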
You’ll be speaking at the upcoming Future of Chatbots & Conversational AI Summit, what will you be discussing?
In my talk I will compare traditional approaches to chatbot development based on best practices, known as conversation design, with new approaches based on large language models such as ChatGPT. I will cover the pros and cons of each approach and argue that, although approaches based on large language models offer a lot of potential for the future development of chatbots, there are still many issues with their uncontrolled use, especially in areas such as healthcare and business. There is therefore still a need for conversation designers who can ensure explainable, transparent, and ethical conversational AI.
Thank you for the great interview. Readers who wish to hear Michael McTear speak should attend the Future of Chatbots & Conversational AI Summit.