Ryan McDonald is the Chief Scientist at ASAPP. He is responsible for setting the direction of the research and data science groups in order to achieve ASAPP’s vision to augment human activity positively through the advancement of AI. The group is currently focused on advancing the field of task-oriented dialog in real-world situations like customer care. In such dynamic environments there are many interacting parts: the conversation between customer and agent; the environment and tools the agent is using; different measures of success; and a wide range of customer needs and situations. Optimizing this environment to produce quality outcomes for customers, agents and companies requires significant research investment in retrieval, language generation, constrained optimization, learning and, critically, evaluation.
Ryan has been working on language understanding and machine learning for over 20 years. His PhD work at the University of Pennsylvania focused on novel machine learning methods for structured prediction in NLP, most notably information extraction and syntactic analysis. At Penn, his research was instrumental in growing the fields of dependency parsing and domain adaptation in the NLP community. After his PhD, Ryan joined Google’s Research group. There he researched sentiment analysis and summarization models for consumer reviews, which resulted in one of the first large-scale consumer summarization systems, used by millions of users every day.
Ryan’s research then turned to massively multilingual language understanding. At first, his research focused on improving machine translation, but the growing importance of multilinguality in information retrieval and digital assistants led his research towards internationalizing these important technologies. Ryan’s team was instrumental in the development of Google Assistant as a global technology. During this time, he also spearheaded the Universal Dependencies consortium with a large set of academic collaborators. This is by far the largest multilingual data consortium aimed at syntactic language resources. After over a decade working on consumer products, Ryan shifted gears towards enterprise and led numerous NLP and ML projects to improve Google’s Cloud services, including the core NLP API, solutions for call center AI, and knowledge discovery from scientific literature. Ryan’s research on enterprise NLP and ML continues at ASAPP.
Ryan has published over 100 research papers in top-tier journals and conferences, which have been cited thousands of times. He has won best paper awards at premier international conferences (EMNLP, NAACL) for his work on multilingual syntactic analysis. His book ‘Dependency Parsing’ has served as one of the main pedagogical resources in syntactic parsing for over a decade. In addition to serving as area chair at almost every tier-1 NLP and ML conference, he has been an editor for the top two journals in the field — Transactions of the Association for Computational Linguistics and Computational Linguistics. He has advised numerous PhD students and interns and given invited talks at top conferences, workshops and premier universities throughout the world.
What initially got you interested in machine learning and more specifically natural language processing?
I was an undergraduate with a dual major in computer science and philosophy. I was particularly interested in the philosophy of AI as well as the more practical aspects of how AI worked. I was fortunate enough to secure a research project with Gerald Penn at the University of Toronto which focused on summarizing news for mobile phones. That was over 20 years ago and used very primitive methods by today’s standards. Gerald was a great mentor and got me interested in the formal models for structure of language (syntax and semantics). I was fascinated by the complexity of language and the mathematical frameworks to explain the phenomena that humans parse naturally every day. Around this time search engines were ubiquitous, and I could see the massive upside for NLP and ML in the future.
You’ve published over 100 research papers in top-tier journals and conferences, which have been cited thousands of times. In your opinion, which of these was the most influential research paper?
This is always a hard question to answer. And as time passes I think my answer changes. If you asked me a few years back it would be work I did on using graph algorithms for dependency parsing. This paper, along with work by Joakim Nivre, spawned over a decade of rapid research on syntactic dependency parsing, which had a big impact on the adoption of NLP at tech companies, including a huge amount of work colleagues and I did at Google for Search, Translate, Assistant, etc.
However, with the rapid adoption of neural networks, the use of discrete syntactic representations of language has receded. I would now say it is the seminal work I did with John Blitzer and Fernando Pereira on domain adaptation. The methods we used in that paper are not super relevant today; however, I think that paper (along with contemporary work by Hal Daume) solidified the framing of the problem and its importance to the field. Now that I work exclusively in enterprise, I am very aware of the problems we raised back then, and of how much we still have to solve.
You spent the bulk of your career at Google helping to build state-of-the-art NLP and ML technologies and pushing them to production. What were some of your key takeaways from this experience?
My key takeaway would be that when building NLP and ML models you can never predict when things will go wrong. One needs to be obsessed with the data, error analysis, metrics, etc. at all points in the process. Trying to guess the pain points is often futile, and it is usually more efficient to build out end-to-end models as quickly as possible and adapt.
As NLP/ML researchers we focus on where the model might break and what modeling assumptions won’t hold when the rubber meets the road. But often it is data processing or UX that is the key to making successful NLP/ML products. Only when these are solid can we really iterate on quality to squeeze out all the value of the models.
In early 2021, you joined ASAPP, a company offering AI tools to empower customer service agents, as Chief Scientist. What attracted you to this position?
For a more complete answer read my blog post on this topic. But to synthesize, the main reasons are:
- The problems are hard. Agents who are engaged in difficult customer service interactions need to bring a lot of information and experience to bear on their conversations to solve customer problems. Building models that add value over huge amounts of domain knowledge is hard, in particular when the signals are weak. E.g., was the customer satisfaction score low because the agent did something wrong, or was the customer just generally upset?
- The companies and agents who work in this space are super excited about AI that will help them improve customer experience. They do not fight advancements but see them as critical tools to solve real problems they have. Having that kind of engaged partner is amazing.
- Finally, unlike consumer technologies, the set of domains and problems in an enterprise setting are so heterogeneous that adaptation (see my answer to question 2) is the problem that needs to be solved. We can’t just build services for one company or one industry, but all of them. This is such a great stress test for the state of AI today.
- ASAPP is singularly focused on this problem.
Could you discuss ASAPP’s vision to augment human activity positively through the advancement of AI?
Our central hypothesis at ASAPP is that AI should not replace humans, but augment them in positive and productive ways. This vision is broad and we have ambitions to apply it to all relevant human activity. However, as this is a broad mandate, the first area we’ve chosen to focus on is the customer experience domain.
The customer experience domain embodies all the challenges and rewards that come with augmenting human activity. Agents are engaged in complicated problem-solving tasks that require them to follow workflows, retrieve relevant information from customer and knowledge bases, and adapt to the nuanced situations that a customer might find themselves in. This gives rise to a huge number of opportunities for AI to improve that process. However, we think it is important to do this in a positive way, by which we mean:
- Augmentation happens at points that are natural and fluid during the course of the agent’s job. This is critical. If AI is interfering or interjecting at awkward moments or with poor latency, this will actually have a negative effect on the agent’s experience as they will need to consciously ignore the AI.
- More critically, we want the AI to achieve positive outcomes for all humans involved. In this case that is the customer, the agent and the organization. Customers want their issues handled efficiently and effectively. Agents want to do that for customers. Additionally, agents are doing a hard job, often dealing with difficult, unsatisfied customers. AI should help them balance work and cognitive load in order to decrease fatigue and burnout and increase job satisfaction. After all, agents at call centers have one of the worst attrition rates (as high as 100% annually in some call centers) of any job in America. Finally, we want positive business outcomes for the company that runs the call center. This can be customer satisfaction, the throughput of issues that can be handled in a day or even the amount of sales.
For call centers, we often think of the positive outcomes for the customer, agent and company as being in conflict with each other. But good AI will help to optimize for all three.
Could you discuss the current AI anatomy of a call center?
Today, almost every stage of your contact with a call center has some form of AI driving or informing how the issue is addressed.
The first step is an interactive voice response (IVR) system or chat bot. This is fully automatic and its main purpose is to understand why a customer is calling and to route them accordingly. Potentially these systems will try to gather as much information as possible before sending the customer to an agent, in order to maximize the chance the agent can quickly solve the problem. Many modern bots may also directly solve the customer’s problem without needing an agent. This is called “containment”, since the call never needs human intervention, and it can be achieved by recommending FAQs or simply executing some simple task for the customer.
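To make that triage step concrete, here is a minimal sketch of how an intent classifier might drive routing and containment. The intent labels, confidence threshold and `classify` function are illustrative assumptions, not a description of ASAPP’s actual system.

```python
# Hypothetical IVR / chat-bot triage: classify the customer's opening message,
# try to contain simple requests, otherwise route to the best-matched agent queue.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

# Intents a bot could plausibly resolve on its own ("containment"); illustrative only.
SELF_SERVICE_INTENTS = {"reset_password", "check_balance", "pay_bill"}

@dataclass
class TriageDecision:
    intent: str
    confidence: float
    contained: bool                 # True if resolved without a human agent
    route_to_queue: Optional[str]   # agent queue if not contained

def triage(utterance: str,
           classify: Callable[[str], Tuple[str, float]],
           threshold: float = 0.85) -> TriageDecision:
    """classify(utterance) -> (intent, confidence); any intent model works here."""
    intent, confidence = classify(utterance)
    if intent in SELF_SERVICE_INTENTS and confidence >= threshold:
        # Contain: e.g. recommend an FAQ or execute the simple task directly.
        return TriageDecision(intent, confidence, contained=True, route_to_queue=None)
    # Otherwise hand off to the queue matched to the predicted intent,
    # passing along whatever context was already gathered.
    return TriageDecision(intent, confidence, contained=False,
                          route_to_queue=f"queue:{intent}")
```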
After this, the call goes to an agent. Once the agent is engaged, the main role of AI is to guide and make suggestions to the agent. What should they say next? What flow should they follow? What knowledge base articles will help solve the problem? These models are usually trained on historical data and optimized for some key performance indicator, which can be handle time (how quickly the problem was solved) or customer satisfaction score (was the customer happy with the experience).
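As a rough illustration of that suggestion step, the sketch below ranks candidate replies by blending a relevance score with a predicted KPI impact. The function names and the weighting scheme are assumptions for illustration, not ASAPP’s models.

```python
# Hypothetical agent-assist ranking: score candidate replies against the conversation
# so far, blending relevance with predicted KPI impact (e.g. CSAT or handle time).
# The agent always decides whether to use a suggestion.
from typing import Callable, List, Tuple

def suggest_replies(
    conversation: List[str],
    candidates: List[str],
    relevance: Callable[[List[str], str], float],   # e.g. a fine-tuned response ranker
    kpi_impact: Callable[[List[str], str], float],  # e.g. predicted CSAT lift, in [0, 1]
    kpi_weight: float = 0.3,
    top_k: int = 3,
) -> List[Tuple[str, float]]:
    scored = [
        (reply,
         (1.0 - kpi_weight) * relevance(conversation, reply)
         + kpi_weight * kpi_impact(conversation, reply))
        for reply in candidates
    ]
    # Surface the top-k highest-scoring suggestions to the agent.
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```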
Once the call or chat is over, AI is still at work. In most call centers the agent will leave structured information and notes about what happened during the call. This is for analytics purposes but also for any subsequent agent who picks up the issue if it has not been resolved. AI helps with all these steps.
Finally, in a call center there are supervisors who are there to help assist agents and grow their skills. AI can be critical here. In a call center with hundreds of agents handling thousands of calls a day, how can supervisors identify the issues that need their intervention? How can they understand what happened during the day? How can they find areas of improvement for agents in order to grow their skillset?
How does ASAPP help to reduce call center employee turnover?
Large companies offering consumer goods and services spend millions, and sometimes billions, of dollars each year on contact centers that serve their customers, with labor representing 80-90% of total costs. It is a big problem that drives agent turnover to 40%, and sometimes 100% or more, every year.
There is often a caricature that agents are indifferent to your problems and are just going through the motions, in the worst case even obstructing your ability to solve a problem. Nothing could be further from the truth. Agents, as with all people, derive satisfaction from helping customers solve their problems. How would you rather spend your day: hearing heartfelt ‘thank you’s or listening to screaming customers? In a recent study we conducted, we found that 90% of agents reported that calls with customers made their day, and the majority say they are happy with their jobs. But agents want the tools and training required to make customers happy. Unhappy customers lead to frustrated, fatigued and stressed agents. This is the primary driver of turnover.
AI to augment the agents during a call (as described previously) already helps. If the agent has the tools and guidance on how to effectively and quickly solve a problem for a customer, then the odds that the customer is happy can only be higher, which in turn should lead to higher job satisfaction.
However, ASAPP doesn’t start there. While real-time dynamic guidance is critical, more structured training, coaching and feedback is also important. Many agents train on new issues or procedures ‘live’. That is, they get a description of the procedure, but then only see it in practice when they take a call with a real customer. Imagine if we gave pilots the plane’s manual and then told them to fly 300 passengers to Denver. Because of this, we are focusing on using AI to help build tools for agents to practice procedures and handle difficult situations before they deal with live customers. When this is coupled with targeted feedback (either from a supervisor or automatically), agents can grow their skills in a less stressful environment.
Better AI to improve customer satisfaction in dynamic situations as well as AI for grounded training — that is how ASAPP puts focus on the agent with the ultimate goal of reducing turnover.
What are some examples of the types of results that can be achieved by incorporating AI in a call center?
As shared above, AI can be a transformative technology in boosting productivity. For a US airline we work with, we saw agent productivity increase 86% and a rise in organizational throughput (total number of interactions across all customer service channels divided by the labor spent to satisfy those needs) of 127%. For a global network operator using ASAPP services, net promoter scores (the willingness of customers to recommend a company’s products or services to others) increased 45%. For a top-3 cable company using ASAPP, cost per interaction decreased 52%. These examples show how AI can increase productivity, improve the quality of customer service, and decrease business costs.
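For readers who want the arithmetic behind the throughput figure, here is a tiny sketch that applies the definition above. The interaction counts and labor hours are made up for illustration; only the formula and the 127% lift come from the text.

```python
# Organizational throughput = total interactions (all channels) / labor spent.
# Illustrative numbers only; the formula is from the definition above.
def throughput(total_interactions: int, labor_hours: float) -> float:
    return total_interactions / labor_hours

before = throughput(total_interactions=10_000, labor_hours=2_000)  # 5.00 interactions/hour
after = throughput(total_interactions=13_620, labor_hours=1_200)   # 11.35 interactions/hour
lift = (after - before) / before * 100
print(f"throughput lift: {lift:.0f}%")  # ~127% with these illustrative numbers
```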
What is your personal vision for the future of AI in the workplace?
AI is already pretty prevalent in the workplace. As I write this, spelling and grammar checkers as well as text autocomplete are helping me. I have spam filters and message classifiers on my email and messaging tools. I use AI-powered search to find the relevant information I need to execute. This will only grow, as will my adoption, as the number of AI-powered features and their quality increase.
However, I would call this kind of AI augmentation atomic. It is certainly assisting me, but only at very precise moments that allow for high-precision predictions. I certainly cannot ask an AI to answer these questions, for instance — yet 🙂
More seriously, my vision is to see the adoption of end-to-end AI throughout the workplace. I don’t mean end-to-end in the machine learning modeling sense. What I mean is that AI will holistically power large and complex tasks, optimized for the overall goal and not just for atomic points during the process. ASAPP is already bringing this to bear in call centers. For instance, we optimize what the agent will say next based on a holistic set of factors about where the agent is in the conversation and what the ultimate goal is. But beyond that, imagine a scientist trying to write a systematic review of an important topic, a software engineer building a platform or integrating complex systems, a lawyer writing a legal brief, etc. In the future, each of these professionals will rely on AI to rapidly increase their effectiveness at these tasks and optimize desired outcomes, freeing them up for more critical challenges.
Is there anything else that you would like to share about ASAPP?
Our research team at ASAPP has a clear focus: we’re advancing AI to augment human activity and to address real-world problems for enterprises. Researchers at ASAPP work to fundamentally advance the science of NLP and ML toward our goal of deploying domain-specific, real-world AI solutions, and to apply those advances to our products. They leverage the massive amounts of data generated by our products, and our ability to deploy AI features into real-world use, to ask and address fundamental research questions in novel ways.
Discover our recent papers at https://www.asapp.com/ai-research/.
Thank you for the detailed answers. Readers who wish to learn more should visit ASAPP.