Stefan Schaffer, Senior Researcher, German Research Center for AI (DFKI)

Stefan Schaffer is a Senior Researcher and Group Leader at the Cognitive Assistants department of the German Research Center for Artificial Intelligence (DFKI). His work has resulted in several conversational interfaces for domains such as mobility, automotive, tax information, and customer service. Currently, he is working on AI chatbots for value chains and hybrid events. Before joining DFKI, Stefan worked as a product manager at Linon Medien. He studied communication science and computer science and completed his doctorate at the Technical University of Berlin in the field of multimodal human-computer interaction.

What initially attracted you to machine learning?

During my studies I already had a great interest in speech recognition and took courses in which we built speech recognizers from scratch.

What initially attracted you to speech recognition?

I was already fascinated by speech-based human-computer interaction when Captain Picard spoke to “the computer” and received meaningful answers.

One of your most recent projects was building a chatbot interface for a museum that would anticipate what visitors would ask. Could you discuss how your team approached this?

To integrate the question answering functionality into the museum chatbot, we first collected a large number of visitor questions and used them to improve the system’s question answering capabilities. This was done by categorizing the questions as well as the answer material we received from our project partner Linon Medien, a company specializing in the production of speech and text content for exhibitions.
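
To make the categorization step more concrete, here is a minimal sketch of the general idea: labelled visitor questions are indexed, a new question is routed to the category of its closest match, and the answer material for that category is returned. The categories, example texts, and threshold are illustrative assumptions, not the project’s actual data or implementation.

```python
# Minimal sketch: category-based question answering for a museum chatbot.
# The categories, example texts, and threshold below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Collected visitor questions, each labelled with a category.
questions = [
    ("When does the museum open?", "opening_hours"),
    ("How much is a ticket for students?", "tickets"),
    ("Who painted this picture?", "exhibit_info"),
]

# Answer material categorized the same way, so questions and answers align.
answers = {
    "opening_hours": "The museum is open daily from 10 am to 6 pm.",
    "tickets": "Student tickets cost 5 euros with a valid ID.",
    "exhibit_info": "Detailed exhibit information is available at each station.",
}

# Build a TF-IDF index over the labelled questions.
texts, labels = zip(*questions)
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(texts)

def answer(user_question: str) -> str:
    """Route a new question to the category of its most similar example."""
    sims = cosine_similarity(vectorizer.transform([user_question]), matrix)[0]
    best = sims.argmax()
    if sims[best] < 0.2:  # fall back when nothing matches well enough
        return "Sorry, I don't know that yet."
    return answers[labels[best]]

print(answer("What time do you open?"))
```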

Your team also discovered that content-type annotations can improve accuracy. What kind of accuracy differences did you see from the annotations?

The content-type annotations improved the chatbot’s overall natural language understanding accuracy: with the additional annotations, the system was able to return more correct answers.
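
As an illustration of how such annotations can feed into natural language understanding, the sketch below appends a content-type tag to each training utterance so a simple intent classifier can use it as an additional feature. The labels, content types, and example sentences are assumptions for illustration; this is not the system’s actual pipeline.

```python
# Minimal sketch: attaching content-type annotations to NLU training examples.
# Labels, content types, and utterances are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example: (utterance, content type of the matching answer, intent).
examples = [
    ("when are you open", "factual", "opening_hours"),
    ("what time do you close", "factual", "opening_hours"),
    ("tell me about this sculpture", "narrative", "exhibit_info"),
    ("who made this painting", "narrative", "exhibit_info"),
    ("how much does entry cost", "factual", "tickets"),
    ("is there a discount for children", "factual", "tickets"),
]

# Append the content-type annotation as an extra token so the model can use it
# as a feature alongside the words of the utterance.
texts = [f"{utt} CT_{ctype}" for utt, ctype, _ in examples]
labels = [intent for _, _, intent in examples]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# At inference time the annotation could come from a separate content-type
# classifier; here it is supplied by hand for illustration.
print(model.predict(["who created this statue CT_narrative"]))
```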

What are some of the core challenges behind building a conversational AI?

A core challenge is the availability of suitable conversational data for the target domain. Without this data, in most cases one can only offer scripted experiences that mimic conversations between real humans but are static and thus highly unnatural. Another challenge is that the process of developing a conversational AI interface requires special expertise in the specific area in which the system will be used. Sharing the needed information between conversational design experts and domain experts is sometimes a difficult process that requires the support of additional experts in user-centered design methods.

What’s your approach for building a user friendly chatbot and conversational user interface?

We strictly follow the paradigm of user-centered design. This means that we engage with our customers and users in early project phases, when a system is not yet available. We start with focus groups and data collection and have stakeholders review system variants in early development phases.

What are your views on ChatGPT and GPT-4? Is there anything you would do differently?

Currently we use ChatGPT and GPT-4 as tools for data generation. However, in our research projects we usually try to avoid reinforcing the closed nature of these products through our use of them. We expect that comparable open-source models will become available in the near future.
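
As a rough illustration of data generation with these models, the sketch below asks a chat model to paraphrase a seed visitor question, producing additional training utterances. It assumes the openai Python client (v1 or later) with an API key in the environment; the model name, prompt, and helper function are illustrative assumptions, not the team’s actual setup.

```python
# Minimal sketch: using a chat model to paraphrase seed questions into extra
# NLU training data. Model name, prompt, and helper are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase(question: str, n: int = 3) -> list[str]:
    """Ask the model for n paraphrases of a seed question, one per line."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You rewrite museum visitor questions. "
                        f"Return {n} paraphrases, one per line, no numbering."},
            {"role": "user", "content": question},
        ],
    )
    return [line.strip() for line in
            response.choices[0].message.content.splitlines() if line.strip()]

print(paraphrase("When does the museum open?"))
```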

You’ll be speaking at the upcoming Future of Chatbots & Conversational AI Summit, what will you be discussing?

I’ll be speaking about the connections between user experience and conversational AI, focusing on user-centered design, data-driven user-centered implementation, and the evaluation of conversational user interfaces.

Thank you for the great interview, readers who wish to hear Stefan Schaffer speak should attend the Future of Chatbots & Conversational AI Summit.
