With a sizeable share of the world's working population going virtual, and the resulting information overload from multiple online meetings, imagine how convenient it would be to hover over past calendar invites and get concise summaries of the meeting proceedings. How about automatically minuting a multimodal, multi-party meeting? Are minutes and multi-party dialogue summaries the same? We believe Automatic Minuting is challenging: there are no agreed-upon guidelines for taking minutes, and people adopt different styles to record them. The minutes also depend on the meeting's category, the intended audience, and the goal or objective of the meeting. We hosted the First SummDial Special Session at SIGDial 2021. The discussions there surfaced several significant problems and challenges in multi-party dialogue and meeting summarization, which we documented in our event report. You can read the report of the First SummDial @ SIGDial 2021 here.
Following the enthusiastic participation of the dialogue and summarization community in the first SummDial Special Session, we are hosting the Second SummDial Special Session at SemDial 2022. This year, we intend to continue the discussions on these challenges and the lessons learned from the previous SummDial. Our goal for this special session is to stimulate intense discussion around this topic and to set the tone for further interest, research, and collaboration in both the Speech and Natural Language Processing communities. Our topics of interest cover Dialogue Summarization, including but not confined to Meeting Summarization, Chat Summarization, Email Thread Summarization, Customer Service Summarization, Medical Dialogue Summarization, and Multimodal Dialogue Summarization. Our shared task on Automatic Minuting (AutoMin) at Interspeech 2021 was another community effort in this direction. We are pleased to announce that the second iteration of the Automatic Minuting (AutoMin) shared task will happen with INLG 2023. More updates soon on the AutoMin website.
Heriot-Watt University, Edinburgh, UK
Verena Rieser leads research on Conversational AI and Natural Language Generation. Verena is a full professor in Computer Science at Heriot-Watt University in Edinburgh, co-founder of ALANA AI, and Director of Ethics at the UK National Center for Robotics. She received her PhD from Saarland University in 2008 and then joined the University of Edinburgh as a postdoctoral research fellow, before taking up a faculty position at Heriot-Watt in 2011 where she was promoted to full professor in 2017. She is the PI of several UKRI-funded research projects and industry awards including Apple, Amazon, Google and Adobe. Her team is a double prize winner of the Amazon Alexa Prize challenge, and they currently compete as a sponsored entry to the Amazon SimBot challenge. Verena was recently awarded a Leverhulme Senior Research Fellowship by the Royal Society in recognition of her work in developing multimodal conversational systems.
Institute for Infocomm Research (I2R), Agency for Science, Technology, and Research (A*STAR), Singapore
Dr. Nancy Chen is a laboratory head, principal investigator and senior scientist at the Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore, where she leads research on conversational AI and language intelligence with applications in education, healthcare, journalism, and defense. Speech evaluation technology developed by her team is deployed at the Ministry of Education in Singapore to support home-based learning, and their low-resource spoken language processing system was one of the top performers in the NIST Open Keyword Search Evaluations (2013-2016). She has received numerous awards, including Singapore 100 Women in Tech (2021), the Young Scientist Award at MICCAI 2021, the Best Paper Award at SIGDIAL 2021, the 2020 P&G Connect + Develop Open Innovation Award, the 2019 L'Oréal Singapore For Women in Science National Fellowship, Best Paper at APSIPA ASC (2016), the MOE Outstanding Mentor Award (2012), the Microsoft-sponsored IEEE Spoken Language Processing Grant (2011), and the NIH (National Institutes of Health) Ruth L. Kirschstein National Research Award (2004-2008). Technology from her team has also resulted in spin-off companies such as nomopai, which helps engage customers with confidence and empathy. She received her Ph.D. from MIT and Harvard in 2011, and worked at MIT Lincoln Laboratory before joining I2R.
Trinity College, Dublin
Yvette Graham is a Natural Language Processing (NLP) researcher and Assistant Professor in AI at Trinity College Dublin, Ireland. Her work includes the development of systems for a wide range of AI/NLP tasks, including Machine Translation, Dialogue Systems, Sentiment Analysis, Video Captioning, and Lifelong Retrieval. Besides NLP, Dr. Graham is also widely known for her work on NLP evaluation, which has revealed misconceptions and bias in system evaluations and has been adopted by high-profile competitions including the Conference on Machine Translation and TRECVid. She has published upwards of 70 papers in venues such as EMNLP, ACL and JNLE, and was awarded best paper at the Annual Meeting of the Association for Computational Linguistics (ACL) in 2015.
Heriot-Watt University, UK
Yannis Konstas is an Assistant Professor of Computer Science at Heriot-Watt University and Head of Machine Learning at Alana AI. His research focuses on Natural Language Processing, and in particular Natural Language Generation, with an emphasis on scalable machine learning models.
Microsoft Search, Assistant and Intelligence (MSAI) group, US
Budhaditya Deb is a Principal Researcher in the Language, Learning and Privacy lab at Microsoft Research, Redmond. His current research interests are in Natural Language Generation, with a focus on learning from natural interactions and feedback in zero- and few-shot learning scenarios. Budhaditya has also led the research and development of several AI-based products for Microsoft. Recent applications include Suggested Replies in Outlook and Teams conversations, and Meeting Insights and Summarization for Teams meetings. Prior to Microsoft, Budhaditya spent several years at GE Research and BBN Technologies as a researcher working on various industrial, academic and government projects after receiving his Ph.D. from Rutgers University in 2005.
Charles University, CZ
Ondřej Dušek is an assistant professor at the Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University. His research is in the areas of dialogue systems and natural language generation, including summarization; he specifically focuses on neural-network-based approaches to these problems and their evaluation. Ondřej received his PhD in 2017 from Charles University. Between 2016 and 2018, he worked at Heriot-Watt University in Edinburgh and co-supervised a two-time finalist team in the Amazon Alexa Prize competition. There he also co-organized the E2E NLG text generation challenge, and since then he has been involved in multiple efforts around the evaluation of generated text. He is now in the early stages of his ERC Starting Grant, which aims to develop new, fluent and accurate methods for language generation.
|Session|Time (GMT+1, Ireland Time)|
|---|---|
|Keynote: Verena Rieser|14:05-14:50|
|Break (5 minutes)|14:50-14:55|
|Break (10 minutes)|16:25-16:35|
|(Invited Talk 1) Simple Conversational Data Augmentation for Semi-supervised Abstractive Conversation Summarization|16:35-16:55|
|(Invited Talk 2) CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning|16:55-17:15|
|(Invited Talk 3) DialSummEval: Revisiting Summarization Evaluation for Dialogues|17:15-17:35|
|(Invited Talk 4) Using Dialogue Summarization for Few-shot Dialogue State Tracking|17:35-17:55|