ML Explainability and Language Model UI
Date: 09 March 2023, Thursday
Time: 11:00 AM ET / 8:00 AM PT
Format: Virtual via Zoom
Speaker: Assistant Professor, Harvard University
Host: Director, Engineering Fellow (NLP), Cohere
Should we care about machine learning model interpretability?
Is it more relevant in some scenarios than others? How can we tell whether we have actually achieved model understanding?
In this session, Professor Hima Lakkaraju answers these questions and demonstrates TalkToModel, an interactive dialogue system for explaining machine learning models through conversation.
Beyond showcasing a compelling conversational explainable AI (XAI) interface, TalkToModel illustrates how language models can be used to interact with complex systems and make them accessible to a wide audience.
Himabindu (Hima) Lakkaraju is an assistant professor at Harvard University focusing on the explainability, fairness, and robustness of machine learning models. She has also been working with domain experts in policy and healthcare to understand the real-world implications of explainable and fair ML. Hima has been named one of the world’s top innovators under 35 by both MIT Tech Review and Vanity Fair.
Her research has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS, and grants from NSF, Google, Amazon, and Bayer. Hima has given keynote talks at top ML conferences and workshops including CIKM, ICML, NeurIPS, AAAI, and CVPR, and her research has been featured by popular media outlets including the New York Times, MIT Tech Review, TIME magazine, and Forbes.
More recently, she co-founded the Trustworthy ML Initiative to provide easy access to resources on trustworthy ML and to build a community of researchers and practitioners working on the topic.
Learn more on her website: https://himalakkaraju.github.io/