
Academic Report Notice (Number: 2023-12)

Release time: 2023-06-13

Report Title: Model Interpretation and Alignment

Presenter: Wang Xiting, Senior Researcher

Affiliation: Microsoft Research Asia (MSRA)

Report Time: June 14, 2023 (Wednesday) at 10:00 AM

Report Location: Room 804, Science and Education Building A, Feicui Lake Campus

Abstract: In the era of large models, interpretability and model alignment have become crucial. Large models have a growing impact on people's work and lives, yet they are becoming harder to understand and control. Interpretability and model alignment are two of the seven research directions supported by OpenAI. How can we make deep learning models more transparent and understandable, easier to train, debug, and optimize, and ensure that they remain aligned with human intent? This report will discuss these questions and introduce our recent work on explainable artificial intelligence (XAI) and reinforcement learning from human feedback (RLHF), published at ICML, NeurIPS, and KDD.

Speaker Biography: Wang Xiting is a Senior Researcher in the Social Computing Group at MSRA, where his research focuses on interpretable and responsible artificial intelligence. He has published over 50 papers, including 40 in CCF-A venues; two of these were selected as cover papers by the CCF-A journal IEEE TVCG. He has an H-index of 24 and over 2,300 citations on Google Scholar. His research has been widely adopted by Bing, the world's second-largest search engine. He has served as Area Chair for IJCAI and AAAI and as Archive Chair on the IEEE VIS organizing committee, and was recognized as an Outstanding Senior Program Committee Member at AAAI 2021. He has twice been invited to give keynote speeches at the SIGIR Workshop on Explainable Recommendation, and he is a senior member of CCF and IEEE.