Speaker Bio
Yinglun Zhu is an assistant professor in the ECE department at the University of California, Riverside; he is also affiliated with the CSE department, the Riverside Artificial Intelligence Research Institute, and the Center for Robotics and Intelligent Systems. Yinglun's research focuses on machine learning, particularly on developing efficient and reliable learning algorithms and systems for large-scale, multimodal problems. His work not only establishes the foundations of various learning paradigms but also applies them in practical settings to address real-world challenges. His research has been integrated into leading machine learning libraries such as Vowpal Wabbit and into commercial products such as the Microsoft Azure Personalizer Service. More information can be found on Yinglun's personal website at https://yinglunz.com/.
Abstract
This presentation focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or fine-tune LLMs for decision making, or (ii) design prompts for pretrained LLMs. The former approach suffers from the computational burden of gradient updates, and the latter has not shown promising results. In this presentation, I'll talk about a new approach that leverages online model selection algorithms to efficiently incorporate LLM agents into sequential decision making. Statistically, our approach significantly outperforms both traditional decision making algorithms and vanilla LLM agents. Computationally, our approach avoids the need for expensive gradient updates of LLMs and, throughout the decision making process, requires only a small number of LLM calls. We conduct extensive experiments to verify the effectiveness of the proposed approach. For example, on a large-scale Amazon dataset, our approach achieves more than a 6x performance gain over baselines while calling LLMs in only 1.5% of the time steps.
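The abstract describes the method only at a high level. As a rough, hypothetical illustration of the general idea (not the speaker's actual algorithm), the Python sketch below shows one way online model selection can gate LLM usage: a UCB-style meta-learner chooses at each step between a cheap bandit baseline and an LLM agent, so the expensive LLM is queried only while it looks competitive. Every name here (UCBLearner, llm_choose_action, BernoulliEnv, and the observe/step environment interface) is invented for illustration.

# Hypothetical sketch only, not the speaker's actual algorithm:
# an online model-selection loop that adaptively chooses between a
# cheap baseline bandit learner and an LLM agent, so the LLM is
# called in only a small fraction of time steps.
import math
import random

class UCBLearner:
    """Standard UCB1 over a finite action set (the cheap baseline)."""

    def __init__(self, n_actions):
        self.counts = [0] * n_actions
        self.values = [0.0] * n_actions
        self.t = 0

    def act(self):
        self.t += 1
        for a, c in enumerate(self.counts):
            if c == 0:
                return a  # play every action once first
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, action, reward):
        # Incremental running mean of the observed rewards.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def llm_choose_action(context, n_actions):
    # Stand-in for a real (expensive) LLM call, e.g. prompting a
    # pretrained model with the decision context; random here only to
    # keep the sketch self-contained and runnable.
    return random.randrange(n_actions)

def run(env, horizon, n_actions):
    """Meta-level selection between the UCB baseline and the LLM agent.

    `env` is an assumed interface: observe() -> context,
    step(action) -> reward.
    """
    base = UCBLearner(n_actions)
    # Per-learner running statistics: [mean reward, number of plays].
    stats = {"base": [0.0, 1], "llm": [0.0, 1]}
    llm_calls = 0
    for t in range(horizon):
        context = env.observe()

        # Optimistic meta-selection: higher mean reward plus an
        # exploration bonus wins, so LLM calls taper off once the
        # baseline matches the LLM's observed performance.
        def score(k):
            mean, n = stats[k]
            return mean + math.sqrt(2 * math.log(t + 2) / n)

        chosen = max(stats, key=score)
        if chosen == "llm":
            action = llm_choose_action(context, n_actions)
            llm_calls += 1
        else:
            action = base.act()
        reward = env.step(action)
        base.update(action, reward)  # the cheap learner always updates
        mean, n = stats[chosen]
        stats[chosen] = [mean + (reward - mean) / (n + 1), n + 1]
    return llm_calls / horizon  # fraction of steps with an LLM call

class BernoulliEnv:
    """Toy stateless bandit environment for trying out the sketch."""

    def __init__(self, probs):
        self.probs = probs

    def observe(self):
        return None  # no context in this toy setting

    def step(self, action):
        return 1.0 if random.random() < self.probs[action] else 0.0

frac = run(BernoulliEnv([0.2, 0.5, 0.8]), horizon=10_000, n_actions=3)
print(f"LLM called in {frac:.1%} of time steps")

The design point the sketch tries to capture is that the meta-learner, rather than the LLM, absorbs the exploration cost: the baseline learner updates on every step regardless of who acted, so the LLM agent is consulted only while it still appears to add value.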