The International Workshop on Large Language Models
for Next-generation Education
(LLMNE 2025)

Co-located with the International Conference on Web-Based Learning 2025 (ICWL 2025) &
the International Symposium on Emerging Technologies for Education 2025 (SETE 2025)


9:00 - 9:30 : Opening Speech

LLM-based Multi-Agent System for Language Learning: Personalized Tutoring and Contextual Simulation

Dr. Zhiyuan Wen, The Hong Kong Polytechnic University, Hong Kong SAR


9:30 - 9:50 : Research Sharing

SGSimEval: A Comprehensive Multifaceted and Similarity-Enhanced Benchmark for Automatic Survey Generation Systems

Mr. Beichen Guo, The Hong Kong Polytechnic University, Hong Kong SAR


9:50 - 10:10 : Workshop Paper Oral Presentation

An Approach for Effective Remote Assistance in Practical STEM Courses

Dr. Bruno Silva, City University of Hong Kong, Hong Kong SAR


10:10 - 10:30 : Workshop Paper Oral Presentation

Privacy-Preserving Pronunciation Assessment: Implementation and Validation of a GOPT-Based English Pronunciation Assessment System

Mr. Enpin Ren, The Hong Kong Polytechnic University, Hong Kong SAR


10:30 - 11:00 : Coffee Break


11:00 - 12:00 : Keynote Speech

LLM for Code Generation: From Correctness to Efficiency

Dr. Zhijiang Guo, The Hong Kong University of Science and Technology (Guangzhou)

Abstract: This talk focuses on the critical and often overlooked issue of code efficiency in LLMs. We will first present two benchmarks: EffiBench, which reveals that LLM-generated code is significantly less efficient than human-written solutions, and EffiBench-X, a multi-language benchmark that shows a wide efficiency gap across various programming languages. To tackle this, we will introduce two methods: EffiLearner, a self-optimization framework that uses execution profiles to iteratively improve code, and EffiCoder, a fine-tuning approach that enhances LLMs by training them on high-quality, efficient code data. Together, these benchmarks and methods provide a comprehensive look at the current state of LLM-driven code generation and offer pathways to developing models that are not only correct but also efficient.

Bio: Dr. Zhijiang Guo is an Assistant Professor at the DSA Thrust, Information Hub, HKUST (GZ). His research primarily focuses on natural language processing and machine learning, with a keen interest in large language models. Prior to joining HKUST (GZ), Dr. Guo was a Senior Researcher at Huawei's Noah's Ark Lab. Before that, he was a Postdoctoral Researcher at the University of Cambridge. He earned his Ph.D. from SUTD, where he was also a visiting student at the University of Edinburgh, after completing his undergraduate studies at Sun Yat-sen University. He has published papers in leading conferences and journals such as ICML, NeurIPS, ICLR, COLM, TACL, ACL, EMNLP, and NAACL, with several selected for Oral or Spotlight presentations. These publications have garnered 4,500+ citations on Google Scholar. He was recognized among the Stanford/Elsevier Top 2% Scientists in 2025. He has served as an Area Chair (AC) for NeurIPS, ICLR, ACL, EMNLP, NAACL, and COLING, as well as a Senior Program Committee (SPC) member for AAAI and IJCAI. He has also been an Action Editor (AE) for the ACL Rolling Review and co-organized the FEVER workshop at ACL/EMNLP/EACL and the AI for Math workshop at ICML.

Venue:

P305, Mong Man Wai Building, The Hong Kong Polytechnic University

Date:

November 30, 2025

To ICWL & SETE 2025

http://www.icwl-sete.com/