2025/05/01 Next-Generation RAG: Leveraging Expanded Contexts, Explicit Reasoning, and Multimodal Integration in LLMs

Time: Thursday, May 1, 2025, 15:10-17:00


Location: Room 301, 大仁樓


Moderator: Prof. 張宏慶


Speaker: Research Fellow 黃瀚萱


Talk Title: Next-Generation RAG: Leveraging Expanded Contexts, Explicit Reasoning, and Multimodal Integration in LLMs


Abstract: Retrieval-Augmented Generation (RAG) has emerged as a powerful approach for enhancing the capabilities of Large Language Models (LLMs), enabling them to provide precise, relevant, and contextually grounded responses. In 2025, advancements in LLMs have transformed RAG in three pivotal ways. First, significantly expanded context windows allow models to ingest and integrate larger sets of reference data directly, greatly reducing the demand on traditional Information Retrieval (IR) systems and increasing the accuracy and relevance of generated content. Second, the rise of explicit-reasoning LLMs has deepened the trustworthiness and interpretability of AI-generated responses, providing users with transparent explanations and clearly articulated logical processes as evidence for decision-making. Finally, multimodal RAG extends the traditional text-based retrieval and generation paradigm, incorporating diverse data modalities such as tables, photos, infographics, audio, and video. This multimodal integration enables RAG systems to address complex, real-world queries requiring synthesis across varied informational formats. Together, these innovations mark a significant evolution in RAG methodologies, reshaping human-AI interactions and setting new standards for transparency, reliability, and versatility in intelligent systems.
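For readers unfamiliar with the retrieve-then-generate pattern the abstract builds on, the following is a minimal, self-contained sketch. Everything in it (the toy corpus, the term-overlap scorer, and the prompt template) is illustrative only; a production RAG system would use a vector index and a real LLM call rather than a printed prompt.

```python
# Minimal sketch of the RAG loop: retrieve relevant context, then
# assemble a grounded prompt for a generator. All names and data here
# are hypothetical, for illustration of the pattern only.
from collections import Counter

CORPUS = [
    "Expanded context windows let LLMs ingest large reference sets directly.",
    "Explicit reasoning models expose their logical steps to the user.",
    "Multimodal RAG retrieves tables, images, audio, and video alongside text.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping lowercase word tokens between query and document."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())  # multiset intersection size

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the top-k corpus documents ranked by simple term overlap."""
    ranked = sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context, then the question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does multimodal RAG handle images and audio?"))
```

With a long-context model, the `retrieve` step can return many more documents (larger `k`), which is exactly the shift the abstract describes: the expanded window absorbs work that a heavily tuned IR stage used to do.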