Multimodal Pretraining, Adaptation, and Generation for Recommendation

In conjunction with KDD 2024

Time: Aug. 25, 2024 (10:00 AM - 1:00 PM)

Location: Centre de Convencions Internacional de Barcelona


Personalized recommendation serves as a ubiquitous channel for users to discover information or items tailored to their interests. However, prevalent recommendation models rely primarily on unique IDs and categorical features for user-item matching, potentially overlooking the nuanced essence of raw item content across multiple modalities such as text, image, audio, and video. This underutilization of multimodal data limits recommender systems, especially in multimedia services such as news, music, and short-video platforms. Recent advancements in pretrained language and multimodal models offer new opportunities and challenges for developing content-aware recommender systems.


This tutorial seeks to provide a comprehensive exploration of the latest advancements and future trajectories in multimodal pretraining, adaptation, and generation techniques, as well as their applications to recommender systems. The tutorial covers multimodal pretraining, multimodal adaptation, multimodal generation, and open challenges and future directions in the field of recommendation. By providing a succinct overview of the field, we aspire to facilitate a swift understanding of multimodal recommendation and promote meaningful discussions on the future development of this evolving landscape.


We welcome researchers, practitioners, and students interested in multimodal recommendation to join us and engage in this exciting tutorial.

Program (UTC+2, Barcelona Time)

Our tutorial will be held on Aug. 25, 2024, 10:00 AM - 1:00 PM (Beijing Time: 4:00 PM - 7:00 PM) at the Centre de Convencions Internacional de Barcelona, in conjunction with KDD 2024. You are welcome to join our tutorial either in-person or virtually via Zoom.


Zoom Link: (Updated) https://polyu.zoom.us/j/81166113081?pwd=7I47QapuPt1tiL95BdJhRkEzuzIDkO.1


Check out our survey paper: Qijiong Liu, Jieming Zhu, Yanting Yang, Quanyu Dai, Zhaocheng Du, Xiao-Ming Wu, Zhou Zhao, Rui Zhang, and Zhenhua Dong. Multimodal Pretraining, Adaptation, and Generation for Recommendation: A Survey. In KDD 2024.


The program schedule is as follows:

Time | Event | Speaker
10:00 AM - 10:40 AM | Multimodal Representation Pretraining and Adaptation for Recommendation [slides] | Jieming Zhu
10:40 AM - 11:00 AM | Coffee Break |
11:00 AM - 11:40 AM | Multimodal Generation for Recommendation [slides] | Rui Zhang
11:40 AM - 12:20 PM | Enhancing Multimodal Retrieval and Generation with Unified Vision-Language Models [slides] | Xiao-Ming Wu
12:20 PM - 1:00 PM | Benchmarking Recommendation Ability of Foundation Models: Legommenders and RecBench [slides] | Qijiong Liu

Tutorial Speakers


Contact

Please contact Jieming Zhu for general inquiries.

Previous Tutorials

WWW 2024 Tutorial: Multimodal Pretraining and Generation for Recommendation.