Second Workshop on Foundation Models

Paper submission:

We invite two types of submissions: workshop proceedings and extended abstracts. Submissions may address any aspect of
foundation models, including, but not limited to:

     • Multi-modal foundation models
     • Model design for foundation models
     • Multi-task conflicts in foundation models
     • Training strategies for foundation models
     • Deployment of foundation models
     • Reproducibility of foundation models
     • Resource-constrained foundation models
     • Automatic data augmentation and hyperparameter optimization for foundation models
     • Unsupervised learning, domain transfer, and lifelong learning for foundation models
     • Computer vision datasets and benchmarks for foundation models
     • Performance of foundation models on downstream tasks
     • Probabilistic foundation models
     • Search space design for foundation models

Important Dates
For workshop proceedings (4-8 pages excluding references), 
    • Paper Submission Deadline: Mar 20th, 2024 (11:59 p.m. PST)
    • Notification to Authors: extended to April 3rd, 2024 (11:59 p.m. PST)
    • Camera-ready Paper Deadline: extended to April 8th, 2024 (11:59 p.m. PST)

    • Submission Guidelines: Submissions should follow the same policies as the main conference.

For extended abstracts (4 pages including references),
    • Paper Submission Deadline: May 25, 2024 (11:59 p.m. PST)
    • Notification to Authors: June 1, 2024 (11:59 p.m. PST)
    • Camera-ready Paper Deadline: June 6, 2024 (11:59 p.m. PST)
    • Submission Guidelines: We solicit short papers of up to 4 pages (including references); accepted papers will be linked 
      on the workshop webpage. Submissions may be shorter versions of work presented at the main conference or 
      work in progress on topics relevant to the workshop. Each accepted paper will be allocated either a contributed 
      talk or a poster presentation, and one paper will be selected as the best paper, as recommended by the workshop 
      program chairs during the peer-review period.

Manuscripts should follow the CVPR 2024 paper template and should be submitted through the CMT link below.
    • Paper Submission Link: To be updated
    • Review process: Single-blind (i.e., submissions need not be anonymized)
    • Supplementary Materials: Authors can optionally submit supplemental materials for the paper via CMT. 

Accepted papers from the CVPR 2023 Foundation Model Workshop:

Accepted proceedings papers:
https://openaccess.thecvf.com/CVPR2023_workshops/WFM

Winning solutions:
■   First Place Solution of Track 1
Weiwei Zhou*, Chengkun Ling*, Jiada Lu*, Xiaoyun Gong, Lina Cao, Weifeng Wang [PDF]
■   Second Place Solution of Track 1
Zelun Zhang*, Xue Pan [PDF]
■   Third Place Solution of Track 1
Yantian Wang, Defang Zhao [PDF]
■   First Place Solution of Track 2
Haonan Xu, Yurui Huang, Sishun Pan, Zhihao Guan, Yi Xu*, and Yang Yang* [PDF]
■   Second Place Solution of Track 2
Zhenghai He*, Fuzhi Duan*, Jun Lin, Yanxun Yu, Yayun Wang, Zhongbin Niu, Xingmeng Hao, Youxian Zheng, Zhijiang Du [PDF]
■   Third Place Solution of Track 2
Jing Wang, Shuai Feng, Kaiqi Chen, Liqun Bai [PDF]

Accepted extended abstract papers:
■   Enriching Visual Features via Text-driven Manifold Augmentation
Moon Ye-Bin, Jisoo Kim, Hongyeob Kim, Kilho Son, Tae-Hyun Oh [PDF]
■   Self-Enhancement Improves Text-Image Retrieval in Foundation Visual-Language Models
Yuguang Yang, Yiming Wang, Shupeng Geng, Runqi Wang, Yimi Wang, Sheng Wu, Baochang Zhang* [PDF]
■   Enhancing Comprehension and Perception in Traffic Scenarios via Task Decoupling and Large Models
Xiaolong Huang, Qiankun Li* [PDF]