WACV 2024 Daily - Friday

Poster Presentation

Mini but Mighty: Finetuning ViTs with Mini Adapters

Imad Eddine Marouf is a second-year PhD student at Télécom-Paris in France under the supervision of Enzo Tartaglione and Stéphane Lathuilière. In this paper, he proposes MiMi, a parameter-efficient training framework for Vision Transformers (ViTs). He speaks to us following yesterday’s poster presentation.

This work explores a parameter-efficient fine-tuning method for adapting large Vision Transformer models to downstream tasks using fewer computational resources. “We have many good models, but the issue is they’re large, like LLMs and ViTs,” Imad begins. “We’d like to adapt these large models for some specific tasks but don’t want to use too much computational power. There is a method known as parameter-efficient fine-tuning, where the objective is to get really good performance using these large models on tasks but with less computational power.”

The title ‘Mini but Mighty’ (MiMi), credited to Imad’s supervisor Enzo, encapsulates the work’s objective: robust performance from large models with minimal computational resources.

Addressing the practical applications of the work, …
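As a rough illustration of the adapter idea behind this line of work, the PyTorch sketch below freezes a pretrained ViT backbone and trains only small bottleneck modules inserted after each encoder block’s MLP. The module name, bottleneck size, and placement are illustrative assumptions for exposition, not the exact MiMi design.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class Adapter(nn.Module):
    """Small bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as near-identity so training is stable
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Load a pretrained ViT and freeze the entire backbone.
model = vit_b_16(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False

# Insert an adapter after each encoder block's MLP (placement is an assumption).
for block in model.encoder.layers:
    block.mlp = nn.Sequential(block.mlp, Adapter(dim=768))

# Only the newly added adapter weights are trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable:,} of {total:,} parameters")
```

Because the backbone stays frozen, the optimizer touches only a tiny fraction of the model’s weights, which is what makes adapter-style fine-tuning cheap in both memory and compute.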
