WACV 2024 Daily - Friday

Mini but Mighty

He highlights its ability to overcome the computational inefficiency of adapting large models for image classification, where finetuning the full model separately for each task is computationally costly and results in suboptimal performance. Imad observes that adapters perform poorly when their dimensions are small. To solve this, the method starts with large adapters that can reach high performance and iteratively reduces their size.

However, another challenge is determining which parameters of the large model should be finetuned for optimal performance on specific downstream tasks. With models boasting millions of parameters, selecting the right layers becomes a critical consideration. “Since we have a very large model, we don’t know which layers to focus on for the downstream task,” he explains. “Each task is different, so the layers in the model are quite different. We were grateful to find a way to do this dynamically. In our approach, given any downstream task, the model itself will decide which layers to focus on to get the best performance.”

In terms of downstream tasks, Imad focused mainly on image classification. “We evaluated our method on 29 image classification datasets featuring medical images, cars, and sketches,” he reveals. “A well-known benchmark we evaluated on was DomainNet, with specific images for 345 classes but in different formats – some are sketches, some are real images, and
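The article does not reproduce the paper's code, but the two ideas described above – adapters that start wide and are iteratively shrunk, and a per-layer signal that lets the model decide where to adapt – can be sketched roughly in PyTorch. Everything in this sketch is an illustrative assumption: the class names, the magnitude-based pruning criterion, and the learned gate are stand-ins, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): a bottleneck adapter
# that can be shrunk in place, plus a learned gate per layer that can
# serve as a signal for which layers need an adapter at all.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter whose hidden width can be reduced iteratively."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.down = nn.Linear(dim, hidden)   # project to bottleneck
        self.up = nn.Linear(hidden, dim)     # project back to model dim
        # Learnable gate: roughly, how much this layer relies on its adapter.
        self.gate = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.gate * self.up(torch.relu(self.down(x)))

    @torch.no_grad()
    def shrink(self, new_hidden: int) -> None:
        """Keep the new_hidden bottleneck units with the largest weight norms
        (an assumed pruning criterion for illustration)."""
        norms = self.down.weight.norm(dim=1) + self.up.weight.norm(dim=0)
        keep = norms.topk(new_hidden).indices
        down = nn.Linear(self.down.in_features, new_hidden)
        up = nn.Linear(new_hidden, self.up.out_features)
        down.weight.copy_(self.down.weight[keep])
        down.bias.copy_(self.down.bias[keep])
        up.weight.copy_(self.up.weight[:, keep])
        up.bias.copy_(self.up.bias)
        self.down, self.up = down, up


# One adapter per transformer block of a frozen ViT-like backbone
# (dimensions assumed for illustration).
adapters = nn.ModuleList([Adapter(dim=768, hidden=64) for _ in range(12)])
for width in (32, 16, 8):
    # ... train for a few epochs with the backbone frozen, then shrink ...
    for a in adapters:
        a.shrink(width)
# Layers whose |gate| stays near zero contribute little; their adapters
# could be dropped entirely, letting the model itself pick where to adapt.
```

Starting wide and shrinking matches the observation quoted above: small adapters trained from scratch underperform, whereas pruning a large, well-trained adapter can preserve its accuracy at a fraction of the size.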
