Computer Vision News - February 2024

Ruyu Wang's first experiment was to train a classifier on each dataset: a real CIFAR-10 dataset and synthetic CIFAR-10 datasets generated by a diffusion-based and a GAN-based model. She then used these classifiers to classify a real test set. "If you train a classifier on real data and then test it on the real dataset, the performance is on par with your observation during training, so it's around 90% accurate," she confirms. "But if you train on synthetic data, the performance drops. You test on the real dataset and get a performance of 85% or even 70%. This is a known issue. I said, let's use the classifier trained on the real dataset to classify the synthetic test set, and do you know what? The real classifier can achieve more than 90% accuracy on the synthetic dataset! The domain gap is not mutual. If you train on real data, then your classifier is super strong. The synthetic data seems to have some problem." (This cross-domain evaluation is sketched in the first code example after the article.)

Ruyu's second experiment showed that training on the synthetic dataset converges rapidly, with accuracy reaching 99% in just a few epochs, in contrast to the slower progress observed on the real dataset. The classifier trained on synthetic data was solving the same task, yet it somehow picked up a signal that made the images easier to classify.

Working from the hypothesis that the synthetic dataset was simpler than the real one, Ruyu's third experiment assessed the information content of the synthetic samples. "I trained a classifier on the real dataset and used it to examine the cross-entropy loss each sample brings to the classifier," she explains. "The assumption is that if a synthetic sample contains some new information or it's different from what's already contained in the real dataset, it should stimulate a high loss for the classifier to update toward this direction." The majority of the 45,000 synthetic samples exhibited negligible loss, indicating that they carried little new information beyond what the real dataset already contains (see the per-sample loss probe sketched below).

The performance drop observed when training on synthetic data likely stems from the synthetic data being, in effect, a limited subset of the real dataset, missing crucial rare cases and unique attributes. A classifier trained on it therefore lacks the robustness to perform well on a real-world dataset. "If a generative model trained on a relatively large dataset, like CIFAR-10, is having this problem, what will happen if we train our model on limited defective samples?" she poses. "The problem will be more severe, our data diversity will be less, and data quality will probably drop to some extent." Despite the challenges, Ruyu is optimistic about the potential of integrating not-so-perfect synthetic data with real data to train a defect classifier, as even a modest improvement is valuable considering the scarcity of available defect data.
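To make the cross-domain evaluation from the first experiment concrete, here is a minimal PyTorch sketch, not Ruyu's actual code: it assumes you already have one classifier trained per source (real, diffusion, GAN) and a DataLoader per test set, and simply fills in the accuracy matrix across every train/test pairing. The names `accuracy` and `cross_domain_report` are illustrative.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

@torch.no_grad()
def accuracy(model: nn.Module, loader: DataLoader, device: str = "cpu") -> float:
    """Top-1 accuracy of `model` over every batch in `loader`."""
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total

def cross_domain_report(models: dict[str, nn.Module],
                        test_loaders: dict[str, DataLoader]) -> None:
    """Print the full (training source x test source) accuracy matrix.

    `models` maps a training-source name ("real", "synthetic", ...) to a
    classifier already trained on that source; `test_loaders` maps a test
    source name to its DataLoader. The asymmetry Ruyu describes shows up
    as high real->synthetic accuracy but low synthetic->real accuracy.
    """
    for trained_on, model in models.items():
        for tested_on, loader in test_loaders.items():
            acc = accuracy(model, loader)
            print(f"train={trained_on:<10} test={tested_on:<10} acc={acc:.3f}")
```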
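The convergence comparison in the second experiment amounts to logging training accuracy after every epoch on each source and comparing the curves. A rough sketch under the same assumptions; the optimizer and hyperparameters here are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_and_track(model: nn.Module, loader: DataLoader,
                    epochs: int = 20, lr: float = 0.01,
                    device: str = "cpu") -> list[float]:
    """Train for `epochs` epochs, recording training accuracy after each.

    Comparing the curves returned for the real vs. synthetic training
    sets makes the convergence-speed gap visible: the synthetic curve
    should approach ~99% within a few epochs.
    """
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    curve = []
    for _ in range(epochs):
        correct = total = 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            loss = criterion(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        curve.append(correct / total)
    return curve
```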
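The third experiment's per-sample probe is also straightforward to express: freeze the classifier trained on real data and compute an unreduced cross-entropy loss for each synthetic image. The sketch below assumes a PyTorch classifier and a labeled loader of synthetic samples; the 0.01 threshold in the usage comment is an arbitrary illustration of "negligible loss", not a value from the work.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

@torch.no_grad()
def per_sample_losses(model: nn.Module, loader: DataLoader,
                      device: str = "cpu") -> torch.Tensor:
    """Cross-entropy loss of a frozen classifier on each individual sample.

    Samples whose content the classifier already learned from the real
    data yield near-zero loss; genuinely novel or divergent samples yield
    a high loss. reduction="none" keeps one loss value per image.
    """
    model.eval().to(device)
    criterion = nn.CrossEntropyLoss(reduction="none")
    losses = []
    for images, labels in loader:
        logits = model(images.to(device))
        losses.append(criterion(logits, labels.to(device)).cpu())
    return torch.cat(losses)

# Example of the check Ruyu describes: how many synthetic samples carry
# essentially no new information for the real-data classifier?
# `real_model` and `synthetic_loader` are assumed to exist already.
# losses = per_sample_losses(real_model, synthetic_loader)
# print(f"{(losses < 0.01).float().mean():.1%} of synthetic samples "
#       f"have negligible loss")
```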
