Trade-offs in Cross-Domain Generalization of Foundation Model Fine-Tuned for Biometric Applications

Fraunhofer IGD & Technische Universität Darmstadt
Figure: Average verification accuracies.

Generalization Loss (GELO) comparison across models under zero-shot and linear-probe settings for cross-task generalization on diverse benchmarks. GELO is defined as the ratio of the average performance after fine-tuning to the average performance before fine-tuning; values below 1 therefore quantify how much generalization ability is lost through fine-tuning.
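
For concreteness, the definition above can be written as the following ratio. This is a sketch in our own notation, not taken from the paper: P_i^pre and P_i^post denote a model's performance on benchmark i before and after fine-tuning, and N is the number of benchmarks.

      % GELO as described in the caption: average post-fine-tuning performance
      % divided by average pre-fine-tuning performance over the N benchmarks.
      % The symbols P_i^{pre}, P_i^{post}, and N are our own notation.
      \mathrm{GELO} \;=\;
        \frac{\tfrac{1}{N}\sum_{i=1}^{N} P_i^{\mathrm{post}}}
             {\tfrac{1}{N}\sum_{i=1}^{N} P_i^{\mathrm{pre}}}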

Abstract

Foundation models such as CLIP have demonstrated exceptional zero- and few-shot transfer capabilities across diverse vision tasks. However, when fine-tuned for highly specialized biometric tasks, namely face recognition (FR), morphing attack detection (MAD), and presentation attack detection (PAD), these models may suffer from over-specialization and thus lose one of their foundational strengths: cross-domain generalization. In this work, we systematically quantify these trade-offs by evaluating three instances of CLIP fine-tuned for FR, MAD, and PAD. We evaluate each adapted model, as well as the original CLIP baseline, on 14 general vision datasets under zero-shot and linear-probe protocols, alongside common FR, MAD, and PAD benchmarks. Our results indicate that fine-tuned models suffer from over-specialization, especially when fine-tuned for the complex task of FR. They also show that task complexity and classification head design, multi-class for FR versus binary for MAD and PAD, correlate with the degree of catastrophic forgetting. The FRoundation model with the ViT-L backbone outperforms other approaches on the large-scale FR benchmark IJB-C, achieving an improvement of up to 58.52%, yet it experiences a substantial performance drop on ImageNetV2, reaching only 51.63% compared to the 69.84% achieved by the baseline CLIP model. Moreover, the larger CLIP architecture consistently preserves more of the model's original generalization ability than the smaller variant, indicating that increased model capacity may help mitigate over-specialization.
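
As a rough illustration of the two evaluation protocols named above, the sketch below shows how zero-shot and linear-probe accuracy are typically computed with a CLIP backbone. It is a minimal sketch, not the paper's implementation: it uses the open-source clip package and scikit-learn, dataset loading is omitted, and the function names, prompt template, and "ViT-B/32" checkpoint choice are our own assumptions.

      # Minimal sketch of zero-shot and linear-probe evaluation of a CLIP model.
      # Preprocessed image batches and CPU label tensors are assumed to come from
      # one of the general vision benchmarks; loading them is not shown here.
      import clip
      import torch
      from sklearn.linear_model import LogisticRegression

      device = "cuda" if torch.cuda.is_available() else "cpu"
      # Original CLIP weights; a fine-tuned checkpoint could be loaded instead.
      model, preprocess = clip.load("ViT-B/32", device=device)

      @torch.no_grad()
      def image_features(images):
          """Encode a preprocessed image batch into L2-normalised CLIP features."""
          feats = model.encode_image(images.to(device))
          return feats / feats.norm(dim=-1, keepdim=True)

      @torch.no_grad()
      def zero_shot_accuracy(images, labels, class_names):
          """Zero-shot protocol: classify by cosine similarity to text prompts."""
          prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
          text = model.encode_text(prompts)
          text = text / text.norm(dim=-1, keepdim=True)
          logits = image_features(images) @ text.T
          return (logits.argmax(dim=-1).cpu() == labels).float().mean().item()

      def linear_probe_accuracy(train_images, train_labels, test_images, test_labels):
          """Linear-probe protocol: logistic regression on frozen image features."""
          clf = LogisticRegression(max_iter=1000)
          clf.fit(image_features(train_images).cpu().numpy(), train_labels.numpy())
          return clf.score(image_features(test_images).cpu().numpy(), test_labels.numpy())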

BibTeX


      ...