Chest X-ray (CXR) segmentation is an important step in computer-aided diagnosis. Although large vision foundation models exhibit strong generalization, they remain computationally expensive to deploy in resource-constrained clinical settings. We propose AdaLoRA-QAT, a two-stage fine-tuning framework that couples adaptive low-rank encoder adaptation with full-model quantization-aware training. Adaptive rank allocation improves parameter efficiency for clinical deployment, while selective mixed-precision 8-bit quantization preserves structural fidelity for clinical trustworthiness. Across large-scale CXR datasets, AdaLoRA-QAT attains state-of-the-art fine-tuning accuracy (95.6% Dice) with a 16.6× reduction in trainable parameters and 2.24× model compression, outperforming recent competing methods. AdaLoRA-QAT bridges the trade-off between accuracy and efficiency while preserving structural alignment, enabling compact, clinically deployable foundation models for medical image segmentation.
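To make the two ingredients of the framework concrete, the following is a minimal, hypothetical sketch of (a) adaptive low-rank adaptation, where a gated rank budget prunes unimportant rank components of a LoRA-style adapter, and (b) quantization-aware training via symmetric 8-bit fake quantization of the frozen weights. The class and function names, the importance proxy, and the pruning schedule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fake_quant_int8(w):
    # Symmetric per-tensor 8-bit fake quantization: quantize then
    # dequantize, so training sees the rounding error (QAT).
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    return np.clip(np.round(w / scale), -127, 127) * scale

class AdaptiveLoRALayer:
    """Illustrative sketch: a frozen linear layer with an adaptive
    low-rank adapter, y = (W + B diag(g) A) x, where the gate vector g
    prunes low-importance rank components (AdaLoRA-style allocation).
    All details here are assumptions for exposition."""

    def __init__(self, w_frozen, max_rank=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = w_frozen.shape
        self.w = w_frozen                              # frozen pretrained weight
        self.A = rng.standard_normal((max_rank, d_in)) * 0.01
        self.B = np.zeros((d_out, max_rank))           # zero init: adapter starts as identity
        self.gate = np.ones(max_rank)                  # 1 = rank kept, 0 = pruned

    def prune_ranks(self, budget):
        # Keep only the `budget` most important rank components, scored
        # here by a simple norm-product proxy ||B_i|| * ||A_i||.
        importance = np.linalg.norm(self.B, axis=0) * np.linalg.norm(self.A, axis=1)
        keep = np.argsort(importance)[-budget:]
        self.gate = np.zeros_like(self.gate)
        self.gate[keep] = 1.0

    def forward(self, x, qat=False):
        # In QAT mode the frozen weight passes through fake quantization.
        w = fake_quant_int8(self.w) if qat else self.w
        delta = self.B @ (self.gate[:, None] * self.A)
        return (w + delta) @ x

# Usage: with B initialized to zero the adapter is a no-op, so the layer
# reproduces the frozen weight's output in both float and QAT modes.
layer = AdaptiveLoRALayer(np.eye(4), max_rank=8)
x = np.ones(4)
y_float = layer.forward(x)
layer.prune_ranks(budget=2)     # adaptive rank allocation step
y_qat = layer.forward(x, qat=True)
```

Pruning the gate rather than the matrices themselves lets the rank budget shrink during fine-tuning without reallocating parameters, which is one common way to realize adaptive rank allocation.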
This work was supported by IHub-Data, IIIT Hyderabad. Tapabrata Chakraborti acknowledges support from the Turing-Roche Strategic Partnership and the UCL NIHR Biomedical Research Centre.