Fine-tuning large language models (LLMs) has emerged as a crucial technique for adapting these architectures to specific domains. Traditionally, fine-tuning has relied on massive datasets. Data-Centric Fine-Tuning (DCFT), however, is a strategy that shifts the focus from simply increasing dataset size to improving data quality and appropriateness.
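
To make the quality-over-quantity idea concrete, here is a minimal, hypothetical sketch of a data-curation step in the DCFT spirit: rather than feeding every available example to the fine-tuning run, a simple heuristic scores each example and only those above a threshold are kept. The `quality_score` heuristic and the field name `response` are illustrative assumptions, not part of any particular DCFT implementation.

```python
def quality_score(example):
    """Toy heuristic: reward responses that are neither trivially
    short nor degenerate (few distinct words). A real pipeline would
    use stronger signals (dedup, domain classifiers, model-based scoring)."""
    text = example["response"]  # assumed field name for illustration
    length_ok = 20 <= len(text) <= 2000
    has_content = len(set(text.split())) > 5
    return int(length_ok) + int(has_content)

def curate(dataset, min_score=2):
    """Keep only examples that meet the quality threshold."""
    return [ex for ex in dataset if quality_score(ex) >= min_score]

raw = [
    {"response": "ok"},  # too short and repetitive: filtered out
    {"response": "A detailed answer explaining the domain concept "
                 "with several distinct and informative terms."},
]
curated = curate(raw)
print(len(curated))  # 1
```

The design point is that curation happens before training: the fine-tuning loop itself is unchanged, and only the composition of the dataset improves.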