Fine-tuning large language models (LLMs) on domain-specific text corpora has emerged as a crucial step in enhancing their performance on scientific tasks. This study investigates fine-tuning methods for LLMs applied to technical text. We analyze the impact of parameters such as dataset size, model architecture, and configuration settings.
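
To make the setup concrete, the following is a minimal sketch of causal-language-model fine-tuning on a domain corpus using the Hugging Face `transformers` and `datasets` libraries. The base model (`gpt2`), the corpus file (`corpus.txt`), and all hyperparameters are illustrative placeholders, not the configuration used in this study.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder base model; swap in the LLM under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus: one technical passage per line in corpus.txt.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    # Truncate long passages; max_length is one of the knobs the study varies.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects standard next-token (causal) language modeling.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="ft-out",
    num_train_epochs=3,            # placeholder values; dataset size,
    per_device_train_batch_size=4, # batch size, and learning rate are
    learning_rate=5e-5,            # among the parameters analyzed here
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], data_collator=collator)
trainer.train()
```

Varying `max_length`, the number of epochs, the learning rate, or the base model in this sketch corresponds to the dataset-size, configuration, and architecture dimensions examined in the study.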