Abstract
Recent advances in deep learning have enabled increasingly powerful models, but these models are also extremely resource-intensive, incurring high financial costs, prolonged training times, and a significant environmental impact. Optimizing the training process is therefore essential to mitigate these costs while maintaining or improving performance. In this context, software-level optimization plays a crucial role by improving training pipelines, memory management, and overall code efficiency. Hyperparameter tuning, as part of this code-optimization process, is the focus of this review, which examines how to effectively combine and adjust these parameters to boost model performance while reducing both costs and environmental impact.