Abstract: Many users depend on Large Language Models (LLMs) for downstream tasks, but training LLMs from scratch remains prohibitively expensive. Sparse finetuning (SFT) has emerged ...