It is feasible to fine-tune LoRA adapters using tools like PEFT and QLoRA. However, the base Mistral model's format determines LoRA compatibility: training is typically done against the full-precision or AWQ versions of the checkpoint rather than GGUF, because GGUF files are an inference format and do not expose the PyTorch modules that LoRA hooks into.
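
Below is a minimal sketch of this flow using PEFT with a 4-bit QLoRA setup on a full-precision Hugging Face checkpoint. The model name, target modules, and hyperparameters are illustrative assumptions rather than a fixed recipe; adjust them for your hardware and task.

```python
# Sketch: QLoRA-style LoRA fine-tuning setup with PEFT (assumed model name and hyperparameters)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "mistralai/Mistral-7B-v0.1"  # full-precision HF checkpoint, not a GGUF file

# Load the base weights quantized to 4-bit (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Prepare the quantized model for training and attach LoRA adapters
# to the attention projection layers
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

The key point is that the loading step works on standard Hugging Face weights (full precision, or quantized on the fly as above); a GGUF file would first have to be converted back to that format before any of this applies.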
