Yes. Self-hosted LLMs support fine-tuning and parameter-efficient modification (LoRA, QLoRA, DPO, etc.), particularly via:

• Hugging Face Transformers + PEFT
• OpenChatKit and Axolotl
• Custom LoRA adapter loading in Ollama or llama.cpp
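As background on what these tools all implement: LoRA trains a small low-rank update on top of a frozen pretrained weight matrix, and that update can later be merged back in (which is essentially what adapter loading in llama.cpp does). A minimal NumPy sketch of the math, not tied to any of the libraries above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix (d_out x d_in); dimensions are illustrative.
d_out, d_in, r, alpha = 8, 8, 2, 16
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank adapter: B starts at zero so training begins
# from the pretrained behavior; A gets a small random init.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x):
    # Output = frozen path + scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0 the adapter is a no-op: the output matches the base model.
assert np.allclose(lora_forward(x), W @ x)

# After training, the adapter can be merged into a single weight matrix
# for deployment, so inference costs nothing extra.
W_merged = W + (alpha / r) * (B @ A)
assert W_merged.shape == W.shape
```

Only `A` and `B` are trained (r × d_in + d_out × r parameters instead of d_out × d_in), which is why LoRA fine-tuning fits on consumer hardware.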
