GitHub - turboderp/exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs


GitHub - promptslab/LLMtuner: Tune LLMs in a few lines of code

GitHub - unslothai/unsloth: 5X faster, 50% less memory LLM finetuning

GitHub - mistralai/mistral-finetune

GitHub - young-geng/EasyLM: Large language models (LLMs) made easy. EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax.