koboldcpp
🤗 Transformers
Hugging Face is an open-source platform and community for deep learning models across language, vision, audio, and multimodal tasks. They develop and maintain the transformers library, which simplifies downloading and fine-tuning state-of-the-art deep learning models.
This is the best library if you have a background in m...
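As a minimal sketch of what the transformers workflow looks like (the model name distilgpt2 and the pipeline call below are illustrative assumptions, not taken from the snippet above):

```python
# Minimal sketch: download a small model from the Hugging Face Hub and generate text.
# "distilgpt2" is just an illustrative choice; any causal LM on the Hub works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Running LLMs locally is", max_new_tokens=30)
print(result[0]["generated_text"])
```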
Moyi • 10 Ways To Run LLMs Locally And Which One Works Best For You
Glaive-coder-7b
Glaive-coder-7b is a 7B-parameter code model trained on a dataset of ~140k programming-related problems and solutions generated from Glaive's synthetic data generation platform.
The model is fine-tuned from the CodeLlama-7b base model.
Usage:
The model is trained to act as a code assistant, and can do both single-instruction following and mult...
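A minimal sketch of loading the model with transformers for a single instruction (the plain-text prompt below is an assumption; the model card at glaiveai/glaive-coder-7b documents the exact prompt format the model expects):

```python
# Minimal sketch: run glaive-coder-7b as a single-instruction code assistant.
# The raw prompt string is an assumption for illustration; check the model card
# for the prompt template the model was actually fine-tuned with.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "glaiveai/glaive-coder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```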
glaiveai/glaive-coder-7b · Hugging Face
General-purpose models
- 1.1B: TinyDolphin 2.8 1.1B. Takes about 700 MB of RAM; tested on my Pi 4 with 2 GB of RAM. Hallucinates a lot, but works for basic conversation.
- 2.7B: Dolphin 2.6 Phi-2. Takes just over 2 GB of RAM; tested on my 3 GB 32-bit phone via llama.cpp on Termux (see the sketch after this list).
- 7B: Nous Hermes Mistral 7B DPO. Takes about 4-5 GB of RAM depending on context length.
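For the llama.cpp route mentioned above, a minimal sketch using the llama-cpp-python bindings (the GGUF file name is a placeholder; a quantized build of one of the models above would be downloaded separately):

```python
# Minimal sketch: run a quantized GGUF model on CPU with llama-cpp-python.
# "model.Q4_K_M.gguf" is a placeholder path; download a quantized GGUF of one of
# the models listed above and point model_path at it.
from llama_cpp import Llama

llm = Llama(model_path="./model.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```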