
10 Ways To Run LLMs Locally And Which One Works Best For You

LlamaHub is a library of data loaders, readers, and tools created by the LlamaIndex community. It provides utilities to easily connect LLMs to diverse knowledge sources.
Ben Auffarth • Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs
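
As a concrete illustration, here is a minimal sketch of pulling a LlamaHub reader into a query pipeline. It assumes the llama-index and llama-index-readers-wikipedia packages; exact import paths vary between llama-index versions, and the page title is illustrative.

```python
# Minimal RAG sketch with a LlamaHub reader (hedged: import paths differ
# across llama-index versions; embedding/LLM defaults need an API key or
# a locally configured model).
from llama_index.core import VectorStoreIndex
from llama_index.readers.wikipedia import WikipediaReader

# Load one Wikipedia page as documents via the community reader.
documents = WikipediaReader().load_data(pages=["Large language model"])

# Index the documents and ask a question against them.
index = VectorStoreIndex.from_documents(documents)
answer = index.as_query_engine().query("What is a large language model?")
print(answer)
```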
llamafile lets you distribute and run LLMs with a single file. (announcement blog post)
Our goal is to make open source large language models much more accessible to both developers and end users. We're doing that by combining llama.cpp with Cosmopolitan Libc into one framework that collapses all the complexity of LLMs down to a single-file executable.
Mozilla-Ocho • GitHub - Mozilla-Ocho/llamafile: Distribute and run LLMs with a single file.
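
Once a llamafile is running in server mode, it exposes an OpenAI-compatible HTTP API, by default on port 8080. A hedged sketch of querying it from Python, assuming a llamafile has already been started locally:

```python
# Query a locally running llamafile via its OpenAI-compatible endpoint.
# Assumes something like `./llava-v1.5-7b-q4.llamafile --server` is already
# running; the filename and the placeholder model name are illustrative.
import json
import urllib.request

payload = {
    "model": "local",  # llamafile serves one model; the name is a placeholder
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```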
LangChain is an open-source Python framework for building LLM-powered applications. It provides developers with modular, easy-to-use components for connecting language models with external data sources and services.
Ben Auffarth • Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs
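
By way of example, a minimal LangChain chain in the LCEL style, assuming the langchain-core and langchain-openai packages; the model name is illustrative, and the same pattern works with a locally served model.

```python
# Minimal LCEL sketch: a prompt piped into a chat model with the | operator.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # model choice is illustrative

result = chain.invoke({"text": "LangChain connects LLMs to external data sources."})
print(result.content)
```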
I am using my own hardware at home to infer, train, and fine-tune (or trying to; my training efforts have been pretty disastrous so far, but inference works very well).
My current uses of LLM inference are:
- Asking questions of a RAG system backed by a locally indexed Wikipedia dump, mainly with Marx-3B and PuddleJumper-13B-v2 (see the sketch after this list),
- Code co-pilot with Rift-C
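
For reference, a minimal sketch of the kind of local inference described above, using llama-cpp-python to run a GGUF model; the model path and parameters are illustrative, not the poster's actual setup.

```python
# Local GGUF inference sketch with llama-cpp-python (hedged: the file path,
# context size, and question are placeholders).
from llama_cpp import Llama

llm = Llama(model_path="./marx-3b.Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What year did Karl Marx die?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```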