Sublime
An inspiration engine for ideas
Sugarcane AI is an open-source framework designed to simplify and accelerate LLM app development. With a focus on fine-tuned LLMs, prompt management, and Workflow Plugins, Sugarcane AI empowers developers to build, train, and manage complex LLM applications effortlessly. 🎉
Why Sugarcane AI?: Key Features of Microservices Framework...
sugarcane-ai • GitHub - sugarcane-ai/sugarcane-ai.github.io
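To make "prompt management" concrete, here is a generic sketch of the idea (not Sugarcane AI's actual API; names and templates are made up): prompt templates are stored and versioned separately from application code and rendered with runtime variables.

```python
from string import Template

# Generic illustration of prompt management (not Sugarcane AI's actual API):
# versioned prompt templates kept outside application code.
PROMPTS = {
    ("summarize", "v1"): Template("Summarize the following text:\n$text"),
    ("summarize", "v2"): Template("Summarize the following text in $n bullet points:\n$text"),
}

def render(name: str, version: str, **variables) -> str:
    """Look up a template by (name, version) and fill in its variables."""
    return PROMPTS[(name, version)].substitute(**variables)

print(render("summarize", "v2", n=3, text="Sugarcane AI separates prompts from app code."))
```

Swapping "v1" for "v2" changes the prompt without touching the calling code, which is the core of what a prompt-management layer buys you.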
💸🤑 Announcing our Bounty Program: Help the Julep community fix bugs and ship features and get paid. More details here.
Start your project with conversation history, support for any LLM, agentic workflows, integrations & more.
Explore the docs »
Report Bug · Request Feature · Join Our Discord · X · LinkedIn
Why Julep?
We've built a lot of AI ap...
GitHub - julep-ai/julep: Open-source alternative to the Assistants API with a managed backend for memory, RAG, tools and tasks. ~Supabase for building AI agents.
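As a rough illustration of "conversation history, support for any LLM" (a generic sketch, not Julep's actual API; the OpenAI backend and model name are placeholders), a session can simply accumulate messages and replay them on every call to whichever chat model backs it:

```python
from openai import OpenAI

client = OpenAI()  # placeholder backend; assumes OPENAI_API_KEY is set

class Session:
    """Minimal session object that keeps conversation history across turns."""

    def __init__(self, system_prompt: str):
        self.history = [{"role": "system", "content": system_prompt}]

    def chat(self, user_message: str) -> str:
        # Append the user turn, call the model with the full history, store the reply.
        self.history.append({"role": "user", "content": user_message})
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=self.history,
        )
        reply = resp.choices[0].message.content
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = Session("You are a helpful research assistant.")
session.chat("Summarize retrieval-augmented generation in one sentence.")
session.chat("Now give an example use case.")  # second turn sees the first turn's context
```

A managed backend like Julep's takes this bookkeeping (plus memory, RAG, and tool calls) off the application's hands.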
No longer hiring junior or even mid-level software engineers.
Our tokens per codebase:
Gumroad: 2M
Flexile: 800K
Helper: 500K
Iffy: 200K
Shortest: 100K
Both Claude 3.5 Sonnet and o3-mini have context windows of 200K tokens, meaning they can now write 100% of our Iffy and Shortest code if prompted well.
It won’t be long until AI will be writing...
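For reference, counts like these can be reproduced by tokenizing a repo with tiktoken. A sketch (the cl100k_base encoding and file extensions are assumptions, and different models tokenize slightly differently):

```python
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer choice is an assumption

def count_codebase_tokens(root: str, exts=(".py", ".ts", ".rb")) -> int:
    """Sum token counts over all source files under `root`."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(errors="ignore")
            total += len(enc.encode(text))
    return total

print(count_codebase_tokens("."))  # compare against a 200K-token context window
```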

HoneyHive is a collaboration platform to test and evaluate, monitor and debug your LLM apps, from prototype to production. It enables you to continuously improve LLM apps in production with human feedback, quantitative rigour, and safety best practices.
Carlos • Data Machina #222
slowllama
Fine-tune Llama2 and CodeLLama models, including 70B/35B on Apple M1/M2 devices (for example, Macbook Air or Mac Mini) or consumer nVidia GPUs.
slowllama does not use any quantization. Instead, it offloads parts of the model to SSD or main memory on both forward and backward passes. In contrast with training large models from scratch (unattainable...
okuvshynov • GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization
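A minimal sketch of the offloading idea (not slowllama's actual code; block files, devices, and the simplified block interface are assumptions): keep each transformer block serialized on disk and pull blocks into memory one at a time during the forward pass. The backward pass streams blocks the same way in reverse, accumulating gradients per block.

```python
import torch

# Sketch of layer-by-layer offloading (not slowllama's actual code).
# Assumes each transformer block was saved to its own file, e.g. "block_0.pt",
# and that a block can be called on hidden states alone (masks/rotary embeddings
# are ignored here for brevity).
NUM_BLOCKS = 80      # Llama2-70B has 80 transformer blocks
DEVICE = "mps"       # Apple Silicon GPU; use "cuda" on an nVidia card

def forward_offloaded(hidden_states: torch.Tensor) -> torch.Tensor:
    """Run the forward pass with only one block resident in memory at a time."""
    for i in range(NUM_BLOCKS):
        block = torch.load(f"block_{i}.pt", map_location=DEVICE)  # read from SSD/RAM
        hidden_states = block(hidden_states)
        del block                                                 # free before the next block
    return hidden_states
```

The trade-off is obvious but deliberate: far more I/O per step in exchange for fitting a 70B model onto hardware with a fraction of the required memory.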
SkyPilot is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution.
SkyPilot abstracts away cloud infra burdens:
- Launch jobs & clusters on any cloud
- Easy scale-out: queue and run many jobs, automatically managed
- Easy access to object stores (S3, GCS, R2)
SkyPilot...
skypilot-org • GitHub - skypilot-org/skypilot: SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
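A minimal launch with SkyPilot's Python API looks roughly like this (a sketch based on the project's quickstart; the resource spec, commands, and cluster name are placeholders):

```python
import sky

# Define a task: `setup` runs once when the cluster is provisioned, `run` is the job itself.
task = sky.Task(
    setup="pip install -r requirements.txt",
    run="python train.py",
)
# Ask for one A100; SkyPilot provisions it on whichever cloud can supply it.
task.set_resources(sky.Resources(accelerators="A100:1"))

# Launch on a named cluster (created automatically if it does not exist yet).
sky.launch(task, cluster_name="train-cluster")
```

The same task can also be written as a YAML file and launched from the CLI, which is how the queued, many-job workflows in the list above are typically driven.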
Clean & curate your data with LLMs
databonsai is a Python library that uses LLMs to perform data cleaning tasks.
Features
- Suite of tools for data processing using LLMs including categorization, transformation, and extraction
- Validation of LLM outputs
- Batch processing for token savings
- Retry logic with exponential backoff for handling rate limits and...
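The bullets above fit together as one pattern; here is a generic sketch of it (not databonsai's actual API; the OpenAI model name and categories are placeholders): categorize with an LLM, validate the output against the allowed labels, and retry with exponential backoff on failures such as rate limits.

```python
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
CATEGORIES = ["billing", "bug report", "feature request"]  # placeholder labels

def categorize(text: str, max_retries: int = 5) -> str:
    """Ask the LLM for a category, validate the answer, retry with backoff on failure."""
    prompt = (
        f"Classify the following text into exactly one of {CATEGORIES}.\n"
        f"Reply with the category name only.\n\nText: {text}"
    )
    for attempt in range(max_retries):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                messages=[{"role": "user", "content": prompt}],
            )
            answer = resp.choices[0].message.content.strip()
            if answer in CATEGORIES:          # validate the LLM output
                return answer
        except Exception:
            pass                              # e.g. a rate-limit error; fall through to retry
        time.sleep(2 ** attempt)              # exponential backoff: 1s, 2s, 4s, ...
    raise RuntimeError("Could not get a valid category")
```

Batching several inputs into one prompt, as the feature list mentions, is the natural next step for cutting token costs.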