Sublime
An inspiration engine for ideas
Sugarcane AI is an open-source framework designed to simplify and accelerate LLM app development. With a focus on fine-tuned large language models (LLMs), prompt management, and Workflow Plugins, Sugarcane AI empowers developers to build, train, and manage complex LLM applications effortlessly. 🎉
Why Sugarcane AI?: Key Features of Microservices Framework...
sugarcane-ai • GitHub - sugarcane-ai/sugarcane-ai.github.io
💸🤑 Announcing our Bounty Program: Help the Julep community fix bugs and ship features and get paid. More details here.
Start your project with conversation history, support for any LLM, agentic workflows, integrations & more.
Explore the docs »
Report Bug · Request Feature · Join Our Discord · X · LinkedIn
Why Julep?
We've built a lot of AI ap...
GitHub - julep-ai/julep: Open-source alternative to the Assistants API with a managed backend for memory, RAG, tools and tasks. ~Supabase for building AI agents.

Everyone is talking about MCP.
I summarized all the MCP announcements/launches from Composio, Firecrawl, Cursor, Jira, Langchain, Firebase, and more
Send to your engineering team
(save for later) https://t.co/x5zgVcariR
HoneyHive is a collaboration platform to test, evaluate, monitor, and debug your LLM apps, from prototype to production. It enables you to continuously improve LLM apps in production with human feedback, quantitative rigour, and safety best practices.
Carlos • Data Machina #222
Hypotenuse AI: AI Writing Assistant & Text Generator
hypotenuse.ai
slowllama
Fine-tune Llama2 and CodeLlama models, including 70B/35B, on Apple M1/M2 devices (for example, MacBook Air or Mac Mini) or consumer NVIDIA GPUs.
slowllama does not use any quantization. Instead, it offloads parts of the model to SSD or main memory on both the forward and backward passes. In contrast with training large models from scratch (unattainable...
okuvshynov • GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization
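To make the offloading idea concrete, here is a minimal PyTorch sketch of the general technique, not slowllama's actual implementation: transformer blocks stay in CPU RAM (slowllama can also spill them to SSD) and are moved to the accelerator one at a time during the forward pass. The `OffloadedStack` wrapper, device selection, and placeholder blocks below are illustrative assumptions.

```python
# Minimal sketch of block-wise offloading (illustrative, not slowllama's code).
# Weights live on CPU; each block is moved to the accelerator only while it runs.
import torch
import torch.nn as nn

device = (
    "mps" if torch.backends.mps.is_available()
    else "cuda" if torch.cuda.is_available()
    else "cpu"
)

class OffloadedStack(nn.Module):
    def __init__(self, blocks: nn.ModuleList):
        super().__init__()
        self.blocks = blocks  # all blocks start (and stay) on CPU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.to(device)
        for block in self.blocks:
            block.to(device)   # load only this block's weights onto the accelerator
            x = block(x)
            block.to("cpu")    # free accelerator memory before the next block
        return x

# Placeholder usage; real fine-tuning would wrap Llama2/CodeLlama layers
# and repeat a similar block-by-block pass for gradients on the backward step.
blocks = nn.ModuleList([nn.Linear(512, 512) for _ in range(8)])
out = OffloadedStack(blocks)(torch.randn(4, 512))
```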
SkyPilot is a framework for running LLMs, AI, and batch jobs on any cloud, offering maximum cost savings, highest GPU availability, and managed execution.
SkyPilot abstracts away cloud infra burdens:
- Launch jobs & clusters on any cloud
- Easy scale-out: queue and run many jobs, automatically managed
- Easy access to object stores (S3, GCS, R2)
skypilot-org • GitHub - skypilot-org/skypilot: SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
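As a rough sketch of what "launch jobs & clusters on any cloud" looks like in practice, here is a minimal example using SkyPilot's Python API (a YAML task file plus the `sky launch` CLI is the equivalent path); the task name, commands, accelerator choice, and bucket path are placeholders, so treat this as an assumption and check the SkyPilot docs for the current interface.

```python
import sky

# Describe the job: setup command, run command, and the resources it needs.
task = sky.Task(
    name="train-demo",                        # placeholder name
    setup="pip install -r requirements.txt",  # placeholder setup step
    run="python train.py --data /data",       # placeholder training command
)
task.set_resources(sky.Resources(accelerators="A100:1"))

# Mount an object store (e.g. S3) into the job's filesystem.
task.set_file_mounts({"/data": "s3://my-bucket/dataset"})  # placeholder bucket

# SkyPilot picks a cloud/region with available GPUs, provisions a cluster,
# and runs the task; re-launching reuses the named cluster.
sky.launch(task, cluster_name="demo-cluster")
```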
High-fructose corn syrup was the result of [neomania](https://www.shortform.com/blog/neomania-antifragile/), financed by a Nixon administration in love with technology and victim of some urge to subsidize corn farmers