GitHub - varunshenoy/super-json-mode: Low latency JSON generation using LLMs ⚡️
Koheesio
Koheesio, named after the Finnish word for cohesion, is a robust Python framework for building efficient data pipelines. It promotes modularity and collaboration, enabling the creation of complex pipelines from simple, reusable components.
The framework is versatile, aiming to support multiple implementations and working seamlessly with various data processing libraries.
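To make the "simple, reusable components" idea concrete, here is a minimal Python sketch of a step-based pipeline. It is illustrative only: the Step and run_pipeline names are generic placeholders for the pattern, not Koheesio's actual API.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Step:
    """A small, reusable unit of work (hypothetical, not Koheesio's Step class)."""
    name: str
    run: Callable[[dict], dict]  # takes the pipeline state, returns the updated state

def run_pipeline(steps: Iterable[Step], state: dict) -> dict:
    # Compose simple steps into a larger pipeline by threading state through them.
    for step in steps:
        state = step.run(state)
    return state

extract = Step("extract", lambda s: {**s, "rows": [1, 2, 3]})
transform = Step("transform", lambda s: {**s, "rows": [r * 10 for r in s["rows"]]})
load = Step("load", lambda s: {**s, "written": len(s["rows"])})

print(run_pipeline([extract, transform, load], {}))  # {'rows': [10, 20, 30], 'written': 3}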
GitHub - Nike-Inc/koheesio: Python framework for building efficient data pipelines. It promotes modularity and collaboration, enabling the creation of complex pipelines from simple, reusable components.
Deep learning at the speed of light.
Luminal is a deep learning library that uses composable compilers to achieve high performance.
use luminal::prelude::*;
// Setup graph and tensors
let mut cx = Graph::new();
let a = cx.tensor().set([[1.0], [2.0], [3.0]]);
let b = cx.tensor().set([[1.0, 2.0, 3.0, 4.0]]);
// Do math...
let mut c = a.matmul(b).retrieve();
...
GitHub - jafioti/luminal: Deep learning at the speed of light.
2-5x faster, 50% less memory local LLM finetuning
- Manual autograd engine - hand-derived backprop steps (see the sketch after this list).
- 2x to 5x faster than QLoRA. 50% less memory usage.
- All kernels written in OpenAI's Triton language.
- 0% loss in accuracy - no approximation methods - all exact.
- No change of hardware necessary. Supports NVIDIA GPUs since 2018+. Minimum CUDA Compute Capability 7.0.
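For a sense of what "hand-derived backprop steps" means in practice, here is a minimal NumPy sketch of a linear layer whose backward pass is written out by hand instead of relying on an autograd engine. It is a generic illustration of the idea, not Unsloth's Triton kernels.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # batch of 4 inputs, 3 features
W = rng.standard_normal((3, 2))   # weight matrix

# Forward pass: y = x @ W
y = x @ W

# Pretend the upstream gradient dL/dy arrives from the rest of the network
dL_dy = rng.standard_normal(y.shape)

# Hand-derived backward pass (no autograd):
# dL/dW = x^T @ dL/dy    and    dL/dx = dL/dy @ W^T
dL_dW = x.T @ dL_dy
dL_dx = dL_dy @ W.T

# Quick finite-difference check on one weight entry
eps = 1e-6
W_pert = W.copy()
W_pert[0, 0] += eps
numeric = (np.sum((x @ W_pert) * dL_dy) - np.sum(y * dL_dy)) / eps
print(np.isclose(numeric, dL_dW[0, 0], atol=1e-4))  # True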