GitHub - alibaba/data-juicer: A one-stop data processing system to make data higher-quality, juicier, and more digestible for LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷 (Provides higher-quality, richer, and more easily "digestible" data for large language models!)
Open source, high-throughput, fault-tolerant vector embedding pipeline
Simple API endpoint that ingests large volumes of raw data, processes it into embeddings, and stores or returns the vectors quickly and reliably
dgarnitz • GitHub - dgarnitz/vectorflow: VectorFlow is a high volume vector embedding pipeline that ingests raw data, transforms it into vectors and writes it to a vector DB of your choice.
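The ingest-chunk-embed-write flow such a pipeline performs can be sketched in a few lines. This is an illustrative sketch only, not VectorFlow's actual API; all function and parameter names here are hypothetical:

```python
from typing import Callable

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split raw text into overlapping fixed-size chunks before embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed_and_store(
    docs: list[str],
    embed: Callable[[str], list[float]],   # any embedding model goes here
    store: dict[str, list[float]],          # stand-in for a vector DB client
) -> int:
    """Chunk each document, embed each chunk, upsert it; return count written."""
    written = 0
    for doc_id, doc in enumerate(docs):
        for j, chunk in enumerate(chunk_text(doc)):
            store[f"{doc_id}:{j}"] = embed(chunk)
            written += 1
    return written
```

In a real deployment the `store` dict would be replaced by a client for whichever vector DB the pipeline targets, and `embed` by a batched model call.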
LLM-PowerHouse: A Curated Guide for Large Language Models with Custom Training and Inferencing
Welcome to LLM-PowerHouse, your ultimate resource for unleashing the full potential of Large Language Models (LLMs) with custom training and inferencing. This GitHub repository is a comprehensive and curated guide designed to empower developers, researche...
ghimiresunil • GitHub - ghimiresunil/LLM-PowerHouse-A-Curated-Guide-for-Large-Language-Models-with-Custom-Training-and-Inferencing: LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inferencing.
End to end ML Project
Project setup:
- Open this project in VS Code
- Install the Dev Containers extension
- Press Cmd+Shift+P and run "Dev Containers: Rebuild Container Without Cache"
- Activate the conda virtual environment: source activate endtoend
- Inside the Dev Container, run the MLflow and Prefect local servers: nohup bash ./start_backend.sh
Model training:
Run: python main.py
Model...
arghhjayy • GitHub - arghhjayy/EndToEndML: End to end ML pipeline written with open source tools exclusively
In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup.
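StreamingLLM gets this speedup by never recomputing the window: it keeps a few initial "attention sink" tokens plus only the most recent tokens in the KV cache, so cache size stays bounded as the stream grows. A minimal sketch of that eviction policy, with token IDs standing in for cached key/value pairs (parameter names are illustrative, not the paper's code):

```python
def streaming_cache(tokens: list[int], n_sink: int = 4, window: int = 8) -> list[int]:
    """Keep the first n_sink tokens (attention sinks) plus the last `window` tokens.

    Everything in between is evicted, so the cache holds at most
    n_sink + window entries no matter how long the input stream gets.
    """
    if len(tokens) <= n_sink + window:
        return tokens
    return tokens[:n_sink] + tokens[-window:]
```

The sliding-window-with-recomputation baseline instead re-runs attention over the whole window at each step, which is what the quoted 22.2x speedup is measured against.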