Show HN: Collider – the platform for local LLM debug and inference at warp speed https://ift.tt/uHAlwXn

ChatGPT turns one today :) What a day to launch the project I've been tinkering with for more than half a year. Welcome a new LLM platform suited both for individual research and for scaling AI services in production.

GitHub: https://ift.tt/YxRCItf

Some superpowers:

- Built with performance and scaling in mind, thanks to Golang and C++
- No more problems with Python dependencies and broken compatibility
- Most modern CPUs are supported: any Intel/AMD x64 platform, plus server and Mac ARM64
- GPUs are supported as well: Nvidia CUDA, Apple Metal, and OpenCL cards
- Split really big models across multiple GPUs (warp LLaMA 70B with 2x RTX 3090)
- Decent performance on modest CPU machines, fast-as-hell inference on monsters with beefy GPUs
- Both regular FP16/FP32 models and their quantised versions are supported, and 4-bit really rocks!
- Popular LLM architectures are already there: LLaMA, Starcoder, Baichuan, Mistral, etc.
- Special bonus: proprietary Janus Sampling for code generation and non-English languages

https://ift.tt/YxRCItf

December 1, 2023 at 02:02AM
