7 tools
vLLM: High-throughput inference and serving engine for LLMs, built on PagedAttention memory management. The go-to for serious local or self-hosted serving.
Replicate: Run 50,000+ ML models through a simple API with no infrastructure management and pay-per-second billing. Deploy custom models with Cog. Popular for image generation and audio.
Fast inference for open-source models, with 200+ models available via a unified API. Ranks highly on speed benchmarks for DeepSeek, Qwen, Kimi, and Llama. Serverless pay-per-token pricing.
Cerebras: Wafer-scale AI inference at 3,000+ tokens/sec. The WSE-3 chip packs 4 trillion transistors and 900,000 AI cores, which the company claims is 20x faster than GPU providers. Partners with OpenAI for inference.