AI & LLMs
Local models, workflows, prompt systems, toolchains, experiments, and real-world AI testing.
3 articles
AI & LLMs · Setup Guide
Picking the Right Local Model for the Job
A decision framework for choosing between 7B, 13B, 32B, and 70B local models based on task, latency budget, and VRAM. No hype, just tradeoffs.
Apr 8, 2026 · 2 min read
AI & LLMs · Setup Guide
RTX 3090 for Local LLMs in 2026: Is It Still Worth It?
A practical look at the RTX 3090 for running local LLMs today: 24 GB of VRAM at used-market prices, real tokens-per-second numbers, and where it stops being enough.
Apr 2, 2026 · 3 min read