Local LLM
5 articles about local LLMs.
Apple's On-Device AI as a Local Fallback for Cloud LLM APIs
Using the Claude API as the primary LLM provider while keeping Apple's on-device AI as a local fallback that speaks the same OpenAI-compatible format is a game changer for macOS apps.
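The fallback pattern this article describes can be sketched as an ordered provider list: try the cloud endpoint first, and if the request fails, retry against a local endpoint that accepts the same OpenAI-style `/chat/completions` payload. This is a minimal sketch, not the article's implementation; the endpoint URLs, port, and model names below are illustrative assumptions.

```python
import json
import urllib.request

# Hypothetical provider list: cloud primary first, local fallback second.
# Both are assumed to expose an OpenAI-compatible /chat/completions route.
PROVIDERS = [
    {"base_url": "https://api.cloud.example/v1", "model": "claude-cloud"},
    {"base_url": "http://localhost:11434/v1", "model": "apple-on-device"},
]


def chat(messages, providers=PROVIDERS, send=None):
    """Try each provider in order; return the first successful reply.

    `send` is injectable so the fallback logic can be exercised without
    network access; the default does an OpenAI-style JSON POST.
    """
    if send is None:
        def send(provider, messages):
            req = urllib.request.Request(
                provider["base_url"] + "/chat/completions",
                data=json.dumps(
                    {"model": provider["model"], "messages": messages}
                ).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=10) as resp:
                body = json.load(resp)
            return body["choices"][0]["message"]["content"]

    last_error = None
    for provider in providers:
        try:
            return send(provider, messages)
        except Exception as exc:
            # Cloud unreachable or erroring: fall through to the next
            # (local) provider instead of surfacing the failure.
            last_error = exc
    raise RuntimeError("all providers failed") from last_error
```

Because both providers speak the same wire format, the calling code never branches on which backend answered; only the base URL changes.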
Learning Path for Local LLMs - From Ollama to Desktop Agents
A practical learning path for running local LLMs: start with Ollama basics, learn prompting, understand quantization, build workflows, then automate your desktop.
Why Self-Hosting AI Matters: Your Agent Sees Your Emails, Documents, and Browsing History
AI agents interact with your most sensitive data - emails, documents, browsing history. Self-hosting with local LLMs keeps that data on your machine where it belongs.
VS Code Claude Extension vs Terminal with Ollama - Why the Terminal Route Wins
The VS Code Claude extension is locked to Anthropic's API. Running Claude Code in the terminal with Ollama gives you local models, more control, and zero API costs.
Local LLMs Are Not Just for Inference Anymore - Real Workflows on Your Machine
The shift to local LLMs is moving beyond chat and inference into real desktop automation. Browser control, CRM updates, document generation - all without cloud APIs.