Ollama

15 articles about Ollama.

Open Source AI Projects GitHub Releases: Last Day, April 2026

12 min read

Complete roundup of open source AI project GitHub releases from the last day of April 2026. ComfyUI v0.19, Ollama v0.20.6, Transformers v5.5.4, llama.cpp b8779, CrewAI security patches, and more.

open-source, ai-projects, github-releases, april-2026, comfyui, ollama, transformers, llama-cpp, crewai, litellm

Open Source AI Projects Releases and Updates: April 11-12, 2026

8 min read

Every open source AI project release and update from April 11-12, 2026. Archon launches as the first coding harness builder, OpenAI Codex CLI ships Realtime V2, Ollama v0.20.6 lands, and llama.cpp optimizes CUDA kernels.

open-source, ai-projects, releases, updates, april-2026, llm, ai-agents, archon, codex-cli, ollama, llama-cpp

Another CLI? What Makes It Different from Ollama's Built-In

2 min read

Why a dedicated AI agent CLI differs from Ollama's built-in commands - tool calling, desktop integration, and persistent memory make the difference.

cli, ollama, local-ai, developer-tools, desktop-agent

Codex-Like Functionality with Local Ollama - Qwen 3 32B Is the Sweet Spot

2 min read

Running Qwen 3 32B locally on M-series Macs for Codex-like coding agent capabilities. Why 32B is the sweet spot for Apple Silicon.

ollama, qwen, codex, local-ai, apple-silicon

Function Calling Reliability Is the Real Bottleneck for AI Agents

2 min read

Benchmarking LLM function calling matters more than raw intelligence. An agent that picks the wrong tool 5% of the time will fail 40% of multi-step workflows.

function-calling, benchmarking, ai-agents, reliability, llm, ollama
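The 5%-to-40% jump in that teaser is just compounding error. A quick back-of-envelope check, assuming a hypothetical ten-step workflow with independent tool choices:

```python
# Compounding per-step tool-selection errors over a multi-step workflow.
# The ten-step length is an illustrative assumption, not from the article.
per_step_accuracy = 0.95   # agent picks the right tool 95% of the time
steps = 10

workflow_success = per_step_accuracy ** steps
print(f"P(all steps correct) = {workflow_success:.1%}")      # ~59.9%
print(f"P(workflow fails)    = {1 - workflow_success:.1%}")  # ~40.1%
```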

Hybrid AI Agent Architectures - Local Models for Sensitive Data

2 min read

Why the best AI agent setup uses local models for sensitive data and cloud models for everything else, with practical patterns for routing between them.

local-models, hybrid-ai, privacy, sensitive-data, ollama, architecture
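A minimal sketch of that routing idea, assuming Ollama's default local API at localhost:11434 and a hypothetical keyword-based sensitivity check; the cloud call is left as a stub:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

SENSITIVE_HINTS = ("password", "ssn", "salary", "medical", "api key")

def looks_sensitive(prompt: str) -> bool:
    # Hypothetical heuristic; a real setup might use a classifier or data tags.
    return any(hint in prompt.lower() for hint in SENSITIVE_HINTS)

def ask_local(prompt: str, model: str = "llama3.1") -> str:
    # Sensitive prompts stay on-device via the local Ollama server.
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })
    resp.raise_for_status()
    return resp.json()["message"]["content"]

def ask_cloud(prompt: str) -> str:
    # Stub: wire up Claude, GPT, or any hosted model here.
    raise NotImplementedError

def route(prompt: str) -> str:
    return ask_local(prompt) if looks_sensitive(prompt) else ask_cloud(prompt)
```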

Built a Local AI Coding Agent with Qwen 3.5 9B

2 min read

How to build a local AI coding agent using Qwen 3.5 9B for desktop automation, and why tool calling format matters more than model size.

local-ai, qwen, tool-calling, coding-agent, ollama
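A sketch of what "tool calling format" means in practice, using Ollama's /api/chat tools field with an OpenAI-style function schema; the model tag and the open_app action are stand-ins, not the article's setup:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

# One function schema; how reliably a local model fills it depends on the
# model and its tool-calling training, which is the article's point.
tools = [{
    "type": "function",
    "function": {
        "name": "open_app",  # hypothetical desktop action
        "description": "Open a desktop application by name",
        "parameters": {
            "type": "object",
            "properties": {"app_name": {"type": "string"}},
            "required": ["app_name"],
        },
    },
}]

resp = requests.post(OLLAMA_URL, json={
    "model": "qwen2.5:7b",  # stand-in for whatever tool-capable model you run
    "messages": [{"role": "user", "content": "Open the calendar app"}],
    "tools": tools,
    "stream": False,
})
resp.raise_for_status()
message = resp.json()["message"]

# A tool-capable model returns structured tool_calls rather than prose.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```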

Building a macOS Tray App with Ollama as Your Knowledge Base

2 min read

How to build a macOS menu bar app that uses Ollama for a personal AI knowledge base - global shortcut UX, local model inference, and keeping everything on-device.

macos, ollama, tray-app, menu-bar, knowledge-base, local-ai
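The knowledge-base half can stay local too. A rough sketch using Ollama's embeddings endpoint and cosine similarity; the note store and model choice are illustrative assumptions:

```python
import requests

EMBED_URL = "http://localhost:11434/api/embeddings"

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    resp = requests.post(EMBED_URL, json={"model": model, "prompt": text})
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5)
    return dot / norm if norm else 0.0

# Toy in-memory note store; a real app would persist embeddings on disk.
notes = ["Quarterly goals live in Notion", "Router admin page is 192.168.1.1"]
index = [(note, embed(note)) for note in notes]

def lookup(query: str) -> str:
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]
```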

Learning Path for Local LLMs - From Ollama to Desktop Agents

2 min read

A practical learning path for running local LLMs: start with Ollama basics, learn prompting, understand quantization, build workflows, then automate your desktop.

ollama, local-llm, learning, desktop-agent, automation, tutorial

How to Cut AI Agent Costs 50-70% with Model Routing

2 min read

Route simple tasks to local Ollama models, complex ones to Claude. Combine that with aggressive state summarization and context pruning to keep token usage down.

model-routing, cost-reduction, ollama, claude, optimization, artificial-intelligence
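The summarization-and-pruning half of that combination might look roughly like this, with older turns condensed by a small local Ollama model; the four-turn cutoff and model tag are arbitrary assumptions:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
KEEP_LAST = 4  # recent turns kept verbatim (tunable assumption)

def summarize(text: str, model: str = "llama3.2") -> str:
    # A cheap local model compresses older conversation state.
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "prompt": f"Summarize this conversation state in 3 bullet points:\n{text}",
        "stream": False,
    })
    resp.raise_for_status()
    return resp.json()["response"]

def prune_context(messages: list[dict]) -> list[dict]:
    # Replace everything but the last few turns with a rolling summary,
    # so the prompt sent to the expensive model stays small.
    if len(messages) <= KEEP_LAST:
        return messages
    older = "\n".join(m["content"] for m in messages[:-KEEP_LAST])
    summary = {"role": "system",
               "content": f"Summary of earlier turns:\n{summarize(older)}"}
    return [summary] + messages[-KEEP_LAST:]
```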

LLM Observability for Desktop Agents - Beyond Logging Model Outputs

2 min read

Traditional LLM observability focuses on model outputs. For desktop agents, watching what the agent actually does on screen - logging actions, not just outputs - matters more.

llm-observability, ollama, agents, monitoring, debugging
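One way to log actions rather than just outputs is to wrap every desktop action in an append-only trace; this decorator sketch is illustrative, not the article's implementation:

```python
import functools
import json
import time

ACTION_LOG = "agent_actions.jsonl"  # hypothetical trace file

def logged_action(fn):
    # Records what the agent did (and whether it worked), independent of
    # whatever the model said it was going to do.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {"ts": time.time(), "action": fn.__name__,
                  "args": args, "kwargs": kwargs}
        try:
            result = fn(*args, **kwargs)
            record["status"] = "ok"
            return result
        except Exception as exc:
            record["status"] = f"error: {exc}"
            raise
        finally:
            with open(ACTION_LOG, "a") as f:
                f.write(json.dumps(record, default=str) + "\n")
    return wrapper

@logged_action
def click(x: int, y: int) -> None:
    ...  # actual desktop automation would go here
```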

VS Code Claude Extension vs Terminal with Ollama - Why the Terminal Route Wins

2 min read

The VS Code Claude extension is locked to Anthropic's API. Running Claude Code in the terminal with Ollama gives you local models, more control, and zero vendor lock-in.

vs-code, claude, ollama, terminal, local-llm, development

Using Ollama for Local Vision Monitoring on Apple Silicon

3 min read

Local vision models through Ollama handle real-time monitoring tasks like watching your parked car. Apple Silicon M-series makes local inference fast enough to keep up.

ollama, local-vision, monitoring, apple-silicon, privacy
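A minimal version of that monitoring loop's core call, assuming a vision model such as llava is pulled locally; the prompt and the parked-car framing are placeholders:

```python
import base64
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def describe_frame(image_path: str, model: str = "llava") -> str:
    # Ollama's generate endpoint accepts base64-encoded images for
    # multimodal models, so the frame never leaves the machine.
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "prompt": "Is the silver car still parked in this frame? Answer yes or no.",
        "images": [img_b64],
        "stream": False,
    })
    resp.raise_for_status()
    return resp.json()["response"]
```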

Build a Local-First AI Agent with Ollama - No API Keys, No Cloud, No Signup

3 min read

How to run an AI desktop agent entirely on your Mac using Ollama for local inference. No API keys needed, no data leaves your machine, works offline.

ollama, local-first, privacy, macos, tutorial
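The core of that local-first loop can be a single call against the Ollama server, shown here with the official Python client; the model tag is an assumption, any locally pulled model works:

```python
# No API key, no cloud endpoint: the only dependency is a running Ollama
# server on localhost. Assumes `pip install ollama` and a pulled model.
import ollama

response = ollama.chat(
    model="llama3.1",  # swap in whatever model you have pulled
    messages=[{"role": "user", "content": "Draft a short standup update."}],
)
print(response["message"]["content"])
```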

Local LLMs Are Not Just for Inference Anymore - Real Workflows on Your Machine

2 min read

The shift to local LLMs is moving beyond chat and inference into real desktop automation. Browser control, CRM updates, document generation - all without sending data to the cloud.

local-llm, ollama, desktop-automation, privacy, workflow

Browse by Topic