New AI Model Releases, Papers, and Open Source Projects: April 10, 2026

Matthew Diakonov · 10 min read


April 10, 2026 packed three distinct categories of AI progress into a single day: new model releases from Anthropic and OpenAI, research papers pushing multimodal and safety boundaries, and a fresh wave of open source projects hitting GitHub. This post catalogs all three in one place so you can see what shipped, what was published, and what you can run locally right now.

Everything That Shipped on April 10 at a Glance

| Name | Category | Organization | Access | Key Detail |
|---|---|---|---|---|
| Claude Mythos Preview | Model release | Anthropic | Gated (50 orgs) | Cybersecurity-focused, most capable Anthropic model |
| Project Glasswing | Platform | Anthropic | Pilot | Defensive vulnerability scanning with Mythos |
| Trusted Access for Cyber | Program | OpenAI | Pilot | Cybersecurity AI access program |
| Shopify AI Toolkit | Open source tool | Shopify | Open | Agent-based commerce automation |
| GLM-5.1 | Open source model | Zhipu AI (Z.ai) | MIT license | 128K context, competitive with GPT-4o |
| Waypoint-1.5 | Open source model | Overworld | Open weights | Local 3D world generation (shipped Apr 11) |
| llama.cpp b8705 | Open source tool | ggml-org | MIT | Step3-VL-10B support, fused QKV tensors |
| OWASP AI Agent Threat Model v0.4 | Paper / framework | OWASP | Open | Agent-specific security threat taxonomy |

New AI Model Releases

Anthropic: Claude Mythos Preview and Project Glasswing

Anthropic revealed Claude Mythos Preview on April 10, described internally as their most capable model to date. Unlike previous Claude releases, Mythos was not made publicly available. Instead, 50 organizations received gated access through "Project Glasswing," a program focused on defensive cybersecurity.

The model is designed to scan infrastructure for vulnerabilities, reason about attack surfaces, and generate remediation recommendations. Anthropic's announcement emphasized that Mythos is not a general-purpose release but rather a specialized deployment where the model's reasoning capabilities are applied to security analysis.

This marks a shift in how frontier labs think about model distribution. Rather than launching to everyone at once, Anthropic chose a controlled rollout for a high-stakes domain.

OpenAI: Trusted Access for Cyber

On the same day, OpenAI announced its own cybersecurity AI initiative. The "Trusted Access for Cyber" pilot gives select security teams early access to OpenAI models tuned for threat detection, log analysis, and incident response. Details remain sparse, but the parallel timing with Anthropic's announcement suggests both labs see cybersecurity as a key enterprise vertical.

Zhipu AI: GLM-5.1 Community Adoption Surge

While GLM-5.1 itself launched earlier in the week, April 10 saw a significant spike in community adoption. The model is released under MIT license, supports 128K context, and benchmarks competitively with GPT-4o on multilingual and coding tasks.

What made April 10 notable for GLM-5.1 was the wave of community integrations: new llama.cpp GGUF quantizations appeared on Hugging Face, vLLM added a fast-path for GLM-5.1's architecture, and several open source agent frameworks added GLM-5.1 as a supported backend.

Overworld: Waypoint-1.5 (April 11)

Shipping one day later on April 11, Waypoint-1.5 from Overworld is a local 3D world generation model. It takes text prompts and produces navigable 3D environments. The model runs on consumer GPUs (12GB+ VRAM recommended) and the weights are fully open. This is the first open source model to reach usable quality for real-time 3D scene generation from text.

Research Papers Published Around April 10

Several papers appeared on arXiv and organizational blogs in the April 10 window that are worth tracking.

OWASP AI Agent Threat Model v0.4

The OWASP Foundation released version 0.4 of their AI Agent Threat Model, a framework for categorizing security risks specific to AI agents. The document covers prompt injection taxonomy, tool-use authorization boundaries, memory poisoning vectors, and multi-agent trust propagation failures. This is a reference document rather than a research paper, but it is the most comprehensive open taxonomy of agent security threats published to date.
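One of the taxonomy's categories, tool-use authorization boundaries, can be made concrete with a small sketch. Everything here (the tool names, the policy shape, the role model) is invented for illustration and is not taken from the OWASP document; the point is only the default-deny pattern it recommends for agent tool calls:

```python
# Hypothetical illustration of a tool-use authorization boundary,
# one of the threat classes in the OWASP AI Agent Threat Model v0.4.
# Tool names, policy structure, and roles are invented for this sketch.

ALLOWED_TOOLS = {
    "search_logs": {"read_only": True},
    "open_ticket": {"read_only": False},
}

def authorize_tool_call(tool_name: str, agent_role: str) -> bool:
    """Deny any tool not on the allowlist; restrict write tools to trusted roles."""
    policy = ALLOWED_TOOLS.get(tool_name)
    if policy is None:
        return False  # unknown tool: default-deny
    if not policy["read_only"] and agent_role != "operator":
        return False  # write-capable tools require an elevated role
    return True
```

The default-deny stance matters because prompt-injected agents will happily invent tool names; an allowlist turns that failure mode into a no-op instead of an action.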

Multimodal Reasoning Under Constraint

Multiple papers from April 8-10 explored how large multimodal models perform when context is limited or when input modalities conflict. Key findings include:

  • Visual grounding degrades predictably when image resolution drops below model training resolution, with a sharp cliff at 50% downscale for most architectures
  • Audio-text alignment in speech models improves significantly with explicit timestamp tokens during training, as demonstrated by MERaLiON-2's approach
  • Code generation from screenshots remains unreliable for anything beyond simple UI layouts, with error rates above 40% on complex component hierarchies
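The first finding, degradation below training resolution with a cliff near 50% downscale, suggests a simple preflight check before sending images to a multimodal model. The function and thresholds below are an illustrative sketch, not code from any of the papers:

```python
# Illustrative preflight check based on the reported finding that
# visual grounding degrades below training resolution, with a sharp
# cliff around a 50% downscale. Function name and API are invented.

def grounding_risk(input_px: int, training_px: int, cliff: float = 0.5) -> str:
    """Classify grounding risk from the input/training resolution ratio."""
    ratio = input_px / training_px
    if ratio >= 1.0:
        return "nominal"    # at or above training resolution
    if ratio >= cliff:
        return "degraded"   # predictable, gradual degradation
    return "cliff"          # below the reported sharp drop-off
```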

Agent Memory and Long-Horizon Planning

A cluster of papers explored how agents maintain state across extended task sequences. The consensus finding: current approaches using vector databases for retrieval work well for factual recall but fail at maintaining causal chains across 50+ step plans. Several papers propose graph-based memory structures as an alternative, though none demonstrate production-ready implementations.
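The distinction these papers draw, factual recall versus causal chains, is easy to see in miniature. A minimal sketch of a graph-based memory, with invented class and method names (no paper proposes this exact structure), where each step records its prerequisites so the chain of "why" survives a long plan:

```python
# Minimal sketch of a graph-based agent memory. All names are
# invented for illustration; the papers propose the general
# structure, not this implementation.
from collections import defaultdict

class CausalMemory:
    def __init__(self):
        self.steps = {}                  # step_id -> description
        self.causes = defaultdict(list)  # step_id -> prerequisite step_ids

    def record(self, step_id, description, caused_by=()):
        self.steps[step_id] = description
        self.causes[step_id].extend(caused_by)

    def causal_chain(self, step_id):
        """Walk prerequisites so the agent can replay why a step
        happened, not just retrieve what it said (the failure mode
        of pure vector retrieval on 50+ step plans)."""
        chain, stack, seen = [], [step_id], set()
        while stack:
            sid = stack.pop()
            if sid in seen:
                continue
            seen.add(sid)
            chain.append(sid)
            stack.extend(self.causes[sid])
        return chain
```

A vector store would retrieve "filed ticket" as a relevant memory; the graph additionally answers which scan and which finding led to it.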

Open Source Projects and Tools

Shopify AI Toolkit

Shopify released an open source toolkit for building AI-powered commerce agents. The toolkit includes pre-built actions for inventory management, order processing, customer support routing, and product recommendation. It integrates with Shopify's API and can be connected to any LLM backend.

llama.cpp b8705

Build b8705, released on April 10, added support for Step3-VL-10B (a vision-language model) and fused QKV tensor operations. The fused QKV change reduces memory bandwidth requirements for attention computation, improving tokens-per-second on bandwidth-constrained hardware like consumer GPUs and Apple Silicon.
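The idea behind QKV fusion is straightforward: three separate projections read the same input activations three times, while one matmul against a concatenated weight reads them once, which is exactly what bandwidth-limited hardware wants. A NumPy sketch of the equivalence (shapes are arbitrary; this illustrates the algebra, not llama.cpp's kernels):

```python
# Why fusing Q, K, V projections helps: three matmuls over the same
# input become one matmul against a concatenated weight, so the input
# is read from memory once instead of three times. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_model, seq = 64, 8
x = rng.standard_normal((seq, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

# Separate projections: three passes over x
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Fused projection: one pass over x, then split the result
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)  # (d_model, 3*d_model)
q2, k2, v2 = np.split(x @ w_qkv, 3, axis=1)

assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```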

Notable GitHub Activity on April 10

Several smaller but significant open source updates shipped:

  • LangChain v0.3.14: Added native support for GLM-5.1 as a chat model, plus a new AgentExecutor retry policy for transient tool failures
  • vLLM: Merged a fast-path kernel for GLM-5.1's rotary position encoding variant
  • Open Interpreter 0.5.2: Updated local model support with better llama.cpp integration and reduced setup friction
  • CrewAI v0.28: New "delegation audit" feature that logs which agent handled which subtask, addressing a common debugging pain point
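Retry policies for transient tool failures, the problem LangChain's new AgentExecutor option targets, follow a common shape regardless of framework. A standalone sketch of retry-with-backoff (this is a generic illustration, not LangChain's actual API):

```python
# Generic retry-with-backoff for transient tool failures. Standalone
# illustration; function name and signature are invented, not LangChain's.
import time

def retry_tool(call, attempts=3, base_delay=0.0):
    """Retry `call` on exception, roughly doubling the delay each attempt."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == attempts:
                raise  # exhausted: surface the failure to the agent loop
            time.sleep(delay)
            delay = delay * 2 if delay else 0.1
```

The key design choice is re-raising on the final attempt rather than swallowing the error, so the agent loop can distinguish "tool is flaky" from "tool is broken".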

How These Releases Connect

[Diagram: April 10, 2026 AI release landscape. Three columns: model releases (Claude Mythos, gated, cybersecurity; GLM-5.1, MIT, 128K context; Waypoint-1.5, open, 3D generation), research papers (OWASP Agent Threats v0.4, security taxonomy; multimodal reasoning, constraint analysis; agent memory, graph-based proposals), and open source (Shopify AI Toolkit, commerce agents; llama.cpp b8705, Step3-VL and fused QKV; LangChain v0.3.14, GLM-5.1 support; CrewAI v0.28, delegation audit). Models feed into open source tooling; papers shape how models deploy. Theme: security, local inference, open weights.]

The pattern from April 10 is clear: the major labs are moving toward specialized, gated releases for high-stakes domains (cybersecurity first), while the open source ecosystem rapidly integrates whatever model weights become available. Research papers are catching up to document the risks this creates, particularly around agent autonomy and tool use.

What This Means for Developers

If you are building AI applications, the April 10 releases suggest three things worth acting on:

  1. Cybersecurity AI is becoming a distinct product category. Both Anthropic and OpenAI launched cybersecurity-specific programs on the same day. If your product touches security, expect to see specialized models available through enterprise programs before general availability.

  2. Open source model quality keeps climbing. GLM-5.1 under MIT license and Waypoint-1.5 with open weights continue the trend of high-quality models you can self-host. The gap between proprietary and open is narrowing faster in specific domains (coding, multilingual, 3D generation) than in general reasoning.

  3. Agent security is now documented enough to act on. The OWASP AI Agent Threat Model v0.4 gives you a structured checklist. If you are shipping agents that use tools or maintain persistent memory, this document is the closest thing to an industry standard for threat modeling.

Running the Open Source Releases Locally

For developers who want to try the open source releases from April 10, here are the quick-start paths:

GLM-5.1 via llama.cpp:

```shell
# Get the GGUF quantization from Hugging Face
huggingface-cli download zhipu-ai/GLM-5.1-GGUF glm-5.1-Q4_K_M.gguf

# Run with llama.cpp b8705+
./llama-server -m glm-5.1-Q4_K_M.gguf -c 8192 --port 8080
```
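Once the server is up, llama-server exposes an OpenAI-compatible chat completions endpoint. A minimal sketch of the request payload (the parameter values here are illustrative; send the body with any HTTP client):

```python
# Sketch of a request against llama-server's OpenAI-compatible
# /v1/chat/completions endpoint. The payload is built here; send it
# with any HTTP client. Parameter values are illustrative.
import json

payload = {
    "model": "glm-5.1",  # advisory; the server answers with the loaded GGUF
    "messages": [
        {"role": "user", "content": "Summarize the CVE description below."},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

body = json.dumps(payload)
# e.g. POST http://localhost:8080/v1/chat/completions with this body
# and the header "Content-Type: application/json".
```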

Waypoint-1.5 for 3D generation:

```shell
git clone https://github.com/overworld-ai/waypoint
cd waypoint
pip install -r requirements.txt
python generate.py --prompt "medieval tavern interior" --output scene.glb
```

Shopify AI Toolkit:

```shell
npm install @shopify/ai-toolkit
# See docs at github.com/Shopify/ai-toolkit for setup
```

Timeline of April 10, 2026

For quick reference, here is the chronological order of announcements:

  • Morning (US Eastern): Anthropic publishes Project Glasswing blog post, confirms Claude Mythos Preview
  • Late morning: OpenAI announces Trusted Access for Cyber pilot program
  • Midday: Shopify releases AI Toolkit on GitHub
  • Afternoon: llama.cpp b8705 merged with Step3-VL-10B and fused QKV support
  • Evening: OWASP publishes AI Agent Threat Model v0.4
  • Overnight/April 11: Overworld drops Waypoint-1.5, LangChain v0.3.14 ships with GLM-5.1 support

April 10, 2026 will likely be remembered as the day cybersecurity AI went from research curiosity to product category. The combination of frontier model gating, open source momentum, and formal threat modeling marks a maturation point for the field.
