Tiny AI Models for Game NPCs - A Survival Game Experiment
Someone tried running a 500M parameter model as the brain for NPCs in a survival game. The results were both hilarious and surprisingly useful.
What Tiny Models Actually Do
Models under 1B parameters cannot reason about complex plans. But they can do something useful: generate plausible-sounding dialogue and make simple reactive decisions. An NPC that says "I see fire, I should run" does not need GPT-4-level reasoning. It needs pattern matching and basic cause-and-effect logic.
The 500M model handled simple survival behaviors well. "I am hungry, I should find food." "There is a threat, I should hide." "The player gave me an item, I should express gratitude." These are pattern completions, not reasoning, and tiny models excel at pattern completion.
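One way to frame these reactions for a small model is as few-shot pattern completion: show it two or three situation-to-reaction pairs and let it complete the next one. The sketch below illustrates the prompt shape; `tiny_model` is a hypothetical stand-in for whatever local inference call you use, stubbed here with a canned completion.

```python
# Sketch: NPC reactions as few-shot pattern completion.
# `tiny_model` is a hypothetical placeholder for a local <1B LLM call.

def tiny_model(prompt: str) -> str:
    # Stub: a real 500M model would complete the pattern established
    # by the few-shot examples in the prompt.
    return "I am hungry, I should find food."

FEW_SHOT = (
    "Situation: fire nearby. Reaction: I see fire, I should run.\n"
    "Situation: threat spotted. Reaction: There is a threat, I should hide.\n"
)

def react(situation: str) -> str:
    # The model never plans; it just completes the established pattern.
    prompt = FEW_SHOT + f"Situation: {situation}. Reaction:"
    return tiny_model(prompt).strip()

print(react("stomach empty"))
```

The few-shot examples do most of the work here; the model only needs to continue a pattern, not understand the game.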
Where It Breaks Down
Planning is where tiny models fall apart. Ask the NPC to coordinate with other NPCs, plan a multi-step strategy, or respond to novel situations, and the output becomes nonsensical. The model does not have enough parameters to hold a coherent world model.
The fix is to not ask the model to plan. Use a traditional behavior tree for strategy and decision-making. Use the tiny model only for dialogue generation and flavor text. The behavior tree decides what the NPC does. The model decides how the NPC talks about what it is doing.
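A minimal sketch of that split, with the behavior tree reduced to prioritized rules and the tiny model stubbed out behind canned fallback lines (all names and thresholds here are illustrative, not from the original experiment):

```python
# Sketch: behavior tree decides the action; a small model (stubbed)
# only phrases what the NPC says about it.

from dataclasses import dataclass

@dataclass
class WorldState:
    health: int
    hunger: int
    threat_nearby: bool

def decide_action(state: WorldState) -> str:
    # Behavior tree as prioritized rules: deterministic and debuggable.
    if state.threat_nearby:
        return "hide"
    if state.health < 30:
        return "rest"
    if state.hunger > 70:
        return "find_food"
    return "patrol"

def flavor_text(action: str) -> str:
    # In practice this would prompt the tiny model; canned fallbacks
    # keep the game playable if generation fails or returns nonsense.
    fallbacks = {
        "hide": "Something's out there. Stay quiet.",
        "rest": "I need to sit down for a moment.",
        "find_food": "My stomach won't stop growling.",
        "patrol": "All clear so far.",
    }
    return fallbacks[action]

state = WorldState(health=80, hunger=90, threat_nearby=False)
action = decide_action(state)
print(action, "->", flavor_text(action))
```

The key design point: the model's output never feeds back into `decide_action`, so a nonsensical generation can only produce odd dialogue, never odd behavior.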
Why This Matters Beyond Games
The same principle applies to desktop agents. Not every AI task needs a 70B model. Reading a notification and deciding if it is important? A tiny model can handle that. Generating a draft email? You probably want something bigger. Matching the model size to the task complexity is how you keep things fast and cheap.
Running ten tiny models simultaneously for different subtasks can outperform one large model trying to do everything.
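Both ideas can be sketched together: a router that sends cheap classification tasks to a tiny model and generative tasks to a large one, plus a thread pool fanning independent subtasks out to tiny workers in parallel. The task names and model labels are illustrative assumptions, and the workers are stubs standing in for real inference calls.

```python
# Sketch: match model size to task complexity, and fan independent
# subtasks out to tiny models in parallel. All names are illustrative.

from concurrent.futures import ThreadPoolExecutor

TINY_TASKS = {"classify_notification", "tag_intent", "score_urgency"}

def pick_model(task: str) -> str:
    # Cheap pattern-matching tasks go to a small model;
    # open-ended generation goes to a large one.
    return "tiny-500m" if task in TINY_TASKS else "large-70b"

def tiny_worker(subtask: str) -> str:
    # Stand-in for a call to one small specialized model.
    return f"{subtask}: done"

subtasks = ["classify_notification", "tag_intent", "score_urgency"]
with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
    results = list(pool.map(tiny_worker, subtasks))
print(results)
```

Because each tiny model is small enough to fit in memory alongside the others, the parallel fan-out costs little more than running one of them.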
Fazm is an open-source macOS AI agent, available on GitHub.