
Just Build It?

Always look around. Today I saw Amp release their lineup of models used for various tasks. I knew they did this, but looking at it got me thinking: we can just build that as skills for Claude or basically any other agent. Some examples:

Session Analyst. A skill that sifts through Claude Code sessions automatically, analyzing patterns and usage. Relying on Gemini’s large context window makes sense because we want to feed in as many sessions as possible and spot patterns across them. Where do we keep yelling the same thing at the model? Is there potential for a small change to CLAUDE.md? Does it make sense to make a skill for this? How can we be better at communicating context? So much power in this. And all we need is to take the data that’s stored anyway and pump it into Gemini.
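A minimal sketch of the gathering step, assuming Claude Code keeps its transcripts as JSONL files under ~/.claude/projects and that user turns carry a string content field (both may differ on your install); it just collects your prompts and writes one big analysis prompt you can hand to a long-context model:

```javascript
// Sketch: collect user prompts from stored Claude Code sessions and
// build one big analysis prompt for a long-context model.
// Path and record shape are assumptions -- adjust to your setup.
import { readdirSync, readFileSync, writeFileSync, statSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

const root = join(homedir(), ".claude", "projects");
const userMessages = [];

for (const project of readdirSync(root)) {
  const dir = join(root, project);
  if (!statSync(dir).isDirectory()) continue;
  for (const file of readdirSync(dir).filter((f) => f.endsWith(".jsonl"))) {
    for (const line of readFileSync(join(dir, file), "utf8").split("\n")) {
      if (!line.trim()) continue;
      try {
        const entry = JSON.parse(line);
        if (entry.type === "user" && typeof entry.message?.content === "string") {
          userMessages.push(entry.message.content);
        }
      } catch {} // skip malformed lines
    }
  }
}

const prompt = [
  "Below are my prompts from recent Claude Code sessions.",
  "Find instructions I keep repeating and suggest CLAUDE.md additions or skills that would remove the repetition.",
  "",
  ...userMessages,
].join("\n");

writeFileSync("analysis-prompt.txt", prompt);
```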

Handoff System. Know that feeling where you remember a session where you worked on XYZ but it’s hard to find it again? At the same time, compaction leaves you with lossy data about what was really going on, and you have no way of tuning it to your own taste. But you could just have Gemini summarize the session on demand in a more structured way. Enrich the data with the projects you worked in and the files you edited, and you can build a lookup system on top of it to find the relevant session when you need it.
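The structured part is what makes the lookup trivial. A sketch of what the index could look like, with the schema and file name invented for illustration; the summary field is where Gemini’s output would go:

```javascript
// Sketch: append structured handoff records to a JSONL index and search them later.
// Schema and file name are made up for illustration.
import { readFileSync, appendFileSync, existsSync } from "node:fs";

const INDEX = "handoffs.jsonl";

// One record per session; `summary` would come from a Gemini call over the transcript.
function saveHandoff({ sessionId, project, filesEdited, summary }) {
  const record = { sessionId, project, filesEdited, summary, savedAt: new Date().toISOString() };
  appendFileSync(INDEX, JSON.stringify(record) + "\n");
}

// Dead-simple lookup: substring match over project, files, and summary text.
function findHandoffs(query) {
  if (!existsSync(INDEX)) return [];
  return readFileSync(INDEX, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line))
    .filter((r) =>
      [r.project, r.summary, ...(r.filesEdited ?? [])]
        .join(" ")
        .toLowerCase()
        .includes(query.toLowerCase())
    );
}

saveHandoff({
  sessionId: "abc123",
  project: "claude-sessions-tui",
  filesEdited: ["src/list.ts", "src/summary.ts"],
  summary: "Reworked the session list to group by project and added fuzzy search.",
});
console.log(findHandoffs("fuzzy search"));
```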

Session Summaries. After building Claude Sessions TUI I realized that the summaries in session files are a mess and just don’t give you enough context to know what was going on. Again, no ownership, no steerability. You don’t own the prompt so you can’t steer the outcome. But it’s just files, so not that hard to build yourself and tune it like a guitar.

Model-specific Skills. The Gemini Look and Painter skills I implemented a while ago serve me well. There is no reason for complete model-provider lock-in on every task. We can extend the tools and pick the right model for the right job.

This isn’t a new idea. Peter Steinberger built Oracle, a tool to invoke GPT-5 Pro with custom context when you’re stuck. OpenCode lets you configure different models per agent, using a faster model for planning and a more capable one for implementation. Claude Code also lets you specify models per agent and skill, but only Claude models. If you want Gemini’s 1M context window or GPT-5 for specific tasks, you build it yourself.

On a practical note: I first implemented this using the Gemini CLI, but it adds massive overhead. Same prompt, same model: 2 minutes with the CLI, 3 seconds with OpenRouter, roughly 40x faster. The CLI pays the cost of running in agentic mode, even with --sandbox. I now use OpenRouter with a small JavaScript wrapper.
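A minimal sketch of such a wrapper, assuming OPENROUTER_API_KEY is set in the environment; the model slug is just an example, pick whatever OpenRouter lists for the job:

```javascript
// Sketch: minimal OpenRouter wrapper (Node 18+, ESM for top-level await).
// OPENROUTER_API_KEY must be set; the model slug is an example.
async function ask(model, prompt) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`OpenRouter ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

console.log(await ask("google/gemini-2.5-pro", "Summarize this session: ..."));
```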


Want help building AI automations? Let's talk