Skill Trees and Hooks: How I Automated My Entire Claude Code Setup
Finlay's screen. The `/hg-setup` skill is running. It scans his machine, compares what he has installed against the recommended Hourglass stack, then starts asking questions. Not menu prompts. Actual questions: "You're missing the brain sync setup - want me to walk through that now?" He says yes. It routes to a sub-skill. That sub-skill asks another question. Answers branch into different paths. The whole thing feels like a conversation with someone who already knows how our setup should work.
I built most of this. And watching it run end-to-end on someone else's machine for the first time, I thought: this isn't a script. This is what a system feels like.
That was Session 4 of Build Hour. Finlay and I ran through the full Hourglass Claude Stack live - the skills, the hooks, the brain, the experimental tier. This post is the architecture behind it. Specifically, why skills and hooks together are what turn Claude Code from an assistant into something that persists across every session.
Two primitives. Very different jobs. Once you get the distinction, you start seeing automation opportunities everywhere.
The one distinction that changes everything
A skill fires when Claude thinks the moment is right. A hook fires when an event happens - no matter what.
That's the whole distinction. It's load-bearing.
Skills are probabilistic. You describe a trigger: "when I say ship it" or "whenever we start a new project." Claude picks it up most of the time. When a session is fresh, it's reliable. When a session has been running for two hours and context is compressing under the load of everything you've discussed, it starts to miss triggers. That's fine for convenience features. Not fine for requirements.
Hooks are deterministic. They're wired to events: every file write, every session start, every commit attempt. They don't read the conversation. They don't care what you said or how long the session has been running. They just run.
This means: hooks are for the rules you can't afford to miss. Skills are for the workflows you want to make convenient. Use the wrong primitive for the job and you end up with rules that slip through when things get busy, or hooks firing indiscriminately when you only wanted them sometimes.
The real power is using both. Let that framing do the work.
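To make the skill side concrete, here's a minimal sketch of a skill definition, assuming the SKILL.md format Claude Code uses (a markdown file with `name` and `description` frontmatter). The `ship-it` skill and the `scripts/deploy.sh` path are hypothetical, not anything from our stack:

```markdown
---
name: ship-it
description: Use when the user says "ship it" or asks to deploy the current branch.
---

1. Confirm which environment the user wants to deploy to.
2. Run scripts/deploy.sh (hypothetical path) and wait for it to finish.
3. Report the result, and stop if anything failed.
```

The `description` field is the probabilistic trigger: Claude matches it against the conversation, which is exactly why it can get missed late in a long session.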
Skill trees - what happens when skills call skills
A single skill is useful. A skill that calls other skills is a different kind of thing entirely.
The `hg-setup` skill is the clearest example. When Finlay ran it, it didn't execute a flat script. It scanned his machine, compared what he had against the recommended stack, and presented each gap as a choice: "Want me to install this? Want to skip this one? Want me to configure that?" Each path routed to a different sub-skill. Each sub-skill did its specific job and handed back control.
That's a skill tree: one parent orchestrating many children, routing based on your answers.
The critical unlock is pairing a skill with a deterministic script behind it. The skill handles the human layer - the questions, the choices, the branching, the confirmation messages. The script behind it handles the reliable stuff: installations, file writes, config changes. You get the warmth of a conversation on top and the reliability of code underneath.
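As an illustration of that split, here's a minimal Python sketch of the deterministic layer a setup skill could call. The tool list, filename, and output shape are all hypothetical, not the actual Hourglass implementation:

```python
# scan_stack.py - hypothetical deterministic scan script for a setup skill.
# The skill layer asks the questions; this script just reports facts.
import json
import shutil

# Example tool list; a real recommended stack would live in config.
RECOMMENDED_STACK = ["git", "node", "gh"]

def find_gaps(required):
    """Return the tools from `required` that aren't on this machine's PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

if __name__ == "__main__":
    # Emit gaps as JSON so the skill can turn each one into a question.
    print(json.dumps({"missing": find_gaps(RECOMMENDED_STACK)}))
```

The skill reads that JSON and asks about each gap. The script never guesses, and the skill never installs anything the script didn't flag.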
This pattern opens up genuinely useful things you couldn't build with a script or a prompt alone:
- A deployment skill that asks about the environment, checks your code, then runs the deploy script when you say go
- An onboarding wizard that walks a new team member through your entire setup step by step, adapting to what they already have
- A review skill that routes your code to a second AI model for adversarial feedback before it ships
All of these feel like conversations. All of them have hard logic running underneath. That combination is what makes them trustworthy - you're not hoping Claude does the right thing, you're giving it a script to run when it's time.
Hooks - the rules that survive context collapse
Here's the practical problem with skills in long sessions: Claude forgets.
Not literally - the context window is there. But as a session runs long, the model compresses older context to make room. Instructions from an hour ago carry less weight than instructions from five minutes ago. A skill that was triggering reliably at session start starts to miss things toward the end. That's the nature of probabilistic systems.
Hooks don't have this problem. They're not reading the conversation. They respond to events.
The em dash hook is the simplest example I can give. Claude loves em dashes. Every draft it produces, every article, every client proposal - littered with them. I don't use em dashes. Every time I'd share a draft, I'd be doing a find-and-replace run before it went anywhere. So Finlay built a hook that fires every single time Claude writes or edits a file. It scans for em dashes. If it finds one, it blocks the write and forces Claude to fix it before proceeding.
No reminding. No "please don't use em dashes" in every prompt. No hoping Claude catches it when the session is tired. Just a rule in the environment that enforces itself.
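A hook like that can be a few lines of Python. This is a sketch under my reading of the Claude Code hook protocol - the hook receives the tool call as JSON on stdin, and exiting with code 2 blocks the action and feeds stderr back to Claude. The payload field names here are assumptions to verify against the current docs:

```python
# no_em_dash.py - sketch of a pre-write hook that blocks em dashes.
import json
import sys

EM_DASH = "\u2014"

def should_block(payload: dict) -> bool:
    """True if the file-writing tool call contains an em dash."""
    tool_input = payload.get("tool_input", {})
    # Write calls carry "content"; Edit calls carry "new_string" (assumed names).
    text = tool_input.get("content", "") or tool_input.get("new_string", "")
    return EM_DASH in text

# In the real hook, wire it to stdin and the exit code:
#   payload = json.load(sys.stdin)
#   if should_block(payload):
#       print("Em dash found - rewrite without it.", file=sys.stderr)
#       sys.exit(2)  # exit 2 blocks the write and shows stderr to Claude
```

Note there's no conversation-reading anywhere in it: the rule holds whether the session is five minutes or five hours old.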
Other hooks worth building:
- Session-start loader - fires before you type a word, pulls in your business brain so Claude starts with full context already loaded
- Pre-commit checks - typecheck and lint before any commit goes through; the commit fails until they pass
- Hook-based skill triggers - catch the conversational triggers a skill might miss under high context load
The rule of thumb: if it's a preference, use a skill. If it's a requirement, use a hook.
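Wiring follows the same split. Here's roughly what hook registration looks like in `.claude/settings.json`, to my understanding of the schema - the event name, matcher syntax, and script path are things to check against your Claude Code version:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "python3 .claude/hooks/check_style.py" }
        ]
      }
    ]
  }
}
```

The matcher scopes the hook to file-writing tools, so it doesn't fire on every shell command.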
Starting every session ready
The session-start hook is the highest-leverage thing in the stack for an actual business.
Most teams use Claude Code like a blank slate. Every session, someone briefs Claude on the project, the conventions, the current state. Some context comes from reading the files in the repo. Most comes from manual pasting or hoping Claude figures it out. It's a tax you pay at the start of every session, forever.
A session-start hook changes the starting condition. Instead of briefing Claude, Claude comes in briefed.
For us, that means the Hourglass brain loads automatically when a session begins - everything about how we work, what we've built, our clients, our stack, decisions we've made and why. It's a structured markdown knowledge base covering every domain we operate in.
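Mechanically, this can be a single session-start hook whose output becomes context - assuming the behavior that a `SessionStart` hook's stdout is injected into the conversation. The event name is my reading of the hook schema, and the paths are placeholders, not our actual brain layout:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "cat ~/brain/company.md ~/brain/conventions.md" }
        ]
      }
    ]
  }
}
```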
The quality difference is immediate. Claude working from generic training data and Claude working from your actual context are two different tools. One is a generalist assistant. The other is something closer to a colleague who's been paying attention.
The session-start hook is the simplest thing to build: one hook, one context file. The compounding effect of starting every session with your full context already in place keeps growing as your brain grows.
This is the core of what Hourglass builds for Australian businesses. Not just the hook, but the knowledge architecture behind it - the brain that makes Claude actually useful for your specific company, your clients, your operations.
From one-shot to persistent - why this changes everything
Most people experience Claude Code as a sequence of one-shot interactions. Ask a question, get an answer, ask another. The assistant resets between sessions. Your preferences reset. Your conventions reset. Your context resets. Every session starts cold.
Skills and hooks together break this pattern.
Your preferences live in hooks. They run every session, every time, without you invoking them. Your workflows live in skills. They trigger on context or on command. Your knowledge lives in a brain that loads at session start. Nothing resets. The system accumulates.
This is a fundamentally different product than "AI assistant."
The compounding effect is real. A skill takes 30 minutes to build. If it saves five minutes every session, it's paid for itself by the end of the week. But the bigger return is what you stop losing: the context that evaporates between sessions, the preferences you forget to mention, the rules that slip through when things get busy.
The question worth sitting with: what's the cost of Claude not knowing how you work? For most teams, it's hidden but significant. Wrong formatting, missed conventions, extra prompting, rework. The invisible overhead of re-teaching the same things constantly.
Skills and hooks solve that overhead. Not by making Claude smarter. By making your environment smarter about Claude.
The best place to start is one hook and one skill. Pick one rule you enforce manually every time - a formatting preference, a naming convention, something you find yourself correcting in every draft. Make it a hook. Pick one workflow you repeat every session. Make it a skill. Run that for a week.
If you're an Australian business that wants to build this properly - the skills, the hooks, the brain, the full automation stack - that's exactly what Hourglass does. We design and implement AI workflow systems for teams that want AI running their operations, not just assisting with them. Get in touch and let me know what you're trying to automate.
This article is licensed under CC BY-NC 4.0. Share freely with attribution.
HOURGLASS AI
Want AI that actually runs your workflows - not just assists with them?
Hourglass builds custom AI automation for Australian businesses. Skills, hooks, context systems - the full stack, designed for how your team actually works.
See how Hourglass works →