AI & Building · 7 min read

How I Onboarded My Co-founder Into My Claude Code Stack (Live)


Most Claude Code setups are one person's project. Built up slowly. A CLAUDE.md here, a skill file there, a hook that catches something you kept doing wrong. After a few months it works well and you stop noticing the pieces.

Then you watch someone else use it for the first time.

This week I ran Build Hour live with Finlay - my co-founder at Hourglass - and watched him step through our team onboarding wizard on his own machine. Claude compared his setup against ours, flagged every gap, and walked him through fixing each one in under an hour. It took less time than writing a doc. But watching him encounter the whole system cold was the real education. I had built something I'd never actually had to explain.

This post is about what we walked through on air: how the onboarding wizard works, what you absolutely must not share on screen when you're showing an AI setup live, and the mechanics that make a Claude Code stack transferable - from one person's system to a shared setup that compounds for everyone.

The portability test

Here's a question worth asking: if your co-founder opened your Claude Code setup tomorrow, would they know what to do?

Not the obvious things. The context files that load before every session. The hooks running silently on every edit. The brain folder with your business knowledge. The rules in CLAUDE.md that exist because something went wrong once and you never wanted it to happen again.

Most setups fail this test. Not because they're bad - because they were built by one person over time, and all the implicit knowledge is in that person's head.

The fix isn't documentation. It's a setup wizard. When Finlay ran `/hg-setup` on his machine, Claude read his current environment, compared it against our team configuration, and stepped him through every gap - missing skills, paths pointing to the wrong places, env vars that weren't set. No doc to read. No guessing what "set it up like mine" means. Just questions, answers, and changes.

That distinction matters more than it sounds. A doc goes stale the moment you update the setup. A wizard runs against the actual current state every time.
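A minimal sketch of that compare step, assuming the team baseline is just a list of required env var names (the real wizard also checks installed skills, brain paths, and hooks):

```shell
# Hypothetical baseline file: one required env var name per line.
baseline="/tmp/hg-baseline.txt"
report="/tmp/hg-gaps.txt"
printf '%s\n' HG_BRAIN_DIR HG_SKILLS_DIR > "$baseline"

# Compare the actual current state against the baseline, every time it runs.
: > "$report"
while read -r var; do
  eval "val=\${$var:-}"              # portable indirect variable lookup
  if [ -z "$val" ]; then
    echo "MISSING: $var" >> "$report"
  fi
done < "$baseline"
echo "$(wc -l < "$report") gap(s) to step through"
```

The point of the sketch: nothing here is a snapshot. Each run re-reads the live environment, so the "doc" can never go stale.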

What the onboarding wizard does

`/hg-setup` is a skill. But it's also a machine inspection.

When it runs, it reads your environment: what's installed, what's missing, which brain folders exist, which env vars are set. It compares that against a known-good team baseline. Then it steps through the delta, one item at a time.

Some things it flags are obvious - a skill that isn't installed yet. Others are subtler: a brain path pointing to a folder that only exists on one machine, or a session-start hook that runs fine on macOS but breaks on a different setup.

The philosophy: pair every skill with a deterministic script. The skill handles the conversation - questions, choices, branching logic. The script handles the actual changes. You get the clarity of a guided flow with the reliability of code. Nothing gets configured from memory. Nothing gets skipped because the new person didn't know to ask.

By the end of the session, Finlay had the same setup I have. Same brain access. Same hooks running on every edit. Same skills in the tree. Same starting point for every session. That's the goal - portable by design, not portable by accident.

Security: what not to put on screen

We built this session live - screen shared, 50+ people watching. There are things you should never show when demonstrating a Claude Code setup.

Three rules I follow:

1. Close your CLAUDE.md files before sharing your screen. These files often contain API patterns, internal service names, exact prompt templates, and occasionally credentials that were meant to be temporary and weren't. They're dense and easy to skim past on a stream.

2. Clear your terminal history. If you've been setting env vars or running API calls in your current session, your scrollback and history file both contain that information. A line like `export GROQ_API_KEY=...` or `history | grep KEY` scrolls by fast on a stream. Clear both before you go live.

3. Never run external connections and private data in the same Claude session. This is the prompt injection risk. Three conditions create genuine exposure: you're connected to an external data source, you're working with private internal data, and some of that input is untrusted. With all three active at once, a malicious instruction embedded in a document or webpage can redirect Claude in ways you didn't ask for. The fix is straightforward - keep external-fetching sessions separate from sessions that touch your private context.
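Rules 1 and 2 can be partly scripted. A pre-stream hygiene sketch - the file names and credential patterns here are examples, not an exhaustive scan, and the leaked key is a fake planted for the demo:

```shell
# Demo working directory with a simulated leftover credential.
WORKDIR="${WORKDIR:-/tmp/prestream-demo}"
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1
echo 'export GROQ_API_KEY=sk-demo-not-real' > .env   # fake, for the demo

# Rule 2: wipe the shell history file before going live.
: > "${HISTFILE:-$HOME/.bash_history}"

# Rule 1: scan files you're about to show for obvious credential patterns.
grep -rnE '(API_KEY|SECRET|TOKEN)=' CLAUDE.md .env 2>/dev/null > scan.txt
if [ -s scan.txt ]; then
  echo "review scan.txt before sharing your screen"
else
  echo "no obvious keys found"
fi
```

A scan like this catches the obvious leaks, not the subtle ones - it's a floor, not a substitute for closing sensitive files entirely.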

Beyond those three, the basics: env vars in `.env` files only, never echoed into output. API calls go through Anthropic's servers, which have strong data controls - opt out of training data collection if you're handling client information.

Most Claude Code workflows are safe by default. The job is not accidentally unmaking that.


One brain, two people

The most underrated part of the Hourglass setup is the brain.

Not because of what's in it - our market context, product decisions, client files, operational processes. That stuff matters but it's the obvious part. The underrated part is that both of us write to it.

When Finlay makes a design decision, it goes to the brain. When I close a sales call and learn something new about how a prospect describes their problem, that goes in too. Every session we run starts with that context loaded. Neither of us re-explains what the other already figured out.

The write-back system is what makes this work. Without it, the brain is a snapshot - useful for a while, then increasingly stale. We use `/hg-brain-write` to route new learnings through an inbox before they're committed to the vault. Claude doesn't write directly. There's a review step. Nothing half-formed gets in.
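The inbox-then-review path can be sketched in a few lines - the folder layout and note format here are assumptions, not the real `/hg-brain-write` internals:

```shell
# Hypothetical brain layout: an inbox for raw captures, a vault for
# reviewed knowledge.
BRAIN="${BRAIN:-/tmp/hg-brain}"
mkdir -p "$BRAIN/inbox" "$BRAIN/vault"

# 1. New learnings land in the inbox, never directly in the vault.
note="$BRAIN/inbox/sales-language.md"
echo "Prospects describe the problem as version drift." > "$note"

# 2. A human review step is what promotes a note into the vault.
promote() { mv "$1" "$BRAIN/vault/"; }
promote "$note"

ls "$BRAIN/vault"
```

The design choice is the separation itself: the model can propose, but only a reviewed move lands in the shared context everyone loads.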

For a two-person team, the practical result is this: you're not doubling your context - you're multiplying it. Every insight either of you captures becomes available to both of you in every future session. The compounding advantage builds fast.

The test I use: could Finlay run a client session without me and still have Claude answer in the right context? After this onboarding session - yes.

When one person's discovery becomes everyone's

Here's what makes this genuinely a team setup and not just two people using the same tool independently.

When one person improves the stack, everyone gets the option to use it.

Not automatically pushed - that would be annoying and would mean accepting someone else's experiments without trying them. We use an experimental tier: when someone builds a skill or hook they find valuable, they share it into a pool the team can opt into. Try it, keep it, promote it to the shared stack. Or don't. No one gets forced into someone else's work.

During the Build Hour stream, I ran the stack update and pulled in skills that had been promoted since my last sync - new capabilities in a few seconds. It feels like a package manager for AI behaviour, which is exactly what it is.
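The opt-in flow can be sketched like this, assuming the shared pool is just a folder of skill files (in practice ours is version controlled):

```shell
# Hypothetical paths: a shared pool of promoted skills, and your local tree.
POOL="${POOL:-/tmp/team-skill-pool}"
LOCAL="${LOCAL:-/tmp/my-skill-tree}"
mkdir -p "$POOL" "$LOCAL"
echo '# promoted skill (demo)' > "$POOL/design-review.md"

# Pull in promoted skills you don't have yet; never clobber local copies,
# so your own experiments survive every sync.
for skill in "$POOL"/*.md; do
  name=$(basename "$skill")
  if [ ! -e "$LOCAL/$name" ]; then
    cp "$skill" "$LOCAL/$name"
  fi
done
ls "$LOCAL"
```

The never-clobber rule is what keeps the sync opt-in rather than a forced push.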

The deeper point: your Claude Code setup should be a shared codebase, not a personal document. Version controlled. One person's clever solution to a recurring problem should be available to everyone on the team without anyone having to remember to share it.

We're taking this into a broader cohort soon - our Claude Code course, opening with a founding group of builders. Every participant will have the shared skill tree running, plus the `/hg-daily-builds` skill that reads your session history daily and posts one transferable insight to a shared channel. Automatically. Your discovery, surfaced to the whole group. That's the compounding loop that makes a team's AI setup compound faster than any individual's ever will.
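A hypothetical version of that daily loop - the `INSIGHT:` tag, log layout, and channel file are all invented for the sketch, not the real `/hg-daily-builds` internals:

```shell
# Demo session log with one tagged insight in it.
LOGS="${LOGS:-/tmp/claude-sessions}"
CHANNEL="${CHANNEL:-/tmp/daily-builds.md}"
mkdir -p "$LOGS"
echo "INSIGHT: pair every skill with a deterministic script" > "$LOGS/demo.log"

# Pull the first tagged insight from the day's logs and post it, dated,
# to the shared channel file.
insight=$(grep -h '^INSIGHT:' "$LOGS"/*.log | head -n 1)
if [ -n "$insight" ]; then
  echo "$(date +%F) ${insight#INSIGHT: }" >> "$CHANNEL"
fi
cat "$CHANNEL"
```

Run on a schedule, a loop like this is what turns one person's discovery into the whole group's default.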

If you want to set this up for your own team - Hourglass is exactly where we help with this. The onboarding wizard, the shared brain, the security model, the skill tree. We've built it for ourselves and now we help other teams do the same.


This article is licensed under CC BY-NC 4.0. Share freely with attribution.

The onboarding session with Finlay took under an hour. By the end, we had the same setup, the same brain, the same starting point for every session. That's what a shareable AI stack looks like in practice.

If you want to build this for your team - or you're already running Claude Code and want to compare notes - talk to us at Hourglass. We work with Australian businesses to set up AI systems that compound across the whole team, not just the one person who built them. Let us know what you're working with.
