Claude Code Practitioner's Handbook
A field-tested reference covering every major feature: modes, hooks, agents, SDK, MCP, and multi-model workflows.
Install & Setup
```bash
# Requires Node.js 18+
npm install -g @anthropic-ai/claude-code
claude --version

# Auth (browser OAuth or API key)
claude                                # opens browser auth
export ANTHROPIC_API_KEY=sk-ant-...
claude --api-key $ANTHROPIC_API_KEY

# Update
claude update   # or: npm update -g @anthropic-ai/claude-code

# Run diagnostics
claude /doctor
```
Set CUDA_VISIBLE_DEVICES before launching Claude Code when you need it to invoke local GPU workloads via Bash hooks or MCP tools.

IDE Integrations
| Surface | How to activate | Docs |
|---|---|---|
| VS Code | Install Claude Code extension, Ctrl+Shift+C | → IDE docs |
| JetBrains | Plugin marketplace → "Claude Code" | → IDE docs |
| Terminal | claude in any directory | → CLI ref |
| GitHub Actions | uses: anthropic/claude-code-action | → GHA docs |
| Web / Desktop | claude.ai → Claude Code tab | → claude.ai |
Model Selection
Switch at any time with /model. Claude Code 2.1+ supports four model slots, including Plan-mode Opus.
Example slot: Sonnet (exec), Option 4 in /model. Opus plans while Sonnet executes; ideal for large refactors (mixed models, higher cost).
```bash
# Switch models interactively
/model

# CLI flags
claude --model claude-opus-4-6
claude --model claude-sonnet-4-6
claude --model claude-haiku-4-5-20251001

# Ollama / local model (OpenAI-compatible endpoint)
ANTHROPIC_BASE_URL=http://localhost:11434/v1 \
ANTHROPIC_API_KEY=ollama \
claude --model qwen3.5:9b
```
Ollama / GLM4 / Qwen3.5 Local Routing
Claude Code can route to any OpenAI-compatible endpoint. Useful for Arthavidya distillation runs where you want the teacher (Qwen3.5-9B) reviewed locally before pushing to cloud.
```
# ~/.claude/settings.json : environment overrides
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:11434/v1",
    "ANTHROPIC_API_KEY": "ollama"
  }
}
```
All Modes: When to Use What
Plan Mode
Use when: starting a non-trivial feature, refactor, or architecture change. Claude reads, analyzes, and produces a structured plan; no file writes until you approve.
In plan mode Claude only has access to read-only tools: Read, LS, Glob, Grep, Task, TodoRead/Write, WebFetch, WebSearch, NotebookRead. It automatically spawns an Explore subagent to scan the repo so main context stays clean.
```
# Activate plan mode, then share a spec
Shift+Tab Shift+Tab
> "Read CLAUDE.md and plan how to add curriculum-based LoRA training to the pipeline"

# Or via command
/plan

# Custom plan command: .claude/commands/plan.md
# Runs the plan subagent with your project-specific checklist
```
Extended Thinking
Use when: debugging gnarly issues, complex algorithm design, security review, architecture trade-offs. Opus 4.6 has thinking enabled by default.
| Trigger phrase | Effort level | When to use |
|---|---|---|
| think | Low | Simple reasoning, quick decision |
| think harder / think more | Medium | Multi-step problems, debugging |
| think a lot / deep think | High | Architecture, complex refactor |
| ultrathink | Maximum | Hard research, security audit, novel algorithm |
Fast Mode
Use when: running Opus 4.6 for interactive work where you need 2.5× faster responses and accept a higher cost per token. Available since v2.1.36.
Not useful for local Ollama models: latency is bound by hardware, not API inference.
Auto-Accept Mode
Use when: you fully trust the plan and want autonomous execution: Claude runs tool calls without per-step confirmation. Only use on branches, never on main.
Headless / Print Mode
Use when: integrating Claude Code into CI scripts, shell pipelines, GitHub Actions, or MLOps automation. Non-interactive, stdout-only output.
```bash
# Print mode in CI
claude -p "Review this diff for security issues" --max-turns 3 < diff.patch

# Budget-capped automation
claude -p "run the test suite and fix failures" --max-budget-usd 2.00 --model haiku
```
CLAUDE.md: Project Memory
CLAUDE.md is the agent's "constitution", loaded at session start. Place it at the project root, in subdirectories (loaded when Claude touches files there), or globally at ~/.claude/CLAUDE.md.
```markdown
# CLAUDE.md example - Arthavidya project

## Project: Arthavidya Finance LLM

### Architecture
Teacher: Qwen3.5-9B (RTX 5070, 12GB VRAM)
Student: Qwen3.5-4B via curriculum LoRA (4 tiers)
Training stack: HuggingFace PEFT + TRL + Unsloth
Experiment tracking: W&B (project: arthavidya)

### Coding standards
- Python 3.11+, type hints required
- Config-driven: no magic numbers in code, use YAML configs
- All training scripts: pause-and-resume checkpoints mandatory
- Tests: pytest + pytest-torch
- Never hardcode API keys; use .env + python-dotenv

### Build commands
!`cat Makefile`  # live-reads Makefile into context

### Do NOT
- Modify production configs in /configs/prod/
- Run training without first calling validate_dataset.py
- Push to main directly; always use feature branches
```
Use @filename imports for large reference docs, move library-specific docs into skills, and use !`command` for dynamic context (e.g., !`git log --oneline -5`).

```
# Auto-generate CLAUDE.md from an existing codebase
/init

# Reference another file inline
@./docs/architecture.md
@./configs/base_config.yaml
```
Settings & Permissions
Settings resolve hierarchically: Enterprise policy → User (~/.claude/settings.json) → Project (.claude/settings.json). Project settings override user settings; policy overrides all.
```
# .claude/settings.json - project-level
{
  "model": "claude-sonnet-4-6",
  "permissions": {
    "allow": [
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(make:*)",
      "Bash(python:*)",
      "Write(src/**)",
      "Edit(*.py)",
      "Edit(*.yaml)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(curl * | bash:*)",
      "Write(.env)",
      "Write(configs/prod/**)"
    ]
  },
  "fastMode": false,
  "enableToolSearch": "auto:5"
}
```
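The precedence chain is easy to picture as a deep merge. The sketch below is hypothetical (the real resolver lives inside Claude Code), but it shows why a project setting beats a user setting while enterprise policy beats both:

```python
def merge_settings(user: dict, project: dict, policy: dict) -> dict:
    """Resolve settings with project overriding user, and enterprise
    policy overriding everything (illustrative sketch only)."""
    def deep_merge(base: dict, override: dict) -> dict:
        out = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = deep_merge(out[key], value)  # merge nested sections
            else:
                out[key] = value                        # later layer wins
        return out
    return deep_merge(deep_merge(user, project), policy)
```

Nested sections (like `permissions`) merge key by key, so a policy can add a `deny` list without clobbering a project's `allow` list.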
```bash
# Interactive permission management
/permissions

# Useful CLI flags
claude --allowedTools "Read,Grep,Glob"    # restrict tools
claude --disallowedTools "Bash,Write"     # deny specific tools
claude --permission-mode acceptEdits      # auto-accept all edits
```
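Permission rules pair a tool name with a pattern over its argument (the command string for Bash, the path for Write/Edit). A hypothetical evaluator using glob matching, with deny rules winning, illustrates the shape of the syntax; the real matcher may differ in details such as how the ':' separator is handled:

```python
from fnmatch import fnmatchcase

def rule_matches(rule: str, tool: str, arg: str) -> bool:
    """Match one rule, e.g. 'Bash(git add:*)' or 'Write(src/**)',
    against a tool invocation (illustrative approximation)."""
    if "(" not in rule:
        return rule == tool          # bare tool name: matches any argument
    name, _, pattern = rule.partition("(")
    pattern = pattern.rstrip(")").replace(":", "")  # 'git add:*' -> 'git add*'
    return name == tool and fnmatchcase(arg, pattern)

def allowed(tool: str, arg: str, allow: list[str], deny: list[str]) -> bool:
    """Deny rules always win; otherwise the call must match an allow rule."""
    if any(rule_matches(r, tool, arg) for r in deny):
        return False
    return any(rule_matches(r, tool, arg) for r in allow)
```

The key property to preserve in any real policy: a deny match short-circuits before allow rules are consulted.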
Custom Slash Commands
Commands are markdown files in .claude/commands/ (project) or ~/.claude/commands/ (global). They're surfaced as /command-name with tab-completion.
```bash
# Create a command
mkdir -p .claude/commands
nano .claude/commands/review-pr.md

# Use it
/review-pr
/fix-issue 123        # $ARGUMENTS passes "123"
/deploy-staging main  # $ARGUMENTS passes "main"
```
Command Anatomy
--- file: .claude/commands/commit.md ---

```markdown
---
allowed-tools: Bash(git add:*), Bash(git status:*), Bash(git commit:*)
description: Create a semantic git commit from staged changes
---

## Context
- Current status: !`git status`
- Current diff: !`git diff HEAD`
- Branch: !`git branch --show-current`
- Recent commits: !`git log --oneline -5`

Create a conventional commit message (type(scope): description) for these changes.
Stage all relevant files and commit.
```
Parameterized Commands
--- file: .claude/commands/fix-issue.md ---
Fix GitHub issue #$ARGUMENTS following our coding standards in CLAUDE.md.
1. Read the issue details with gh issue view $ARGUMENTS
2. Identify affected files
3. Implement the fix with tests
4. Open a PR using /review-pr
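The $ARGUMENTS placeholder used in the steps above is simple string substitution of everything typed after the command name. A minimal sketch of that behavior (illustrative, not Claude Code's actual expansion code):

```python
def expand_command(template: str, invocation: str) -> str:
    """Replace $ARGUMENTS with everything typed after the command name,
    so '/fix-issue 123' expands with ARGUMENTS = '123' (sketch)."""
    parts = invocation.split(maxsplit=1)
    arguments = parts[1] if len(parts) > 1 else ""
    return template.replace("$ARGUMENTS", arguments)
```

Because the whole tail is passed through as one string, a command can accept multi-word arguments like `/deploy-staging main --dry-run` without extra parsing.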
Built-in Commands Reference
| Command | What it does |
|---|---|
/init | Scan codebase and auto-generate CLAUDE.md |
/clear | Reset context (use between unrelated tasks) |
/compact | Summarize and compress context |
/continue | Resume last session |
/model | Switch model interactively |
/permissions | View / edit tool permissions |
/hooks | List configured hooks |
/skills | List available skills |
/cost | Show token usage and spend |
/context | View context usage as colored grid |
/doctor | Run diagnostics |
/bug | Report a bug (sends transcript to Anthropic) |
/batch | Orchestrate parallelizable changes across codebase |
/review-pr | Review open GitHub PR (built-in) |
/pr-comments | Fetch and display PR comments |
/schedule | Schedule remote Claude Code agents on cron |
/fast | Toggle fast mode (Opus 4.6) |
/vim | Enter vim keybinding mode |
/output-style | Set output verbosity / format |
Skills
Skills are packaged workflow libraries: directories under .claude/skills/ with a SKILL.md. They're the evolution of commands with richer structure: supporting files, templates, and subagent delegation. They become /skill-name commands.
```
# Structure
.claude/skills/
  code-review/
    SKILL.md        # entrypoint, metadata
    checklist.md    # supporting doc
    templates/
      review.md
```
--- .claude/skills/train-eval/SKILL.md ---

```markdown
---
name: train-eval
description: Run a training evaluation loop for Arthavidya. Use when asked to evaluate, benchmark, or compare model checkpoints.
allowed-tools: [Bash, Read, Write, Grep]
agent: general-purpose
context: fork
---

# Training Evaluation Skill

## Steps
1. Read the active config from configs/active.yaml
2. Run: python scripts/evaluate.py --checkpoint $ARGUMENTS
3. Parse W&B run URL from stdout
4. Summarize eval metrics in a markdown table
5. Compare against baseline in configs/baselines.yaml

Include ultrathink to deeply analyze any regression.
```
Place ultrathink anywhere in skill content to enable extended thinking for that skill.

Hooks: Deterministic Automation
Hooks intercept Claude Code's lifecycle and run your code (shell scripts, HTTP endpoints, LLM prompts, or agents) at specific events. They're the "must always happen" layer, versus CLAUDE.md's "should do" suggestions.
```
# .claude/settings.json - hooks section
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "python hooks/block_dangerous.py" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "ruff format $CLAUDE_TOOL_INPUT_FILE_PATH" }]
      }
    ],
    "Stop": [
      {
        "matcher": "*",
        "hooks": [{ "type": "command", "command": "notify-send 'Claude Code' 'Session complete'" }]
      }
    ]
  }
}
```
Hook Events

| Event | Fires |
|---|---|
| PreToolUse | Before a tool call; can block or rewrite it |
| PostToolUse | After a tool call completes |
| UserPromptSubmit | When you submit a prompt, before Claude processes it |
| Notification | When Claude Code sends a notification |
| Stop | When the main agent finishes responding |
| SubagentStop | When a subagent finishes |
| PreCompact | Before a context compaction |
| SessionStart | When a session starts or resumes |
Hook Handler Types
- command: a shell script that receives the event payload as JSON on stdin and returns its decision via exit code + stdout. The most common type; full OS access. Other handler types route the event to an HTTP endpoint, an LLM prompt, or an agent.

Practical Hook Patterns
--- hooks/block_dangerous.py ---

```python
import json
import re
import sys

data = json.load(sys.stdin)
cmd = data.get("tool_input", {}).get("command", "")

# Escape the fork bomb so its (){}|; characters are matched literally
BLOCKED = [r"rm\s+-rf", r"curl.*\|\s*bash", re.escape(":(){:|:&};:")]

for pattern in BLOCKED:
    if re.search(pattern, cmd):
        print(json.dumps({"decision": "block", "reason": f"Blocked: {pattern}"}))
        sys.exit(0)

sys.exit(0)  # allow
```

--- hooks/pre-commit-gate.sh (block-at-submit pattern) ---

```bash
#!/bin/bash
# Fires on PreToolUse matching Bash(git commit:*)
# Only allows the commit if /tmp/agent-pre-commit-pass exists
if [[ ! -f /tmp/agent-pre-commit-pass ]]; then
  if python -m pytest tests/ --tb=short -q; then
    touch /tmp/agent-pre-commit-pass
  else
    echo '{"decision":"block","reason":"Tests failed - fix before commit"}'
    exit 0
  fi
fi
exit 0  # allow
```
Agent System Overview
Claude Code has a layered agent architecture. Understanding it lets you route tasks to the right level.
Built-in Agents
Plan mode ships with a built-in Explore agent that scans the repo read-only, and a plan agent that can be customized via .claude/commands/plan.md.

Custom Subagents
```bash
# Create a custom subagent
mkdir -p .claude/agents
```
--- .claude/agents/distill-reviewer.md ---

```markdown
---
name: distill-reviewer
description: Reviews knowledge distillation training logs and checkpoints. Use proactively after any training run completes, or when asked to review, compare, or audit model checkpoints.
tools: Read, Grep, Glob, Bash
model: sonnet
---

You are an expert MLOps reviewer specializing in LLM knowledge distillation.

When reviewing a training run:
1. Parse the W&B run logs or local log file passed to you
2. Check for loss divergence, gradient norm spikes, or NaN values
3. Compare student vs teacher KL divergence across curriculum tiers
4. Flag any checkpoint where eval loss > train loss by >0.3 (overfitting signal)
5. Output a structured markdown report with: status, key metrics, recommendations

Return only the report, no preamble.
```
```
# Invoke a custom agent directly
@distill-reviewer check the run from last night

# Or from a command/skill, the agent is auto-matched by description
> "Review the training logs"   # Claude delegates if the description matches
```
Multi-Agent Patterns
Builder + Validator
--- .claude/agents/validator.md ---

```markdown
---
name: validator
description: Validates code changes for correctness, security, and test coverage. Use after implementation is complete.
tools: Read, Bash, Grep
model: sonnet
---

Run: pytest, ruff, mypy, bandit
Return: PASS or FAIL with specific findings.
```
```
# In a session
> "Implement the data augmentation module, then validate it"
# Claude builds → spawns validator → reports a consolidated result
```
Parallel Worktree Agents
```bash
# Run 3 agents on different features simultaneously
claude -w feature-tokenizer &
claude -w feature-curriculum &
claude -w feature-eval-harness &
wait

# Each works in an isolated git worktree, so no conflicts
git merge feature-tokenizer feature-curriculum feature-eval-harness
```
Meta-Agent (Agent that builds agents)
--- .claude/agents/meta-agent.md ---

```markdown
---
name: meta-agent
description: Generates new subagent definitions from natural language descriptions. Use when asked to "build a new agent" or "create a subagent for..."
---

Given a description, produce a properly formatted .claude/agents/*.md file
following the project's agent standards. Ask clarifying questions if needed.
```
```
> "Build a new sub-agent that monitors VRAM usage during training runs"
# meta-agent creates .claude/agents/vram-monitor.md automatically
```
MCP Servers: External Integrations
Model Context Protocol gives Claude Code access to databases, APIs, and services through a standardized interface. Over 300 integrations available.
```
# .claude/mcp.json - project MCP config
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    },
    "wandb": {
      "command": "python",
      "args": ["-m", "wandb_mcp_server"],
      "env": { "WANDB_API_KEY": "${WANDB_API_KEY}" }
    },
    "postgres": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-postgres", "${DATABASE_URL}"]
    }
  }
}
```
```
# Manage MCP servers interactively
/mcp                          # list connected servers
claude --mcp-server github    # enable a specific server

# Use in prompts naturally
> "Create a GitHub issue for the tokenizer bug and assign it to me"
> "Pull my last 5 W&B runs for the arthavidya project and compare metrics"
```
| MCP Server | Use case | Install |
|---|---|---|
| GitHub | PRs, issues, code review | @modelcontextprotocol/server-github |
| PostgreSQL | Query prod/dev databases | @modelcontextprotocol/server-postgres |
| Filesystem | Extended file access outside project | @modelcontextprotocol/server-filesystem |
| Playwright | Browser automation, regression testing | @executeautomation/playwright-mcp |
| Slack | Send notifications, read messages | @modelcontextprotocol/server-slack |
| W&B | Experiment tracking, run comparison | wandb-mcp-server (community) |
| MLflow | Experiment and model registry | Community server |
Agent SDK
The Agent SDK lets you build custom agent orchestrators in TypeScript or Python, with full control over tool access, hooks, subagents, and permissions. Use it when Claude Code's interactive mode isn't enough.
TypeScript SDK
```typescript
import { query, type ClaudeCodeOptions } from "@anthropic-ai/claude-code";

const options: ClaudeCodeOptions = {
  maxTurns: 10,
  allowedTools: ["Read", "Write", "Bash", "Grep"],
  systemPrompt: "You are an MLOps specialist for Arthavidya...",
  hooks: {
    PreToolUse: async (input) => {
      if (input.tool_name === "Bash" && input.tool_input.command.includes("rm -rf")) {
        return { decision: "block", reason: "Destructive command blocked" };
      }
    },
    Stop: async (input) => {
      console.log(`Session ended: ${input.session_id}`);
    }
  }
};

for await (const message of query({
  prompt: "Review training logs in ./logs/ and flag any anomalies",
  options
})) {
  if (message.type === "text") process.stdout.write(message.text);
}
```
Python SDK
```python
import anyio
from claude_code_sdk import query, ClaudeCodeOptions, PreToolUseHookInput

async def block_destructive(inp: PreToolUseHookInput):
    if inp.tool_name == "Bash":
        cmd = inp.tool_input.get("command", "")
        if "rm -rf" in cmd:
            return {"decision": "block", "reason": "blocked"}

async def main():
    options = ClaudeCodeOptions(
        max_turns=20,
        allowed_tools=["Read", "Write", "Bash"],
        hooks={"PreToolUse": block_destructive},
        system_prompt="MLOps assistant for Arthavidya distillation pipeline",
    )
    async for msg in query(
        prompt="Run evaluate.py on the latest checkpoint",
        options=options,
    ):
        print(msg)

anyio.run(main)
```
SDK Docs Links
- TS SDK docs.claude.com → TypeScript SDK
- Python SDK docs.claude.com → Python SDK
- Hooks SDK platform.claude.com → Hooks in SDK
- Subagents platform.claude.com → Subagents in SDK
- Slash cmds platform.claude.com → Slash Commands SDK
- Skills platform.claude.com → Skills in SDK
CI/CD & GitHub Actions
--- .github/workflows/claude-code-review.yml ---

```yaml
name: Claude Code Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropic/claude-code-action@v1
        with:
          prompt: |
            Review this PR for:
            1. Security vulnerabilities (SQL injection, path traversal)
            2. Missing error handling
            3. Hardcoded credentials
            4. Test coverage gaps
            Report findings as PR comments.
          model: claude-sonnet-4-6
          max-turns: 8
          allowed-tools: Read,Grep,Glob,Bash(pytest:*)
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
```bash
# Trigger from Slack/Jira/CloudWatch → returns a tested PR
$ query-claude-gha-logs --since 5d | \
  claude -p "see what the other Claudes were getting stuck on and fix it, then put up a PR"
```
Ollama / Local Model Integration
Use local models (GLM4, Qwen3.5, Mistral) for sensitive codebase analysis, cost-zero summarization, or Arthavidya student model self-review.
```bash
# Start Ollama with your model
ollama serve
ollama pull qwen2.5-coder:7b
ollama pull glm4:9b

# Route Claude Code to Ollama (OpenAI-compatible API)
export ANTHROPIC_BASE_URL=http://localhost:11434/v1
export ANTHROPIC_API_KEY=ollama
claude --model qwen2.5-coder:7b

# Or a per-session alias
alias claude-local='ANTHROPIC_BASE_URL=http://localhost:11434/v1 ANTHROPIC_API_KEY=ollama claude'
claude-local --model qwen3.5:4b "Review this config file"
```
| Task | Recommended local model | Notes |
|---|---|---|
| Code review / summarize | Qwen2.5-Coder:7B | Good instruction following |
| Finance domain Q&A (Arthavidya) | Arthavidya student (custom) | Your fine-tuned 4B |
| Doc generation | GLM4:9B | Strong Chinese+English |
| Complex reasoning | Qwen3.5:9B (teacher) | Best local quality |
Context Management
Claude Code has a 200K token context window. Mismanage it and tasks fail from exhaustion. Manage it well and you can sustain 100+ turn workflows.
```
# Monitor context usage
/context   # colored grid: green → yellow → red
/cost      # token count + spend

# Force context limits per run
claude -p "..." --max-turns 5
claude -p "..." --max-budget-usd 1.50

# Use subagents for verbose operations
# Their output stays in subagent context, not the main thread
> "Run the full test suite and tell me what failed"
# Claude spawns a subagent → only the summary returns

# Limit context bloat from MCP
ENABLE_TOOL_SEARCH=auto:5   # auto-defer tool defs > 5% of context
```
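For back-of-envelope budgeting before pulling a large file into context, the common ~4 characters/token heuristic is close enough; the real count comes from the tokenizer and differs, especially for code. A sketch (the 50% reserve is an assumption, not a Claude Code setting):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough estimate via the ~4 chars/token rule of thumb; a real
    tokenizer will differ, especially on source code."""
    return int(len(text) / chars_per_token)

def fits_in_context(text: str, window: int = 200_000, reserve: float = 0.5) -> bool:
    """Keep a fraction of the window in reserve for conversation and tool output."""
    return estimate_tokens(text) <= window * (1 - reserve)
```

Running this over candidate files before a long session tells you which ones belong behind an @import versus a subagent summary.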
Workflow Patterns
Specification-First Pattern
Draft the spec in plan mode, get it approved, then implement against it; the spec document becomes the single source of truth for the session.
PR-from-Anywhere Pipeline
Kick off a headless run from Slack, Jira, or CloudWatch and receive a tested PR back (see the CI/CD section).
Multi-Agent Production Pipeline (3-stage)
- Spec agent: Read + Write docs only
- Validator agent: validates constraints
- Builder agent: Bash + Edit + Write

An orchestrator coordinates all three. Each has minimal tool access: the principle of least privilege for AI agents.
Ralph Loop (autonomous completion)
```
# Run the agent against a prompt file until the task is marked complete
# or the iteration limit is reached

# prompt.md
Task: Implement LoRA fine-tuning for curriculum tier 2 of Arthavidya.
Done when: tests pass, W&B shows loss < 0.5, PR is open.
Limit: 20 iterations.

claude --headless -p "$(cat prompt.md)"
```
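The loop itself is just "run, check for a completion marker, repeat". A sketch with an injectable runner; the done_marker convention is illustrative and not something Claude Code defines:

```python
import subprocess

def ralph_loop(prompt: str, max_iters: int = 20,
               done_marker: str = "TASK COMPLETE", run=None) -> bool:
    """Re-invoke a headless run until the output contains a completion
    marker or the iteration cap is hit (pattern sketch)."""
    if run is None:
        def run(p):
            # Headless invocation; assumes the prompt tells the agent
            # to print the marker when its done-criteria are met
            return subprocess.run(["claude", "-p", p],
                                  capture_output=True, text=True).stdout
    for _ in range(max_iters):
        if done_marker in run(prompt):
            return True
    return False
```

The injectable `run` makes the loop testable and lets you swap in budget caps, log capture, or a different CLI without touching the control flow.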
MLOps Workflows with Claude Code
Specific patterns for your Arthavidya / model-fine-tuner-template work.
Training Run Review Hook
--- hooks/post-train-review.sh ---

```bash
#!/bin/bash
# PostToolUse hook on Bash: triggers after any python training script
INPUT=$(cat)
CMD=$(echo "$INPUT" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('tool_input',{}).get('command',''))")
if echo "$CMD" | grep -q "train.py"; then
  # Trigger the distill-reviewer subagent after training completes
  echo '{"systemMessage": "Training complete. Run distill-reviewer on ./logs/latest/"}'
fi
```
Curriculum-Aware Slash Command
--- .claude/commands/train-tier.md ---

```markdown
---
allowed-tools: Bash(python:*), Read, Write
description: Run curriculum training for a specific LoRA tier
---

Train curriculum tier $ARGUMENTS for Arthavidya:
1. Load configs/tier_$ARGUMENTS.yaml
2. Validate dataset: !`python scripts/validate_dataset.py --tier $ARGUMENTS`
3. Run: python train.py --config configs/tier_$ARGUMENTS.yaml --resume
4. After completion, invoke distill-reviewer on the latest checkpoint
5. Log results to W&B project arthavidya
```
DVC + Claude Code Integration
```
# Give Claude access to DVC commands via permissions
{
  "permissions": {
    "allow": [
      "Bash(dvc add:*)",
      "Bash(dvc push:*)",
      "Bash(dvc pull:*)",
      "Bash(dvc repro:*)"
    ]
  }
}

# In session
> "Add the new training dataset to DVC, push to remote, and update dvc.yaml"
```
Troubleshooting
| Symptom | Fix |
|---|---|
| Context limit hit mid-task | /compact then continue; use subagents for verbose ops |
| Claude ignoring CLAUDE.md rules | Move critical rules to Hooks (deterministic); CLAUDE.md is probabilistic |
| Hook not firing | /hooks to verify; check matcher regex with /debug |
| Ollama tool calls broken | Local models lack structured tool use; use for conversational tasks only |
| Subagent context bleed | Ensure skill uses context: fork in frontmatter |
| Slow Opus responses | Enable /fast (2.5× faster, higher cost); or switch to Sonnet |
| Plan mode not exiting | Shift+Tab once to toggle back; or explicitly approve plan |
| Permissions errors | /permissions β add allow rule; use --allowedTools flag |
| Session lost after crash | claude --continue or claude --resume <session-id> |
| MCP server not connecting | Check mcp.json, run /mcp, verify env vars are exported |
```bash
# Useful debug commands
/doctor            # full diagnostics
/debug             # session debug info
/context           # token grid
claude --version   # verify version
claude update      # update to latest

# Read raw session transcripts
ls ~/.claude/projects/
cat ~/.claude/projects/<project-hash>/<session-id>.jsonl | jq .
```
Reference Links
Official Docs
- Overview code.claude.com/docs/en/overview
- CLI ref docs.claude.com → CLI Reference
- Hooks code.claude.com/docs/en/hooks
- Hooks SDK platform.claude.com → Hooks in Agent SDK
- Subagents SDK platform.claude.com → Subagents
- Slash cmds SDK platform.claude.com → Slash Commands
- Skills SDK platform.claude.com → Skills
- MCP docs.claude.com → MCP
- GHA docs.claude.com → GitHub Actions
- IDE docs.claude.com → IDE Integrations
- CLAUDE.md docs.claude.com → Memory & CLAUDE.md
- Permissions docs.claude.com → Security & Permissions
- Plan mode claudelog.com → Plan Mode deep dive
Community Resources
- Awesome CC github.com/hesreallyhim/awesome-claude-code
- Hooks mastery github.com/disler/claude-code-hooks-mastery
- System prompts github.com/Piebald-AI/claude-code-system-prompts
- CC guide github.com/Cranot/claude-code-guide (auto-updated)
- howto github.com/luongnv89/claude-howto
Prompt Engineering for Claude Code
- Prompting docs.claude.com → Prompt engineering overview
- Extended thinking docs.claude.com → Extended thinking