Claude Code Hooks is AMAZING: "Text Message Me When AI Agent is Done"
Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Claude Code’s new “hooks” feature lets developers attach custom shell-command automation to key moments in an AI agent’s workflow—so tasks can trigger real-world notifications, tests, and even Git operations without manual babysitting. In the most practical demo, a “stop” hook fires whenever Claude Code finishes, sending a Twilio text message to a phone. To prove it works end-to-end, the creator runs an agent to generate a small ping-pong simulator web project; once the agent completes, the phone receives a message confirming the task is done. For long-running agent work, that kind of push notification matters because it turns “check back later” into “get alerted immediately when something changes.”
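The video doesn’t show the exact configuration, but a stop hook of this kind might look roughly like the following in settings.json (the JSON shape follows Claude Code’s hooks documentation; the Twilio endpoint is its real Messages API, while the phone numbers, message body, and environment-variable names are placeholders):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -X POST 'https://api.twilio.com/2010-04-01/Accounts/$TWILIO_SID/Messages.json' --data-urlencode 'To=+15551234567' --data-urlencode 'From=+15557654321' --data-urlencode 'Body=Claude Code task finished' -u \"$TWILIO_SID:$TWILIO_TOKEN\""
          }
        ]
      }
    ]
  }
}
```

Because the hook is just a shell command, any notification channel (email, Slack webhook, desktop notification) could be substituted for the Twilio call.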
The hook system also supports “post tool use” automation, enabling immediate follow-up actions after the agent calls a tool. One example generates unit tests automatically after Python code is produced: after creating a simple prime-checking tool, the agent writes a corresponding test file (test.py) and the workflow verifies correct behavior for prime inputs like 2 and 17 and non-primes like 1 and 4. The point isn’t that every project should auto-generate tests this way, but that hooks make it possible to chain quality checks directly into the agent’s output loop.
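As a concrete sketch, the prime-checking tool and its generated tests might look like this in Python (the agent’s actual generated code is not shown in the video, so the function name and structure here are assumptions; the test inputs 2, 17, 1, and 4 come from the demo):

```python
# Illustrative version of the demo's prime-checking tool, plus the kind
# of unit checks a post-tool-use hook could auto-generate into test.py.

def is_prime(n: int) -> bool:
    """Return True if n is prime, using trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The checks from the demo: 2 and 17 are prime; 1 and 4 are not.
assert is_prime(2) and is_prime(17)
assert not is_prime(1) and not is_prime(4)
```

In the hook-driven workflow, the hook would fire after the agent writes the tool, prompting generation of the test file and a test run, so correctness checks happen in the same loop as code generation.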
Next comes a more operational use case: using hooks to stage and commit changes to a GitHub repository. Because Claude Code lacks built-in checkpoints, the creator sets up a hook that runs shell commands to automatically `git add` changes and push them. After restarting Claude Code to ensure the hook configuration in settings.json takes effect, the agent modifies the prime-checking code; the repo status shows files staged and ready to commit. The workflow then expands to multiple iterations—adding a README, updating its author line, and refreshing the repository to confirm new commits appear. The demo even includes a “revert”-style idea: using a hook-driven workflow to roll the repo back to an earlier commit, effectively acting like an automated checkpoint.
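A Git-checkpoint hook of this kind might be sketched as a post-tool-use entry in settings.json (the exact commands from the video are not shown; the matcher tool names and commit message below are assumptions):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "git add -A && git commit -m 'auto-checkpoint' && git push"
          }
        ]
      }
    ]
  }
}
```

Firing on file-editing tools means every agent edit becomes a commit, which is what makes rolling back to an earlier commit act like a checkpoint system.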
Finally, hooks can trigger multimedia feedback. A “post tool use” hook plays an MP3 sound generated with ElevenLabs whenever the agent completes a tool call—so the creator hears a cat meow after running a small Python update. The overall takeaway is that hooks turn Claude Code from a purely interactive coding assistant into an automation layer: notifications for completion, tests for correctness, Git for version control hygiene, and audio/other side effects for immediate, human-friendly feedback.
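The audio hook is the simplest of all—just a player command on the post-tool-use event (a minimal sketch; `afplay` is the macOS built-in player and the file path is a placeholder, so substitute e.g. `mpg123` on Linux):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "hooks": [
          { "type": "command", "command": "afplay ~/sounds/meow.mp3" }
        ]
      }
    ]
  }
}
```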
Cornell Notes
Claude Code hooks let users register shell commands that run at specific points in an AI agent’s lifecycle—pre-tool use, post tool use, notification, and stop. The demo shows a “stop” hook sending a Twilio SMS when Claude Code finishes a project, proving long tasks can notify users automatically. Hooks also enable post-generation quality workflows, including generating and running unit tests for a prime checker. A more advanced example uses hooks to stage and commit changes to a GitHub repo, so iterative edits (code and README updates) become new commits without manual `git` steps. The same mechanism can trigger side effects like playing an MP3 after each tool call via a post tool use hook.
How does the “stop” hook create a real-world notification when Claude Code finishes work?
What does the post tool use hook enable beyond notifications?
Why is the GitHub staging/commit hook useful in this workflow?
What operational detail is required for hooks to work reliably?
How can hooks provide immediate feedback during tool execution?
Review Questions
- What are the four hook timing categories mentioned, and which ones were demonstrated with Twilio SMS and MP3 playback?
- Describe the sequence of events in the GitHub automation demo: what changes were made, how were they staged, and how did commits show up in the repo?
- In the prime checker example, what inputs were used to validate correctness, and how did the hook-related workflow produce the test results?
Key Points
1. Claude Code hooks register shell commands that run at specific agent lifecycle moments (pre-tool use, post tool use, notification, stop).
2. A stop hook can send a Twilio SMS when Claude Code completes a task, turning long agent runs into push notifications.
3. Post tool use hooks can trigger automated follow-ups such as generating and running unit tests for newly created Python code.
4. Hooks can automate Git workflows—staging changes and creating commits/pushes—helping compensate for the lack of built-in checkpoints.
5. Hook configuration in settings.json may require restarting Claude Code before the commands take effect.
6. Hooks can trigger non-text side effects, including playing an MP3 after tool calls via a post tool use hook.