
Claude Code Hooks is AMAZING: "Text Message Me When AI Agent is Done"

All About AI·
4 min read

Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Claude Code hooks register shell commands that run at specific agent lifecycle moments (pre-tool use, post tool use, notification, stop).

Briefing

Claude Code’s new “hooks” feature lets developers attach custom shell-command automation to key moments in an AI agent’s workflow—so tasks can trigger real-world notifications, tests, and even Git operations without manual babysitting. In the most practical demo, a “stop” hook fires whenever Claude Code finishes, sending a Twilio text message to a phone. To prove it works end-to-end, the creator runs an agent to generate a small ping-pong simulator web project; once the agent completes, the phone receives a message confirming the task is done. For long-running agent work, that kind of push notification matters because it turns “check back later” into “get alerted immediately when something changes.”
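The video doesn’t show the exact configuration, but a Stop hook of this kind might look roughly like the following in settings.json, with a curl call to Twilio’s Messages API. The phone numbers are placeholders and the credentials are assumed to live in environment variables:

```json
{
  "hooks": {
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -X POST \"https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Messages.json\" --data-urlencode 'Body=Claude Code has finished the task' --data-urlencode 'From=+15550100' --data-urlencode 'To=+15550101' -u \"$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN\""
          }
        ]
      }
    ]
  }
}
```

Because the hook is just a shell command, any notification channel (Slack webhook, email, desktop notification) could be swapped in the same way.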

The hook system also supports “post tool use” automation, enabling immediate follow-up actions after the agent calls a tool. One example generates unit tests automatically after Python code is produced: after creating a simple prime-checking tool, the agent writes a corresponding test file (test.py) and the workflow verifies correct behavior for prime inputs like 2 and 17 and non-primes like 1 and 4. The point isn’t that every project should auto-generate tests this way, but that hooks make it possible to chain quality checks directly into the agent’s output loop.
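The video doesn’t show the generated source, but the prime-checking tool and its auto-generated tests would look something like this minimal sketch (the function name `is_prime` is an assumption):

```python
# Hypothetical sketch of the agent-generated prime checker;
# the exact code in the demo is not shown on screen.

def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:  # only need to check divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True

# The hook-generated test file (test.py) checks the inputs from the demo:
assert is_prime(2) and is_prime(17)          # primes
assert not is_prime(1) and not is_prime(4)   # non-primes
print("all tests passed")
```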

Next comes a more operational use case: using hooks to stage and commit changes to a GitHub repository. Because Claude Code lacks checkpoints, the creator sets up a hook that runs shell commands to automatically `git add` changes and push them. After restarting Claude Code to ensure the hook configuration in settings.json takes effect, the agent modifies prime-checking code; the repo status shows files staged and ready to commit. The workflow then expands to multiple iterations—adding a README, updating its author line, and refreshing the repository to confirm new commits appear. The demo even includes a “revert” style idea: using a hook-driven workflow to roll the repo back to an earlier commit, effectively acting like an automated checkpoint.
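A Git-checkpoint hook along these lines could be wired to the post tool use event so every file edit produces a commit. This is a sketch, not the creator’s exact settings.json; the matcher pattern and commit message are assumptions:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "git add -A && git commit -m 'auto: checkpoint after agent edit' --quiet || true"
          }
        ]
      }
    ]
  }
}
```

The trailing `|| true` keeps the hook from failing when there is nothing to commit; a `git push` could be appended for the push behavior shown in the demo.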

Finally, hooks can trigger multimedia feedback. A “post tool use” hook plays an MP3 sound generated with ElevenLabs whenever the agent completes a tool call—so the creator hears a cat meow after running a small Python update. The overall takeaway is that hooks turn Claude Code from a purely interactive coding assistant into an automation layer: notifications for completion, tests for correctness, Git for version control hygiene, and audio/other side effects for immediate, human-friendly feedback.
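An audio hook of this kind is a one-liner in the config. The sketch below assumes macOS’s `afplay` and a local MP3 path (on Linux, a player like `mpg123` would be the equivalent):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "afplay ~/sounds/cat-meow.mp3"
          }
        ]
      }
    ]
  }
}
```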

Cornell Notes

Claude Code hooks let users register shell commands that run at specific points in an AI agent’s lifecycle—before tool use, after tool use, on notifications, or when the agent stops. The demo shows a “stop” hook sending a Twilio SMS when Claude Code finishes a project, proving long tasks can notify users automatically. Hooks also enable post-generation quality workflows, including generating and running unit tests for a prime checker. A more advanced example uses hooks to stage and commit changes to a GitHub repo, so iterative edits (code and README updates) become new commits without manual `git` steps. The same mechanism can trigger side effects like playing an MP3 after each tool call via a post tool use hook.

How does the “stop” hook create a real-world notification when Claude Code finishes work?

A stop hook is configured with a curl command that calls Twilio to send a text message. The hook runs every time Claude Code stops or completes a task. In the demo, Claude Code generates a small ping-pong simulator HTML/CSS project; once the agent completes, the phone receives a Twilio SMS confirming the task is done.

What does the post tool use hook enable beyond notifications?

Post tool use hooks let automation run immediately after the agent calls a tool. One showcase uses this to generate a unit test file for newly created Python code. After creating a prime-checking function, the hook-driven workflow produces test.py, then the tests verify primes (2, 17) and non-primes (1, 4), with the test passing successfully.

Why is the GitHub staging/commit hook useful in this workflow?

The creator highlights that Claude Code doesn’t provide checkpoints. A hook can compensate by automatically staging and committing changes to a repository. After restarting Claude Code so the settings.json hook configuration takes effect, the agent edits files (e.g., prime checker code), and `git status` shows changes staged and ready to commit. Subsequent edits—like updating the README author—result in additional commits appearing after refresh.

What operational detail is required for hooks to work reliably?

The demo notes that Claude Code needed a restart after adding the hook configuration in settings.json; otherwise the hooks didn’t activate. After restarting, the hook began picking up repository changes and performing the configured git automation.

How can hooks provide immediate feedback during tool execution?

A post tool use hook can play an audio file. The creator uses ElevenLabs to generate a cat meow MP3 and configures the hook so that after each tool call completes, the cat sound plays—audible confirmation that the agent reached a tool-use milestone.

Review Questions

  1. What are the four hook timing categories mentioned, and which ones were demonstrated with Twilio SMS and MP3 playback?
  2. Describe the sequence of events in the GitHub automation demo: what changes were made, how were they staged, and how did commits show up in the repo?
  3. In the prime checker example, what inputs were used to validate correctness, and how did the hook-related workflow produce the test results?

Key Points

  1. Claude Code hooks register shell commands that run at specific agent lifecycle moments (pre-tool use, post tool use, notification, stop).

  2. A stop hook can send a Twilio SMS when Claude Code completes a task, turning long agent runs into push notifications.

  3. Post tool use hooks can trigger automated follow-ups such as generating and running unit tests for newly created Python code.

  4. Hooks can automate Git workflows—staging changes and creating commits/pushes—helping compensate for the lack of built-in checkpoints.

  5. Hook configuration in settings.json may require restarting Claude Code before the commands take effect.

  6. Hooks can trigger non-text side effects, including playing an MP3 after tool calls via a post tool use hook.

Highlights

A stop hook sends a Twilio text message every time Claude Code finishes, demonstrated by completing a ping-pong simulator project and receiving the SMS immediately.
Post tool use hooks can generate unit tests automatically; the prime checker demo validates primes (2, 17) and non-primes (1, 4) with a passing test.
A Git automation hook stages and commits repository changes automatically, producing new commits after README and code edits without manual git steps.
A post tool use hook plays an MP3 cat meow after tool execution, providing instant audio feedback during agent runs.

Topics

  • Claude Code Hooks
  • Twilio Notifications
  • Automated Unit Tests
  • GitHub Auto-Commit
  • Post Tool Use Audio Alerts