3 Notion Devs Use a Kanban To Orchestrate Coding Agents

Notion · 5 min read

Based on Notion's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

Notion AI meeting notes can be converted into many structured Kanban task cards that trigger parallel coding runs.

Briefing

A Notion-based Kanban board orchestrates a parallel “coding jam” where AI meeting notes turn into ready-to-build tasks, each automatically shipped through Cursor cloud agents and GitHub pull requests—then validated via Cloudflare deploy previews. The core payoff is speed and iteration: multiple game ideas (manual raking, object previews, randomized variants, HD sanding, footsteps, streams/water, atmosphere) move from brainstorm to playable prototypes without leaving the Notion interface.

The workflow starts with a Zen garden builder game being played live in-browser. Players surface concrete UX and gameplay tweaks: object placement buttons need clearer “what happens next” cues and previews; rocks/shrubs/tea houses should show size/space before committing; raking should feel more satisfying (smoothing jitter, optional straight-line drawing via Shift); and the visual mood could shift from dark monotony to something more Zen—fog, day/night, rain, or other atmosphere. Others push for mechanics that create meaningful interaction, like manual raking by a tiny character that leaves footsteps, forcing the player to plan routes.

Those ideas get converted into tasks using Notion AI. Each task card gets an image generated through Notion’s built-in image generation and a custom “AI skill” that standardizes the art style and adds distinct accent colors so dozens of simultaneous tasks remain visually distinguishable. The board then acts like a factory: dragging cards into “ready for agent” triggers a custom Notion AI agent (“Zen Garden Builder”) that moves cards to “in progress” and kicks off Cursor cloud agents to implement the changes. Progress bars and status fields update automatically, giving a zoomed-out view of what’s happening across many parallel builds.

Under the hood, the custom agent is configured with triggers, a short prompt, and tools—critically including a Cursor integration (via API key) and GitHub connectivity (via MCP) for pull request handling. Deploy previews are hosted on Cloudflare Pages: for each PR branch, Cloudflare posts a deploy preview URL as a PR comment, and the Notion agent waits for that URL before populating it on the corresponding task card. Review becomes lightweight: instead of reading code, the team plays the deploy preview; if it passes, dragging the card to “done” merges the PR and ships to production. If something is close but not right, comments on the task feed back into the agent, returning the card to “in progress” for another coding pass.
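That handoff can be sketched in TypeScript. Everything below is hypothetical: the summary does not document Notion's or Cursor's actual API surfaces, so the two services are reduced to minimal interfaces purely to show the order of operations when a card lands in "ready for agent".

```typescript
// Hypothetical sketch only: `CursorClient` and `Board` stand in for the real
// Cursor cloud API and Notion database, which this summary does not document.
interface CursorClient {
  startRun(task: string): Promise<{ prNumber: number }>;
}
interface Board {
  setStatus(cardId: string, status: string): void;
  setPr(cardId: string, prNumber: number): void;
}

// When a card is dragged to "ready for agent": mark it in progress,
// kick off a cloud coding run, then link the resulting PR to the card.
async function onCardReady(
  cardId: string,
  task: string,
  cursor: CursorClient,
  board: Board,
): Promise<void> {
  board.setStatus(cardId, "in progress");
  const { prNumber } = await cursor.startRun(task);
  board.setPr(cardId, prNumber);
}
```

In the real setup these calls are wired through the agent's configured tools (the Cursor API key and the GitHub MCP connection) rather than hand-written code.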

The jam also demonstrates real-world friction and fixes. A merged change initially failed because the agent trigger didn’t fire on “done”; adjusting the trigger resolved future merges. Some builds failed due to Cloudflare issues, while other features landed quickly—like object placement ordering and a streams/water feature with peaceful dripping animation. When a visual change (HD sanding) produced a checkerboard-like texture and reduced Zen feel, the team pivoted to a “playground pattern” approach: parameterize the look with sliders so they can collaboratively tune numeric settings, then finalize.

By the end, the Kanban is complemented with alternative Notion views (grouping by status, focus on review-only items, potential embedding of deploy previews directly in cards). The result is a practical blueprint for orchestrating agent-driven development: brainstorm in natural language, convert to structured tasks, run parallel coding sessions, validate through hosted previews, and iterate—all while staying inside Notion’s collaborative workspace.

Cornell Notes

A Notion Kanban board coordinates a “factory” of AI coding agents: meeting notes become task cards, each card triggers a Cursor cloud coding run, and GitHub pull requests are validated through Cloudflare Pages deploy previews. The system keeps work inside Notion—dragging cards moves them through ready/in-progress/review/done, and “done” merges PRs to production. Task cards include generated images (standardized via a custom Notion AI skill) so many parallel ideas remain distinguishable. Progress and summaries are updated by the managing agent, and comments on a task can send new requirements back into the coding loop. The workflow also shows how to debug agent automation when triggers don’t fire as expected.

How does the Kanban board turn a brainstorm into code changes without leaving Notion?

Meeting notes are recorded, then Notion AI converts them into multiple task cards on a Zen garden tasks board. Each card is dragged into a “ready for agent” column, where a custom Notion AI agent (“Zen Garden Builder”) automatically moves it to “in progress” and starts a Cursor cloud agent to implement the requested feature. As the coding run proceeds, the board shows progress bars and status fields, and the task ends up linked to a GitHub PR and a Cloudflare deploy preview for human review.

What makes the deploy-preview step work reliably for each task card?

Deploy previews are generated per PR branch by Cloudflare Pages. Cloudflare posts the preview URL as a comment on the GitHub pull request. The Notion agent is set up to find the PR, wait for the Cloudflare comment containing the deploy preview URL, and then write that URL onto the corresponding task card. Review then becomes: open the preview, play the game change, and decide whether to merge.
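Assuming Cloudflare Pages previews live on `*.pages.dev` subdomains and the PR comment bodies are available as plain text (reasonable assumptions, but not shown in the source), the "wait for the preview URL" step reduces to scanning comments until a match appears:

```typescript
// Hypothetical helper: find a Cloudflare Pages deploy-preview URL in a list
// of GitHub PR comment bodies. Preview URLs take the form
// https://<deploy-hash>.<project>.pages.dev
function extractPreviewUrl(commentBodies: string[]): string | null {
  const pattern = /https:\/\/[a-z0-9-]+\.[a-z0-9-]+\.pages\.dev\S*/i;
  for (const body of commentBodies) {
    const match = body.match(pattern);
    if (match) return match[0];
  }
  return null; // no preview comment yet; the agent polls again later
}
```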

How does the system handle iteration when a change is “close but not right”?

Instead of reopening code locally, the team uses task-level comments. When feedback is added to a task’s thread, the managing agent sends updated instructions back into the Cursor run. The card returns to “in progress,” triggering another coding pass. This loop continues until the preview matches the desired behavior or look.

Why are generated images on task cards more than decoration in this workflow?

With many tasks running in parallel, visual differentiation prevents confusion. The images are generated in a consistent style using a custom Notion AI skill, while each task also gets a distinct random accent color. The result is a memory aid for the human reviewers: it's easier to recall which preview corresponds to which idea when the cards carry distinct but stylistically standardized artwork.

What automation bug surfaced during merging, and how was it fixed?

A PR merge didn’t happen when a card was moved to “done,” because the agent trigger was configured to react only to tasks entering “ready for agent” or “in progress,” not when they became “done.” Updating the agent configuration to include “done” as a trigger restored the expected behavior: moving to done then merged the PR and shipped the change.
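A toy model of that trigger logic (status names taken from the summary; the code itself is invented) makes the bug and its fix concrete: "done" simply has to be in the list of statuses that wake the agent.

```typescript
type Status = "backlog" | "ready for agent" | "in progress" | "review" | "done";

// Before the fix this list lacked "done", so moving a card to done
// never woke the agent and the PR was never merged.
const TRIGGER_STATUSES: Status[] = ["ready for agent", "in progress", "done"];

function shouldFireAgent(newStatus: Status): boolean {
  return TRIGGER_STATUSES.includes(newStatus);
}
```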

How did the team respond when a visual feature didn’t match the Zen aesthetic?

HD sanding produced a texture that looked like a checkerboard pattern and reduced Zen clarity. Rather than guessing further in prose, the team requested a “playground pattern”: parameterize the rock/texture look with numeric controls and add sliders so they can collaboratively tune the parameters live. The goal is to make aesthetic iteration measurable and adjustable.
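As an illustration of that "playground pattern" (the formula and parameter names here are invented for illustration, not taken from the video), the sand shading can be written as a pure function of slider-controlled numbers, so tuning becomes dragging sliders instead of rewriting prose:

```typescript
// Invented example: two slider-tunable parameters drive the sand look.
interface SandParams {
  rakeSpacing: number; // pixels between rake grooves
  grooveDepth: number; // 0..1, how strongly grooves darken the sand
}

// Brightness of the sand at horizontal position x (0 = dark, 1 = bright).
function sandShade(x: number, p: SandParams): number {
  const groove =
    Math.abs(Math.sin((x / p.rakeSpacing) * Math.PI)) * p.grooveDepth;
  return Math.min(1, Math.max(0, 0.9 - groove));
}
```

Binding `rakeSpacing` and `grooveDepth` to on-screen sliders lets the team converge on numbers together, then freeze them in the final commit.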

Review Questions

  1. If a task card is moved to “done” but no PR merges, what parts of the agent configuration should be checked first?
  2. Describe the chain from Notion task card → Cursor coding run → GitHub PR → Cloudflare deploy preview. Where does the Notion agent “wait” for information?
  3. Why might parameterized “playground” sliders be preferable to purely descriptive feedback for visual design changes?

Key Points

  1. Notion AI meeting notes can be converted into many structured Kanban task cards that trigger parallel coding runs.
  2. A custom Notion AI agent manages the board by moving cards through states and launching Cursor cloud agents for implementation.
  3. Cursor cloud sessions and GitHub PRs are integrated so “done” can merge changes to production after preview-based review.
  4. Cloudflare Pages deploy previews are linked to PRs via PR comments, and the Notion agent populates preview URLs onto task cards.
  5. Task cards use Notion image generation plus a custom AI skill to create consistent, distinguishable visuals for fast human review.
  6. Agent automation can fail if triggers are misconfigured; updating triggers (e.g., to include “done”) restores merge behavior.
  7. When visual output is hard to specify, parameterized sliders (“playground pattern”) make collaborative tuning faster than prose-only iteration.

Highlights

Drag a task to “ready for agent” and it automatically becomes “in progress,” starts a Cursor cloud coding run, and ends up with a playable Cloudflare deploy preview linked back on the card.
Review is intentionally code-light: play the deploy preview, then drag the card to “done” to merge the PR and ship.
The workflow’s reliability depends on the PR→Cloudflare comment→preview URL loop, which the Notion agent waits for before updating the card.
A merge failure traced back to a missing “done” trigger shows how agent orchestration needs careful state-based automation.
When HD sanding looked wrong, the team switched from descriptive feedback to a slider-driven playground to tune the aesthetic with numeric parameters.

Topics

Mentioned

  • PR (pull request)
  • MCP (Model Context Protocol)