
Monthly Reviews In Notion — Master Level Life Alignment (Life OS)

August Bradley · 5 min read

Based on August Bradley's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Monthly reviews shift from tactical execution to strategic alignment by ensuring the right projects are active for the goal outcomes that matter.

Briefing

Monthly reviews in Notion are positioned as the strategic bridge between weekly execution and long-term aspirations: they ensure the “right things” are queued up so day-to-day work doesn’t drift into efficiency without direction. Weekly reviews handle tactical alignment—what tasks to do now—while monthly reviews shift attention to where to go, why it matters, and which projects should be active to deliver measurable progress toward higher-level goal outcomes. The core priority is not productivity for its own sake, but choosing the correct targets; doing the right work inefficiently beats doing the wrong work at maximum speed.

The system is built around a connected set of databases and review cycles inside a “command center” flow. Daily tracking rolls into weekly insights, which then roll into monthly and quarterly review dashboards. The monthly review centers on goal outcomes: it is where progress on goal outcomes is checked, projects are confirmed and reprioritized, and the relationships between value goals, goal outcomes, and execution mechanisms are kept intact. A separate “alignment zone” provides guiding principles—core values and identity—so the monthly strategy stays consistent with who the person wants to become.

A new month starts with a theme and a single prompt: what would make the month awesome if achieved. The rest of the month’s content is designed to populate automatically from weekly work. During weekly reviews, accomplishments, disappointments, focus priorities, gratitude, and effectiveness ratings are tagged to the month. At month-end, rollups pull those weekly entries into the monthly dashboard, giving a fast, at-a-glance view of overall effectiveness, what received attention, and how the month felt in terms of wins and setbacks. Action items are also available, filtered to the month, though the emphasis is on reviewing rather than re-entering data.

After the rollup-driven recap, the monthly checklist moves into assessment and maintenance: breakthroughs, discoveries, and improvements; pillar review; habits and routines tweaks; and quick pattern checks from daily tracking. The review also includes refining mindset/identity sculpting rituals and verifying that goals and goal outcomes remain linked correctly. A key control is a “projects count” field for each goal outcome—if a goal outcome has zero queued projects, the system flags a gap, since progress should come through projects and/or habits and routines.

The final part of the monthly review is “vaults,” a knowledge-management cleanup: processing book notes, transferring relevant Evernote items into Notion vaults, emptying computer trash bins, and optionally reviewing bookkeeping via Hubdoc integrated with Xero. Every third month, a shorter quarterly review reduces repeated work and follows a “12-week year” logic: debrief what worked and what didn’t, check whether quarters are on track, update quarter assignments and timelines, realign pillars with value goals, and force “someday maybe” items into a decision—turn them into active work or let them go.

Underlying the whole cadence is an Eisenhower-matrix framing: urgent tasks keep the lights on, but monthly and quarterly reviews protect time for important, non-urgent work that actually changes direction. The process is meant to make that higher-level work unavoidable—so life doesn’t become a treadmill of short-term reactions.

Cornell Notes

Monthly reviews turn weekly execution into strategic progress by aligning goal outcomes with the projects, habits, and routines that can actually deliver them. The month begins with a theme and an “awesome” target, while most of the dashboard fills automatically from weekly reviews through tagging and rollups (effectiveness ratings, focus priorities, accomplishments, disappointments, gratitude). The checklist then forces a short assessment cycle: breakthroughs/discoveries/improvements, pillar and routine maintenance, pattern checks from daily tracking, and—most importantly—relinking value goals to measurable goal outcomes and ensuring each goal outcome has active support (via a projects count). Quarterly reviews compress the same logic into a 12-week rhythm, including a decision step for “someday maybe” items so they don’t drift forever.

Why does a monthly review matter if weekly reviews already exist?

Weekly reviews keep tasks and projects aligned to execution. Monthly reviews raise the level of thinking: they decide what should be worked on next and ensure the “right things” are queued. The monthly cadence connects less-frequent aspiration/reflection to ongoing day-to-day work, so efficiency doesn’t replace direction. It also maintains “pillars” (ongoing structures that keep life stable) while strategy focuses on goal outcomes and the projects that deliver them.

How does the system avoid re-entering everything at month-end?

During each weekly review, entries like accomplishments, disappointments, focus priorities, gratitude, and effectiveness ratings are tagged to the month. At month-end, rollups pull those tagged weekly records into the monthly dashboard automatically. The result is a fast recap: overall effectiveness for the month, what received attention week to week, and summarized wins and setbacks—without manually rebuilding the month.
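The tag-then-rollup flow can be modeled outside Notion as a simple grouping step. This is a minimal sketch, not Notion's actual schema: the field names (`month`, `effectiveness`, `accomplishments`, `disappointments`) are assumptions based on the description above.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class WeeklyReview:
    # Fields mirror what a weekly review tags to its month (assumed names).
    month: str
    effectiveness: int                  # self-rated effectiveness for the week
    accomplishments: list[str] = field(default_factory=list)
    disappointments: list[str] = field(default_factory=list)

def monthly_rollup(weeks: list[WeeklyReview], month: str) -> dict:
    """Pull every weekly entry tagged to `month` into one recap, the way a
    Notion relation plus rollup aggregates related rows at month-end."""
    tagged = [w for w in weeks if w.month == month]
    return {
        "avg_effectiveness": mean(w.effectiveness for w in tagged),
        "wins": [a for w in tagged for a in w.accomplishments],
        "setbacks": [d for w in tagged for d in w.disappointments],
    }

weeks = [
    WeeklyReview("2024-05", 7, ["shipped draft"]),
    WeeklyReview("2024-05", 9, ["closed project"], ["missed workout"]),
    WeeklyReview("2024-06", 5, [], ["slipped deadline"]),
]
recap = monthly_rollup(weeks, "2024-05")
```

Because the weekly entries were tagged as they were written, the month-end recap is a read-only aggregation rather than a re-entry exercise.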

What is the role of “value goals” and “goal outcomes” in the monthly checklist?

Value goals represent aspirations (meaningful “why”), while goal outcomes are measurable, trackable objectives (quantifiable “what”). The system keeps them linked: value goals connect to goal outcomes, and goal outcomes connect back to the value goals that give them purpose. The monthly review updates statuses, timelines, and whether new goals should be added, then verifies that the relationships still reflect current priorities.

What does the “projects count” field do, and why is it a safeguard?

Each goal outcome has a calculated “projects count” showing how many projects are queued to deliver it. If a goal outcome shows zero, the system flags a structural gap: progress should come through at least one project or through habits and routines that advance the goal outcome. This prevents a common failure mode where goals exist on paper but have no execution pathway.
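The safeguard amounts to a count over related rows plus a filter for zeros. A minimal sketch of that logic, with status names and the `habits` fallback assumed from the description rather than taken from the actual template:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    status: str  # assumed values: "queued", "in progress", "someday maybe"

@dataclass
class GoalOutcome:
    name: str
    projects: list[Project] = field(default_factory=list)
    habits: list[str] = field(default_factory=list)

    @property
    def projects_count(self) -> int:
        # Mirrors the calculated rollup counting queued/active projects.
        return sum(p.status in ("queued", "in progress") for p in self.projects)

def flag_gaps(outcomes: list[GoalOutcome]) -> list[str]:
    """Return goal outcomes with no execution pathway: zero counted
    projects and no supporting habits or routines."""
    return [o.name for o in outcomes
            if o.projects_count == 0 and not o.habits]

outcomes = [
    GoalOutcome("Run a 10k", habits=["morning run"]),
    GoalOutcome("Launch course", projects=[Project("Record module 1", "queued")]),
    GoalOutcome("Write book"),  # no projects, no habits -> flagged
]
```

A "someday maybe" project deliberately does not count, so parking a project there still surfaces the gap.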

What changes in the quarterly review, and how does the “12-week year” idea affect it?

Quarterly reviews are shorter and strip out work that would otherwise repeat in two of every three months. The quarterly rhythm follows a “12-week year” mindset: treat annual evaluation as a three-month cycle to create urgency and momentum. The quarterly debrief asks what worked and what didn’t, checks whether projects and goal outcomes are on track for their quarter review dates, and updates timelines and assignments accordingly.

How are “someday maybe” items handled so they don’t disappear into the future?

In weekly and monthly project views, “someday maybes” are filtered out because they’re not actionable. The quarterly review includes a toggle that surfaces items marked “someday maybe,” then prompts a decision: is today that day to turn them into active work (in progress or queued), or should they remain hypothetical? This forces periodic cleanup so they don’t drift indefinitely.
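The two views are just opposite filters over the same status field, with the quarterly decision flipping that status. A sketch under assumed status names (the source only says items become active work or remain deferred; "released" is a hypothetical label for a conscious let-go):

```python
projects = [
    {"name": "Ship newsletter", "status": "in progress"},
    {"name": "Learn piano", "status": "someday maybe"},
    {"name": "Write memoir", "status": "someday maybe"},
]

def weekly_view(projects):
    """Weekly and monthly views hide 'someday maybe' items as non-actionable."""
    return [p for p in projects if p["status"] != "someday maybe"]

def quarterly_surface(projects):
    """The quarterly toggle does the opposite: show only deferred items
    so each one gets an explicit activate-or-release decision."""
    return [p for p in projects if p["status"] == "someday maybe"]

def decide(project, activate):
    # "queued" and "released" are assumed labels for the two outcomes.
    project["status"] = "queued" if activate else "released"
    return project
```

Either outcome removes the item from the someday-maybe pool, which is what prevents indefinite drift.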

Review Questions

  1. When you create a new month, what two inputs set the direction, and how does the rest of the month populate automatically?
  2. How does the monthly review verify that goal outcomes have an execution pathway (projects/habits/routines) rather than just a status label?
  3. What decision does the quarterly review require regarding “someday maybe” items, and how does that prevent long-term drift?

Key Points

  1. Monthly reviews shift from tactical execution to strategic alignment by ensuring the right projects are active for the goal outcomes that matter.
  2. Weekly review outputs (accomplishments, disappointments, focus priorities, gratitude, effectiveness ratings) are tagged to months so monthly dashboards can populate via rollups.
  3. A month starts with a theme and an “awesome” prompt, then ends with a structured assessment: breakthroughs, discoveries, and improvements.
  4. Goal outcomes are kept meaningful by maintaining the link between value goals (aspirations) and goal outcomes (measurable objectives), including timeline and status updates.
  5. The “projects count” safeguard highlights goal outcomes with zero queued projects, prompting the addition of projects or supporting habits/routines.
  6. Quarterly reviews compress repeated work and use a 12-week year approach to create urgency, check on-track progress, and realign pillars with value goals.
  7. “Someday maybe” items receive a quarterly decision so they either become active work or are consciously deprioritized, preventing indefinite limbo.

Highlights

Monthly reviews exist to prevent a common productivity trap: doing work efficiently without ensuring it’s the right work for the desired direction.
Most month-end recap data is automated—tag weeks during weekly reviews, then roll them up into the monthly dashboard for effectiveness, focus, wins, and setbacks.
A goal outcome with zero queued projects is treated as a structural problem, not a neutral status—progress must be supported by projects and/or habits/routines.
Quarterly reviews follow a 12-week year logic to avoid the “too much time” problem of annual planning and to force timely course correction.
Quarterly “someday maybe” review turns vague future ideas into either actionable plans or deliberate release.