
Mastery List GPT: Chat with your ToDO List | Time Management and Habits with ChatGPT and LangChain

Venelin Valkov · 5 min read

Based on Venelin Valkov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Habit scheduling starts from a Google Sheet loaded as CSV into a pandas DataFrame, then filtered by a selected day of the week in the Streamlit UI.

Briefing

A Streamlit app can turn a simple habit spreadsheet into a working daily schedule—and then let users revise it through chat—by combining ChatGPT-style reasoning with LangChain-driven prompt pipelines. The core workflow starts with habit data (habit name, applicable days or specific dates, preferred start time expressed as exact times or day parts like “morning/afternoon/evening,” and duration in minutes). The app reads this data from a Google Sheet, generates a schedule for a chosen day of the week, and displays the resulting to-do list with checkboxes.
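The loading step can be sketched as follows. The column names and sample rows here are illustrative (the source does not show the exact sheet layout), and an in-memory CSV stands in for the Google Sheets export URL, which in the real app would look like `https://docs.google.com/spreadsheets/d/<sheet-id>/export?format=csv`:

```python
import io
import pandas as pd

# Stand-in for the sheet's CSV export; column names are assumptions,
# not taken verbatim from the project.
CSV_DATA = """\
habit,days,preferred_time,duration_minutes
Book reading,daily,evening,30
Workout,Monday,07:00,60
Walk the dog,Monday,afternoon,20
"""

def load_habits(csv_source) -> pd.DataFrame:
    """Load the habit sheet into a pandas DataFrame."""
    return pd.read_csv(csv_source)

habits = load_habits(io.StringIO(CSV_DATA))
print(habits.shape)  # (3, 4)
```

In the real app, the same `load_habits` call would receive the export URL instead of a `StringIO` buffer, since `pd.read_csv` accepts both.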

The key technical challenge is getting the model to respect day-specific constraints reliably. A common failure mode appears when a single prompt tries to both decide what belongs on a given day and assign times: habits that should recur daily (like “book reading every evening”) can be omitted. The solution implemented here splits scheduling into two separate prompts inside the LangChain pipeline. First, the system asks the model to decide which habits should be scheduled for the selected day, returning a comma-separated list of habit names. That output is parsed and filtered so only the eligible habits proceed.
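The first stage can be sketched like this; the template wording is illustrative (the project's actual prompt text is not quoted in the source), and the LLM call is omitted so the parsing step stands alone:

```python
# Stage 1: ask the model which habits apply to a given day, then parse
# its comma-separated reply defensively. Template wording is an assumption.
ELIGIBILITY_TEMPLATE = (
    "You are scheduling habits for {day}.\n"
    "Here are the habits and the days they apply to:\n{habit_lines}\n"
    "Reply ONLY with a comma-separated list of the habit names "
    "that should be scheduled on {day}."
)

def parse_habit_list(reply: str, known_habits: list[str]) -> list[str]:
    """Split 'a, b, c' and keep only names that exist in the sheet."""
    names = [part.strip() for part in reply.split(",")]
    return [n for n in names if n in known_habits]

# A model reply mentioning an unknown habit gets filtered out:
known = ["Book reading", "Workout", "Walk the dog"]
reply = "Book reading, Workout, Meditation"
print(parse_habit_list(reply, known))  # ['Book reading', 'Workout']
```

Filtering against the known habit names is what makes the parse safe: hallucinated entries simply drop out before the second prompt runs.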

Second, a follow-up prompt schedules only the filtered habits using each habit’s preferred time and duration. The prompt instructs the model to act like a “personal assistant” that assigns start and end times in 24-hour format, ensuring each habit is scheduled exactly once and sorted by start time. The resulting markdown-style schedule is then parsed into structured to-do items using regular expressions that extract the habit name and the computed time window.
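A minimal parsing sketch, assuming schedule lines shaped like `HH:MM - HH:MM: habit name` (the exact output format is not shown in the source):

```python
import re

# Matches lines like "07:00 - 08:00: Workout" in the model's output.
SCHEDULE_LINE = re.compile(
    r"(?P<start>\d{2}:\d{2})\s*-\s*(?P<end>\d{2}:\d{2})[:\s]+(?P<habit>.+)"
)

def parse_schedule(text: str) -> list[dict]:
    """Extract (habit, start, end) items and sort them by start time."""
    items = []
    for line in text.splitlines():
        m = SCHEDULE_LINE.search(line)
        if m:
            items.append({"habit": m["habit"].strip(),
                          "start": m["start"], "end": m["end"]})
    # Zero-padded HH:MM strings sort correctly lexicographically.
    return sorted(items, key=lambda item: item["start"])

output = """Here is your schedule:
19:00 - 19:30: Book reading
07:00 - 08:00: Workout
"""
for item in parse_schedule(output):
    print(item["start"], item["habit"])
# 07:00 Workout
# 19:00 Book reading
```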

Beyond one-time generation, the app supports interactive updates: users can type instructions like “move book reading 30 minutes after workouts,” or “delete walk the dog.” The update flow again uses a LangChain chain: it feeds the current to-do list (formatted with 24-hour start/end times) plus the user’s natural-language request into the model, then requests an updated schedule at the end. The output is parsed back into a new to-do list, which immediately updates the UI.
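The update flow reduces to serializing the current list and appending the user's request; a sketch with an assumed template (the project's exact wording is not quoted) and the model call left out:

```python
# Serialize the current to-do list with 24-hour times, append the user's
# natural-language request, and ask for the full updated schedule at the end.
# Template wording is illustrative.
UPDATE_TEMPLATE = (
    "Current schedule:\n{schedule}\n\n"
    "Request: {request}\n"
    "Think step by step, then output the full updated schedule at the end, "
    "one line per habit as 'HH:MM - HH:MM: habit name'."
)

def format_todos(todos: list[dict]) -> str:
    return "\n".join(f"{t['start']} - {t['end']}: {t['habit']}" for t in todos)

todos = [
    {"habit": "Workout", "start": "07:00", "end": "08:00"},
    {"habit": "Book reading", "start": "19:00", "end": "19:30"},
]
prompt = UPDATE_TEMPLATE.format(
    schedule=format_todos(todos),
    request="move book reading 30 minutes after workout",
)
print(prompt.splitlines()[1])  # 07:00 - 08:00: Workout
```

The model's reply is then run through the same schedule parser as the initial generation, so the UI update is just a re-parse and re-render.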

The transcript also emphasizes practical prompt engineering and parsing. Even small changes in prompts or parameters can yield “strange results,” so the project relies on strict output formatting instructions, example-driven templates, and “think step by step” prompting to improve accuracy—at the cost of extra tokens. Parsing is done defensively by iterating through model output lines in reverse to capture the final schedule when multiple candidate placements appear.
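The reverse-iteration idea can be sketched as a last-occurrence-wins scan, assuming the same `HH:MM - HH:MM: habit` line shape used above:

```python
import re

LINE_RE = re.compile(r"(\d{2}:\d{2})\s*-\s*(\d{2}:\d{2})[:\s]+(.+)")

def final_placements(output: str) -> dict[str, tuple[str, str]]:
    """Walk the output bottom-up so the model's final answer wins when a
    habit appears more than once (e.g. inside step-by-step working)."""
    placements: dict[str, tuple[str, str]] = {}
    for line in reversed(output.splitlines()):
        m = LINE_RE.search(line)
        if m:
            start, end, habit = m.group(1), m.group(2), m.group(3).strip()
            placements.setdefault(habit, (start, end))  # keep the LAST occurrence
    return placements

output = """Let's think step by step.
Book reading could go at 19:00 - 19:30: Book reading
Final schedule:
20:00 - 20:30: Book reading
"""
print(final_placements(output)["Book reading"])  # ('20:00', '20:30')
```

Because "think step by step" prompting produces intermediate candidate placements in the same output, scanning in reverse and keeping the first hit per habit name reliably captures the final schedule.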

A demo run shows the end-to-end effect: after asking to move “book reading” relative to workouts, the schedule shifts accordingly (e.g., moving the book reading start time to 14:30 instead of its original slot). The project is positioned as open source, with the Streamlit UI, helper functions for loading Google Sheets as CSV into pandas, and a dedicated scheduler module where the LangChain + ChatGPT logic lives. The result is a chat-controlled habit scheduler that turns spreadsheet inputs into actionable, editable daily plans.

Cornell Notes

The app builds a daily schedule from a Google Sheet of habits and then lets users modify that schedule through chat. It uses LangChain to orchestrate two-step scheduling: first, a prompt decides which habits apply to the selected day; second, another prompt assigns start/end times using each habit’s preferred time and duration. The model’s markdown output is parsed into structured to-do items via regex, and updates work by sending the current to-do list plus a natural-language instruction back through the model to produce a revised schedule. Splitting the task into “eligibility” and “timing” reduces common errors where daily habits get dropped when everything is requested in one prompt.

Why does the project split scheduling into two prompts instead of one?

A single prompt that both filters “which habits belong on this day” and assigns times can cause omissions—such as a habit that should recur daily (e.g., book reading every evening) not appearing in the Monday schedule. The implemented fix runs two stages: (1) ask the model to output a comma-separated list of habit names that should be scheduled for the selected day, then parse and filter; (2) schedule only those filtered habits by assigning start/end times based on preferred time and duration. This separation makes day constraints more reliable.

What does the habit data format include, and how does it drive scheduling?

Each habit row includes the habit name, the days/dates it should apply (e.g., “book reading daily” plus specific-date items like workout and walking the dog), preferred time (either exact times or day parts like morning/afternoon/evening), and duration in minutes. Two additional fields—easiness of completion and importance—are mentioned but not used in the scheduling logic shown. The scheduler uses preferred time and duration in the second prompt to compute start and end times.
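The row shape described above could be modeled as follows; the field names are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Habit:
    name: str
    days: str                       # "daily", a weekday, or specific dates
    preferred_time: str             # exact time ("07:00") or a day part ("evening")
    duration_minutes: int
    easiness: Optional[int] = None  # present in the sheet, unused by the scheduler
    importance: Optional[int] = None

book = Habit("Book reading", "daily", "evening", 30)
print(book.preferred_time, book.duration_minutes)  # evening 30
```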

How does the app turn model output into a usable to-do list?

After the model returns a markdown-style schedule table, the code parses it into structured items. For extracting scheduled habits, it expects comma-separated habit names and checks whether the scheduled set is a superset of the expected habits. For time extraction, it uses regular expressions to match the habit name and the computed start/end times in 24-hour format. Parsing iterates through output lines in reverse to capture the final schedule when the model includes multiple placements.
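The superset check mentioned above is a one-liner over sets; a sketch (function name is illustrative):

```python
# Verify that every habit deemed eligible in stage 1 actually appears
# in the schedule produced by stage 2.
def schedule_covers(expected: list[str], scheduled: list[str]) -> bool:
    return set(scheduled) >= set(expected)  # scheduled is a superset of expected

expected = ["Book reading", "Workout"]
print(schedule_covers(expected, ["Workout", "Book reading", "Walk the dog"]))  # True
print(schedule_covers(expected, ["Workout"]))  # False
```

A failed check signals that the model dropped a habit, which is exactly the failure mode the two-prompt split is designed to catch.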

How do chat-based updates work (e.g., moving or deleting tasks)?

When a user types an instruction, the app sends the current to-do list (with 24-hour start/end times) plus the question into a LangChain prompt template. The prompt asks for an updated schedule at the end, again using “think step by step.” The updated schedule is parsed back into a new to-do list, which immediately replaces the displayed schedule—so commands like “move take a bath after workout is complete 10 minutes” or “delete walk the dog” update times and remove items.

What prompt-engineering techniques are used to improve reliability?

The project relies on strict output-format instructions (e.g., comma-separated lists and markdown table templates), includes example formatting inside prompts, and uses “think step by step” to improve accuracy. It also acknowledges a tradeoff: more prompting means higher token cost, but results are more consistent than simpler one-shot prompting.

Review Questions

  1. How would you modify the two-prompt approach if you needed to support multiple occurrences of the same habit in one day (e.g., “water plants” twice)?
  2. What failure modes could arise from parsing markdown with regex, and how might you redesign the model output format to reduce parsing errors?
  3. Why does iterating through model output in reverse help, and what alternative strategy could ensure you always parse the final schedule?

Key Points

  1. Habit scheduling starts from a Google Sheet loaded as CSV into a pandas DataFrame, then filtered by a selected day of the week in the Streamlit UI.

  2. Day eligibility and time assignment are handled in separate LangChain prompts to prevent missing recurring habits when constraints are combined.

  3. The scheduler first returns a comma-separated list of habits that apply to the selected day, then parses and filters to keep only those habits.

  4. A second prompt assigns start/end times in 24-hour format using each habit’s preferred time and duration, with instructions to schedule each habit exactly once and sort by start time.

  5. Model outputs are converted into structured to-do items using parsing logic, including regular expressions for time windows.

  6. Chat updates work by sending the current to-do list plus a natural-language instruction back through the model to generate an updated schedule, then re-parsing it.

  7. Reliability depends heavily on strict output formatting and prompt structure, with “think step by step” improving accuracy at the cost of extra tokens.

Highlights

  • The most important reliability fix is splitting scheduling into two prompts: first decide which habits belong on the day, then assign times—preventing daily habits from being dropped.
  • Chat-based edits operate on the current structured to-do list (24-hour start/end times), enabling commands like “move after workout” and “delete task” to update the schedule immediately.
  • Parsing is treated as a first-class problem: markdown and comma-separated outputs are converted into structured items using regex and reverse iteration to capture the final model result.

Topics

Mentioned

  • GPT
  • LangChain
  • API
  • CSV
  • UI