Mastery List GPT: Chat with your To-Do List | Time Management and Habits with ChatGPT and LangChain
Based on Venelin Valkov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A Streamlit app can turn a simple habit spreadsheet into a working daily schedule—and then let users revise it through chat—by combining ChatGPT-style reasoning with LangChain-driven prompt pipelines. The core workflow starts with habit data (habit name, applicable days or specific dates, preferred start time expressed as exact times or day parts like “morning/afternoon/evening,” and duration in minutes). The app reads this data from a Google Sheet, generates a schedule for a chosen day of the week, and displays the resulting to-do list with checkboxes.
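The loading step can be sketched as follows. The column names (`habit`, `days`, `preferred_time`, `duration_minutes`) and the sample rows are assumptions for illustration, not the project's actual sheet layout; in the real app the CSV comes from a Google Sheets export URL rather than an inline string, and day eligibility is ultimately decided by the model rather than this simple string match.

```python
import pandas as pd
from io import StringIO

# Hypothetical habit sheet in the shape the briefing describes:
# habit name, applicable days, preferred start (exact time or day part), duration.
CSV_DATA = """habit,days,preferred_time,duration_minutes
Workout,Mon/Wed/Fri,07:00,45
Book reading,daily,evening,30
Walk the dog,daily,morning,20
Team sync,Tue,10:00,30
"""

def load_habits(csv_source: str) -> pd.DataFrame:
    """Load the habit sheet (exported as CSV) into a DataFrame."""
    return pd.read_csv(StringIO(csv_source))

def habits_for_day(df: pd.DataFrame, day: str) -> pd.DataFrame:
    """Naive pre-filter: keep habits whose 'days' column mentions the
    selected day's three-letter prefix or says 'daily'."""
    mask = df["days"].str.contains(day[:3], case=False) | df["days"].eq("daily")
    return df[mask]

habits = load_habits(CSV_DATA)
monday = habits_for_day(habits, "Monday")
print(list(monday["habit"]))  # ['Workout', 'Book reading', 'Walk the dog']
```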
The key technical challenge is getting the model to respect day-specific constraints reliably. A common failure mode appears when a single prompt tries to both decide what belongs on a given day and assign times: habits that should recur daily (like “book reading every evening”) can be omitted. The solution implemented here splits scheduling into two separate prompts inside the LangChain pipeline. First, the system asks the model to decide which habits should be scheduled for the selected day, returning a comma-separated list of habit names. That output is parsed and filtered so only the eligible habits proceed.
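A minimal sketch of that first eligibility step, with the prompt text and habit names invented for illustration (the real project sends the prompt through a LangChain chain; here the model reply is simulated). The defensive filter against `known_habits` reflects the summary's point that the comma-separated output is parsed and filtered before anything proceeds:

```python
# Hypothetical eligibility prompt; the actual wording in the project differs.
ELIGIBILITY_PROMPT = """You are a scheduling assistant.
Given the habits below and the days they apply to, list ONLY the habits
that should be scheduled on {day}, as a comma-separated list of names.

Habits:
{habits}
"""

def parse_eligible(response: str, known_habits: list[str]) -> list[str]:
    """Split the comma-separated reply and keep only names we actually know,
    guarding against the model inventing or rephrasing habits."""
    names = [part.strip() for part in response.split(",")]
    return [name for name in names if name in known_habits]

# Simulated model reply for a Monday; note the hallucinated "Meditation"
# gets filtered out because it is not in the known habit list.
reply = "Workout, Book reading, Walk the dog, Meditation"
known = ["Workout", "Book reading", "Walk the dog", "Team sync"]
print(parse_eligible(reply, known))  # ['Workout', 'Book reading', 'Walk the dog']
```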
Second, a follow-up prompt schedules only the filtered habits using each habit’s preferred time and duration. The prompt instructs the model to act like a “personal assistant” that assigns start and end times in 24-hour format, ensuring each habit is scheduled exactly once and sorted by start time. The resulting markdown-style schedule is then parsed into structured to-do items using regular expressions that extract the habit name and the computed time window.
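The regex extraction of that second step might look like the sketch below. The exact line format (`- HH:MM - HH:MM habit name`) is an assumption about the markdown the model is instructed to emit; the trailing re-sort is cheap insurance on top of the prompt's "sorted by start time" instruction:

```python
import re

# Example of the markdown-style schedule the second prompt is asked to produce.
schedule_text = """
- 20:00 - 20:30 Book reading
- 07:00 - 07:45 Workout
- 08:00 - 08:20 Walk the dog
"""

# Hypothetical line format: "- HH:MM - HH:MM habit name" (24-hour times).
LINE_RE = re.compile(r"-\s*(\d{2}:\d{2})\s*-\s*(\d{2}:\d{2})\s+(.+)")

def parse_schedule(text: str) -> list[dict]:
    """Turn the model's markdown schedule into structured to-do items."""
    todos = []
    for line in text.splitlines():
        match = LINE_RE.match(line.strip())
        if match:
            start, end, habit = match.groups()
            todos.append({"habit": habit, "start": start, "end": end})
    # 24-hour zero-padded times sort correctly as strings.
    return sorted(todos, key=lambda t: t["start"])

items = parse_schedule(schedule_text)
print(items[0])  # {'habit': 'Workout', 'start': '07:00', 'end': '07:45'}
```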
Beyond one-time generation, the app supports interactive updates: users can type instructions like “move book reading 30 minutes after workouts,” or “delete walk the dog.” The update flow again uses a LangChain chain: it feeds the current to-do list (formatted with 24-hour start/end times) plus the user’s natural-language request into the model, then requests an updated schedule at the end. The output is parsed back into a new to-do list, which immediately updates the UI.
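The update round trip, sketched under the same assumed line format (the model reply is simulated here; a real run would send the formatted list plus the request through the LangChain chain):

```python
import re

LINE_RE = re.compile(r"-\s*(\d{2}:\d{2})\s*-\s*(\d{2}:\d{2})\s+(.+)")

def format_todos(todos: list[dict]) -> str:
    """Render the current to-do list with 24-hour start/end times for the prompt."""
    return "\n".join(f"- {t['start']} - {t['end']} {t['habit']}" for t in todos)

def parse_todos(text: str) -> list[dict]:
    """Parse the model's updated schedule back into structured items."""
    return [
        {"start": m.group(1), "end": m.group(2), "habit": m.group(3)}
        for line in text.splitlines()
        if (m := LINE_RE.match(line.strip()))
    ]

current = [
    {"habit": "Workout", "start": "13:30", "end": "14:00"},
    {"habit": "Book reading", "start": "20:00", "end": "20:30"},
]
# Simulated reply to "move book reading 30 minutes after workouts".
reply = "- 13:30 - 14:00 Workout\n- 14:30 - 15:00 Book reading"
updated = parse_todos(reply)
print(updated[1]["start"])  # 14:30
```

Feeding the parsed result back into the UI state is what makes the schedule immediately editable again.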
The transcript also emphasizes practical prompt engineering and parsing. Even small changes in prompts or parameters can yield “strange results,” so the project relies on strict output formatting instructions, example-driven templates, and “think step by step” prompting to improve accuracy—at the cost of extra tokens. Parsing is done defensively by iterating through model output lines in reverse to capture the final schedule when multiple candidate placements appear.
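The reverse-iteration idea can be sketched like this: walk the output bottom-up, collect the final contiguous run of schedule-looking lines, and stop at its top edge so earlier step-by-step drafts are ignored. This is an illustration of the strategy, not the project's exact parsing code:

```python
import re

TIME_RE = re.compile(r"\d{2}:\d{2}\s*-\s*\d{2}:\d{2}")

def last_schedule_block(text: str) -> list[str]:
    """Collect the final contiguous block of lines containing a time window,
    scanning bottom-up so earlier 'thinking' drafts are skipped."""
    block = []
    for line in reversed(text.splitlines()):
        if TIME_RE.search(line):
            block.append(line.strip())
        elif block:
            break  # the final block has ended; ignore everything above it
    return list(reversed(block))

output = """Let me think step by step.
A first attempt: 09:00 - 09:30 Walk the dog

Final schedule:
- 07:00 - 07:20 Walk the dog
- 20:00 - 20:30 Book reading"""
print(last_schedule_block(output))
```

The blank line between the draft and the final block is what keeps the earlier candidate placement out of the result.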
A demo run shows the end-to-end effect: after asking to move “book reading” relative to workouts, the schedule shifts accordingly (e.g., moving the book reading start time to 14:30 instead of its original slot). The project is positioned as open source, with the Streamlit UI, helper functions for loading Google Sheets as CSV into pandas, and a dedicated scheduler module where the LangChain + ChatGPT logic lives. The result is a chat-controlled habit scheduler that turns spreadsheet inputs into actionable, editable daily plans.
Cornell Notes
The app builds a daily schedule from a Google Sheet of habits and then lets users modify that schedule through chat. It uses LangChain to orchestrate two-step scheduling: first, a prompt decides which habits apply to the selected day; second, another prompt assigns start/end times using each habit’s preferred time and duration. The model’s markdown output is parsed into structured to-do items via regex, and updates work by sending the current to-do list plus a natural-language instruction back through the model to produce a revised schedule. Splitting the task into “eligibility” and “timing” reduces common errors where daily habits get dropped when everything is requested in one prompt.
- Why does the project split scheduling into two prompts instead of one?
- What does the habit data format include, and how does it drive scheduling?
- How does the app turn model output into a usable to-do list?
- How do chat-based updates work (e.g., moving or deleting tasks)?
- What prompt-engineering techniques are used to improve reliability?
Review Questions
- How would you modify the two-prompt approach if you needed to support multiple occurrences of the same habit in one day (e.g., “water plants” twice)?
- What failure modes could arise from parsing markdown with regex, and how might you redesign the model output format to reduce parsing errors?
- Why does iterating through model output in reverse help, and what alternative strategy could ensure you always parse the final schedule?
Key Points
1. Habit scheduling starts from a Google Sheet loaded as CSV into a pandas DataFrame, then filtered by a selected day of the week in the Streamlit UI.
2. Day eligibility and time assignment are handled in separate LangChain prompts to prevent missing recurring habits when constraints are combined.
3. The scheduler first returns a comma-separated list of habits that apply to the selected day, then parses and filters to keep only those habits.
4. A second prompt assigns start/end times in 24-hour format using each habit’s preferred time and duration, with instructions to schedule each habit exactly once and sort by start time.
5. Model outputs are converted into structured to-do items using parsing logic, including regular expressions for time windows.
6. Chat updates work by sending the current to-do list plus a natural-language instruction back through the model to generate an updated schedule, then re-parsing it.
7. Reliability depends heavily on strict output formatting and prompt structure, with “think step by step” improving accuracy at the cost of extra tokens.