How I'm Using AI *WITH* My Obsidian Vault
Based on FromSergio's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Text Generator for Obsidian turns GPT-3.5 Turbo into a vault-aware assistant by using Obsidian note context rather than only typed chat prompts.
Briefing
The core breakthrough here is using OpenAI's GPT-3.5 Turbo inside an Obsidian vault through the Text Generator plugin, so AI output can draw from existing notes rather than only whatever text is typed into a chat box. That shift matters because it turns Obsidian from a personal knowledge store into an active writing and research assistant that can summarize, rewrite, and brainstorm using the same ecosystem where ideas already live.
Setup starts with installing Text Generator from Obsidian's Community plugins, then updating it, since the plugin has changed significantly. The plugin requires an OpenAI API key, which is created on OpenAI's site and pasted into the plugin settings. Cost is framed in tokens: roughly 1,000 tokens equals about 750 words. GPT-3.5 Turbo is positioned as a major improvement over prior options, cutting API cost to about 10% of what the most capable model used to cost while also being more capable. The video also emphasizes that the plugin itself is free, avoiding subscription middlemen that charge monthly and may limit usage.
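The token math above can be turned into a quick back-of-the-envelope estimator. This is an illustrative Python sketch, not part of the plugin: the ~1,000-tokens-per-750-words ratio comes from the video, and the default per-token price is an assumption you should replace with OpenAI's current published pricing.

```python
def estimate_cost(word_count: int, price_per_1k_tokens: float = 0.002) -> float:
    """Rough API cost in dollars for generating `word_count` words.

    Uses the video's rule of thumb: ~1,000 tokens ≈ 750 words.
    `price_per_1k_tokens` is an assumed rate; check OpenAI's pricing page.
    """
    tokens = word_count * 1000 / 750  # words → approximate tokens
    return tokens / 1000 * price_per_1k_tokens

# A 1,500-word article ≈ 2,000 tokens
print(f"${estimate_cost(1500):.4f}")  # → $0.0040
```

At the assumed rate, even long articles cost fractions of a cent, which is the economic point the video is making.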
In the plugin settings, the model selection is the first major lever. GPT-3.5 Turbo replaces earlier commonly used models (mentioned as "Curie" and "Davinci 3"). Additional controls include error callouts in the editor, a max-tokens setting (tied to a hotkey later), and temperature, described less as "creativity" and more as randomness. Temperature 0 yields repeatable answers (e.g., "my favorite animal is" completing with the same word every time), while higher values like 2 produce more varied, elaborate outputs. The frequency penalty reduces repeated words within a response.
The workflow then splits into two command styles: “generate text,” which takes plain instructions, and “template” workflows, which are where most of the power sits. Context handling is critical: the API can consider selected text, only the current cursor line, or everything before the cursor. That means the same prompt can behave differently depending on what part of a note is highlighted or where the cursor sits.
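The context rules above can be modeled as a small function. This is an illustrative Python sketch of the behavior the video describes, not the plugin's actual source (which is an Obsidian/TypeScript codebase); the priority order shown here is the assumption.

```python
def build_context(note_lines, cursor_line, selection=None):
    """Return the text sent to the API, per the video's description.

    - If text is selected, send only the selection.
    - Else, if the cursor line has text, send just that line.
    - Else (cursor on an empty line), send everything before the cursor.
    """
    if selection:
        return selection
    line = note_lines[cursor_line].strip()
    if line:
        return line
    return "\n".join(note_lines[:cursor_line])

note = ["# Travel ideas", "Visit Kyoto in autumn.", ""]
print(build_context(note, 1))           # just the cursor line
print(build_context(note, 2))           # everything before the empty line
print(build_context(note, 1, "Kyoto"))  # a selection always wins
```

This is why the same prompt behaves differently depending on what is highlighted or where the cursor sits: the context actually sent changes.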
Templates are installed via a template packages manager, with a default prompts package as the starting point. Once installed, templates appear as quick commands (often triggered via hotkeys). Examples include simplify (cuts word count while preserving meaning), rewrite (with customizable tone such as “serious and eloquent” or “quirky and fast-paced”), and summarize (useful for long saved articles). The most distinctive feature is “children notes” prompting: by enabling “include children inside considered contexts for templates,” a template can pull in linked notes from the vault. In practice, linking an article note and related highlight notes allows a “brainstorm ideas” template to generate ideas based on those connected sources, with more links producing richer connections.
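The "children notes" mechanism can be approximated in a few lines: find `[[wikilink]]` references in the active note and pull those notes' contents into the prompt context. The Python sketch below uses a hypothetical `{title: content}` vault mapping for illustration; the real plugin resolves links through Obsidian's own metadata APIs.

```python
import re

# Captures the note title from [[Note]], [[Note|alias]], and [[Note#heading]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def gather_children_context(note_text, vault):
    """Collect the contents of notes linked from `note_text`.

    `vault` is a hypothetical {title: content} mapping standing in for
    the actual Obsidian vault.
    """
    children = []
    for title in WIKILINK.findall(note_text):
        title = title.strip()
        if title in vault:
            children.append(f"## {title}\n{vault[title]}")
    return "\n\n".join(children)

vault = {
    "Article - Deep Work": "Focus is a scarce skill...",
    "Highlights - Deep Work": "Schedule every minute of the day.",
}
note = "Brainstorm from [[Article - Deep Work]] and [[Highlights - Deep Work]]."
print(gather_children_context(note, vault))
```

The more links the note contains, the more source material lands in the context, which is why the video reports richer connections as linking increases.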
The closing takeaway is a practical template strategy: simplify for clutter, summarize for quick recall of saved readings, rewrite for improving voice, and brainstorm (especially with linked children notes) for research and ideation. There’s also an ambition to eventually train a model on the vault itself so interacting with it feels like querying one’s own knowledge base.
Cornell Notes
Text Generator for Obsidian connects GPT-3.5 Turbo to an existing notes vault, letting AI generate outputs that use the context already stored in Obsidian. After installing the plugin and adding an OpenAI API key, users can control model choice, max tokens, randomness (temperature), and repetition (frequency penalty). Two interaction modes matter: "generate text" for quick prompts and "templates" for repeatable tasks like simplify, rewrite, and summarize. The standout capability is template context that can include "children notes" (linked notes), enabling brainstorming and synthesis based on multiple connected sources in the vault. This turns Obsidian into a knowledge-driven writing and research assistant rather than a standalone chat tool.
- Why does GPT-3.5 Turbo change the economics of using AI inside Obsidian?
- How does temperature affect the quality and consistency of generated text?
- What's the practical difference between "generate text" and template-based commands?
- How does the plugin decide what note content to send to the API?
- What does "children notes" prompting do, and how is it enabled?
- Why does the transcript recommend different templates for different writing tasks?
Review Questions
- When should a user rely on “selected text” versus “cursor line” versus “everything before the cursor” in Text Generator?
- How would you adjust temperature and frequency penalty to reduce repetitive phrasing while still allowing variation?
- What steps are required to make a template use linked “children notes,” and why can’t that behavior be used with the simpler generate-text command?
Key Points
1. Text Generator for Obsidian turns GPT-3.5 Turbo into a vault-aware assistant by using Obsidian note context rather than only typed chat prompts.
2. Create an OpenAI API key and paste it into Text Generator; GPT-3.5 Turbo is positioned as both more capable and far cheaper (about 10% of prior pricing).
3. Use temperature to control randomness: 0 yields consistent outputs, while higher values increase variation; the plugin default is about 0.7.
4. Control what the model sees by selecting text, using the cursor line, or leaving the cursor line empty to send everything before the cursor.
5. Install prompt packages and rely on templates for repeatable tasks like simplify, rewrite, and summarize.
6. Enable "include children inside considered contexts for templates" to let templates draw from linked notes, enabling richer brainstorming across the vault.
7. Keep the vault's authorship intact by using AI for targeted transformations (summaries, rewrites, ideation) rather than wholesale paragraph generation.