
How I'm Using AI *WITH* My Obsidian Vault

FromSergio · 5 min read

Based on FromSergio's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Text Generator for Obsidian turns GPT 3.5 turbo into a vault-aware assistant by using Obsidian note context rather than only typed chat prompts.

Briefing

The core breakthrough here is using OpenAI’s GPT 3.5 turbo inside an Obsidian vault through the Text Generator plugin—so AI output can draw from existing notes, not just whatever text gets typed into a chat box. That shift matters because it turns Obsidian from a personal knowledge store into an active writing and research assistant that can summarize, rewrite, and brainstorm using the same ecosystem where ideas already live.

Setup starts with installing Text Generator from Obsidian’s Community plugins, then updating it, since the plugin has changed significantly. The plugin requires an OpenAI API key, which is created on OpenAI’s site and pasted into the plugin settings. Cost is framed around tokens: roughly 1,000 tokens equal about 750 words. GPT 3.5 turbo is positioned as a major improvement over prior options, cutting API cost to about 10% of what the most capable model used to cost while also being more capable. The video also emphasizes that the plugin itself is free, avoiding subscription “middlemen” that charge monthly and may limit usage.
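
The token arithmetic above can be turned into a tiny estimator. This is a sketch based on the video's rule of thumb; the per-1K-token price is an illustrative assumption, not a current rate:

```python
# Rough cost estimator using the video's rule of thumb that
# ~1,000 tokens correspond to ~750 words.

PRICE_PER_1K_TOKENS = 0.002  # USD; illustrative gpt-3.5-turbo-era price (assumption)
WORDS_PER_1K_TOKENS = 750    # rule of thumb from the video

def estimate_tokens(word_count: int) -> int:
    """Approximate token count from a word count."""
    return round(word_count * 1000 / WORDS_PER_1K_TOKENS)

def estimate_cost(word_count: int) -> float:
    """Approximate USD cost of generating `word_count` words."""
    return estimate_tokens(word_count) / 1000 * PRICE_PER_1K_TOKENS

print(estimate_tokens(750))            # -> 1000
print(round(estimate_cost(7500), 4))   # ~10,000 tokens -> 0.02
```

Real tokenization varies by text, so treat these numbers as order-of-magnitude estimates.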

In the plugin settings, the model selection is the first major lever. GPT 3.5 turbo replaces earlier frequently used models (mentioned as “Curie” and “Davinci 3”). Additional controls include error callouts in the editor, a max-tokens setting (tied to a hotkey later), and temperature—described less as “creativity” and more as randomness. Temperature at 0 yields repeatable answers (e.g., “my favorite animal is” producing the same word every time), while higher values like 2 produce more varied, elaborate outputs. Frequency penalty is used to reduce repeated words within a response.

The workflow then splits into two command styles: “generate text,” which takes plain instructions, and “template” workflows, which are where most of the power sits. Context handling is critical: the API can consider selected text, only the current cursor line, or everything before the cursor. That means the same prompt can behave differently depending on what part of a note is highlighted or where the cursor sits.

Templates are installed via a template packages manager, with a default prompts package as the starting point. Once installed, templates appear as quick commands (often triggered via hotkeys). Examples include simplify (cuts word count while preserving meaning), rewrite (with customizable tone such as “serious and eloquent” or “quirky and fast-paced”), and summarize (useful for long saved articles). The most distinctive feature is “children notes” prompting: by enabling “include children inside considered contexts for templates,” a template can pull in linked notes from the vault. In practice, linking an article note and related highlight notes allows a “brainstorm ideas” template to generate ideas based on those connected sources, with more links producing richer connections.

The closing takeaway is a practical template strategy: simplify for clutter, summarize for quick recall of saved readings, rewrite for improving voice, and brainstorm (especially with linked children notes) for research and ideation. There’s also an ambition to eventually train a model on the vault itself so interacting with it feels like querying one’s own knowledge base.

Cornell Notes

Text Generator for Obsidian connects GPT 3.5 turbo to an existing notes vault, letting AI generate outputs that use the context already stored in Obsidian. After installing the plugin and adding an OpenAI API key, users can control model choice, max tokens, randomness (temperature), and repetition (frequency penalty). Two interaction modes matter: “generate text” for quick prompts and “templates” for repeatable tasks like simplify, rewrite, and summarize. The standout capability is template context that can include “children notes” (linked notes), enabling brainstorming and synthesis based on multiple connected sources in the vault. This turns Obsidian into a knowledge-driven writing and research assistant rather than a standalone chat tool.

Why does GPT 3.5 turbo change the economics of using AI inside Obsidian?

The transcript frames GPT 3.5 turbo as both more capable and dramatically cheaper: API cost drops to about 10% of what the most capable model used to cost. Token pricing is used to estimate usage—1,000 tokens is roughly 750 words—so the same amount of writing can cost less. It also emphasizes that Text Generator is free, so the only ongoing cost is the API usage rather than a separate subscription service.

How does temperature affect the quality and consistency of generated text?

Temperature is described as a randomness control rather than a direct “creativity” slider. At 0, the model becomes highly deterministic: given a prompt like “my favorite animal is,” it returns the same answer (“dog”) every time. At 2, outputs vary substantially and can become more elaborate (e.g., “a sea otter with a baseball cap”). The plugin default is around 0.7, which the transcript treats as a good general setting.
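
A toy sampler makes the temperature behavior concrete: at 0 it reduces to picking the highest-scoring token, so the answer never changes, while higher values spread probability across alternatives. The token scores below are made up for illustration; this is not the model's or plugin's actual code:

```python
import math
import random

def sample(logits: dict, temperature: float, rng=random.Random(0)) -> str:
    """Toy next-token sampler: temperature rescales logits before softmax.
    temperature == 0 degenerates to argmax (fully deterministic)."""
    if temperature == 0:
        return max(logits, key=logits.get)
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point fallback

# Completing "my favorite animal is ..." with made-up scores:
logits = {"dog": 2.0, "cat": 1.5, "sea otter": 0.5}
print(sample(logits, 0))    # always "dog"
print(sample(logits, 2.0))  # varies between runs of the RNG
```

At temperature 2 the gap between "dog" and the alternatives shrinks, so less likely completions surface more often.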

What’s the practical difference between “generate text” and template-based commands?

“Generate text” accepts plain instructions and can be run quickly (e.g., via Command Palette). Templates are installed from prompt packages and provide structured, reusable tasks like simplify, rewrite, and summarize. Templates also integrate more tightly with context rules (selection, cursor line, or cursor position) and can be customized by editing the prompt text behind each template.
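
The two styles can be sketched as request builders for the Chat Completions API. The field names follow the real API, but the template string and function names are illustrative, not the plugin's internals:

```python
# Sketch: "generate text" sends the instruction as-is; a "template"
# renders a reusable prompt around whatever note context applies.

def generate_text_request(instruction: str, temperature: float = 0.7,
                          max_tokens: int = 256) -> dict:
    """'Generate text': the plain instruction becomes the user message."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": instruction}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def template_request(template: str, context: str, **kwargs) -> dict:
    """'Template': a stored prompt wraps the note context, then is sent
    the same way, making the task repeatable."""
    return generate_text_request(template.format(context=context), **kwargs)

simplify = "Simplify the following text, preserving its meaning:\n\n{context}"
req = template_request(simplify, "Some long-winded note text...")
print(req["messages"][0]["content"][:8])  # -> Simplify
```

The point of templates is that the wrapping prompt is written once and reused, while "generate text" requires typing the full instruction each time.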

How does the plugin decide what note content to send to the API?

Context depends on selection and cursor position. If text is selected, only the selection is sent. If nothing is selected and the cursor line is not empty, only that line is sent. If the cursor line is empty, everything before the cursor is sent. This means the same template can produce different results depending on which portion of the note is highlighted or where the cursor sits.
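
These rules can be mirrored in a few lines; this is a sketch of the described behavior, not the plugin's actual implementation:

```python
def build_context(note_lines: list, cursor_line: int, selection: str = None) -> str:
    """Context rules as described in the transcript:
    - selection present      -> send only the selection
    - cursor line non-empty  -> send only that line
    - cursor line empty      -> send everything before the cursor
    """
    if selection:
        return selection
    if note_lines[cursor_line].strip():
        return note_lines[cursor_line]
    return "\n".join(note_lines[:cursor_line])

note = ["# Draft", "First paragraph.", ""]
print(build_context(note, 2))             # everything before the empty line
print(build_context(note, 1))             # just "First paragraph."
print(build_context(note, 1, "a phrase")) # just the selection
```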

What does “children notes” prompting do, and how is it enabled?

Children notes prompting lets a template use linked notes from the vault, not just the current note’s text. Enabling “include children inside considered contexts for templates” is required, and the transcript notes a key constraint: it must be used with templates (not the simpler generate-text command). A custom template (called “children template” in the transcript) is then created so a “brainstorm ideas” template can draw from linked article notes and related highlight notes.
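
Conceptually, including children notes amounts to resolving a note's links and appending the linked bodies to the context. The sketch below assumes simple `[[wiki-link]]` resolution; the vault dictionary and function are hypothetical, not plugin code:

```python
import re

# Toy vault: note name -> note body containing [[wiki-links]].
vault = {
    "Article": "Highlights: [[Highlight A]] and [[Highlight B]]",
    "Highlight A": "Key idea one.",
    "Highlight B": "Key idea two.",
}

def context_with_children(vault: dict, note_name: str) -> str:
    """Append the bodies of linked ('children') notes to the current
    note's text, roughly what 'include children inside considered
    contexts for templates' enables."""
    body = vault[note_name]
    children = re.findall(r"\[\[(.+?)\]\]", body)
    parts = [body] + [vault[c] for c in children if c in vault]
    return "\n\n".join(parts)

print(context_with_children(vault, "Article"))
```

With this expansion, a brainstorm template sees the article note plus every linked highlight, which is why more links produce richer connections.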

Why does the transcript recommend different templates for different writing tasks?

The workflow is tailored to common needs: simplify reduces clutter and keeps the same core meaning, summarize creates a quick recall box for long saved articles, and rewrite supports tone control (formal, quirky, serious, dramatic). Brainstorm is highlighted as especially valuable when it can incorporate children notes, since it synthesizes across multiple linked sources. The transcript also notes a preference not to use generic “write a paragraph” style templates because the vault is meant to remain primarily user-authored.

Review Questions

  1. When should a user rely on “selected text” versus “cursor line” versus “everything before the cursor” in Text Generator?
  2. How would you adjust temperature and frequency penalty to reduce repetitive phrasing while still allowing variation?
  3. What steps are required to make a template use linked “children notes,” and why can’t that behavior be used with the simpler generate-text command?

Key Points

  1. Text Generator for Obsidian turns GPT 3.5 turbo into a vault-aware assistant by using Obsidian note context rather than only typed chat prompts.

  2. Create an OpenAI API key and paste it into Text Generator; GPT 3.5 turbo is positioned as both more capable and far cheaper (about 10% of prior pricing).

  3. Use temperature to control randomness: 0 yields consistent outputs, while higher values increase variation; the plugin default is about 0.7.

  4. Control what the model sees by selecting text, using the cursor line, or leaving the cursor line empty to send everything before the cursor.

  5. Install prompt packages and rely on templates for repeatable tasks like simplify, rewrite, and summarize.

  6. Enable “include children inside considered contexts for templates” to let templates draw from linked notes, enabling richer brainstorming across the vault.

  7. Keep the vault’s authorship intact by using AI for targeted transformations (summaries, rewrites, ideation) rather than wholesale paragraph generation.

Highlights

  • GPT 3.5 turbo is used as the model inside Obsidian via Text Generator, with token-based pricing framed around roughly 1,000 tokens ≈ 750 words.
  • Context rules determine whether the API receives selected text, only the cursor line, or everything before the cursor—changing outputs without changing prompts.
  • “Children notes” prompting is the standout feature: linked notes in the vault can be included in template context to drive brainstorming and synthesis.
  • Templates like simplify, rewrite (tone-customizable), and summarize are positioned as practical building blocks for research and writing workflows.
