
Linking your Images in Obsidian with Excalidraw and ExcaliBrain

4 min read

Based on a video from Zsolt's Visual Personal Knowledge Management channel on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Create an embedded Excalidraw drawing in Obsidian so the visual artifact can carry structured metadata rather than remaining a static PNG.

Briefing

Linking Excalidraw images to Obsidian notes through structured metadata turns one-off PNG pastes into searchable, navigable knowledge objects. Instead of dropping an image into a vault where it’s hard to retrieve later, the workflow embeds the drawing inside Obsidian and attaches fields like source, author, and a “parent” concept—so the image becomes part of a connected information system.

The process starts with taking a framework image from Laura Evans Hill’s posts (via a Twitter thread) and recreating it inside Excalidraw. Using Obsidian’s command palette, the creator selects an action to create a new Excalidraw drawing embedded into the active document. The drawing is set to light mode to match the bright backgrounds of the source images, and the framework image is pasted into the canvas. From there, the key move is metadata: the creator adds a “source” field containing the Twitter link, plus tags/fields for export behavior (explicitly setting export dark mode to false), the author name, and a parent thought labeled “visual vocabulary.” A short summary can also be placed above the “text elements” section, with the important detail that anything above that section won’t be overwritten by Excalidraw, while content below it can be replaced.
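As a rough sketch of what this looks like on disk (the exact keys are an assumption based on common Excalidraw-plugin frontmatter and Dataview-style inline fields, and may differ from the video), the embedded drawing's markdown file might be structured like this:

```markdown
---
excalidraw-plugin: parsed
excalidraw-export-dark: false
---

Source:: https://twitter.com/...
Author:: [[Laura Evans Hill]]
Parent:: [[visual vocabulary]]

A short summary of the framework can go here, above the
Text Elements heading, where Excalidraw will leave it alone.

# Text Elements
%% Content below this heading is managed by Excalidraw
and may be regenerated when the drawing is saved. %%
```

The key structural point is the `# Text Elements` heading: metadata and summary text placed above it persists, while Excalidraw treats everything below it as its own to rewrite.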

To make the drawing even easier to scan and retrieve, additional items (including icon-based elements) are inserted into the metadata block. The transcript notes a practical snag: some pasted icons appear larger than others, but the size can be adjusted by adding a pipe character and specifying a pixel value (the example uses 50). Once the metadata is in place, Obsidian search becomes the retrieval engine. Searching for a term like “actionable steps” surfaces the document containing the embedded drawing, and opening it reveals the same linked image object.
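The pipe-based sizing mentioned above is Obsidian's standard embed-resizing syntax: appending `|` and a pixel width inside the embed link. A hypothetical icon file resized to 50 pixels wide would look like:

```markdown
![[lightbulb-icon.png|50]]
```

Without the `|50`, each icon renders at its native size, which is why pasted icons of different resolutions appear inconsistent until a width is specified.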

The workflow also leverages ExcaliBrain to connect the drawing to the broader note graph. After embedding the Excalidraw content, the “nine abstract frameworks to visualize ideas” item becomes linked to the “visual vocabulary” note. From that parent note, the user can navigate to the related visual items and then jump back out to Laura Evans Hill’s page, including the conference session “Atomic visuals to organize your ideas” and its date. The result is a system where the image isn’t just stored—it’s indexed, categorized, and cross-referenced.

Overall, the approach trades a few minutes of upfront metadata entry for long-term payoff: faster discovery, cleaner navigation, and higher confidence that visual artifacts can be found later when they’re needed for thinking, writing, or planning. The transcript closes by encouraging viewers to join the upcoming conference session if they’re watching before Friday, framing the material as both inspiring and practically useful.

Cornell Notes

Embedding Excalidraw drawings inside Obsidian and attaching structured metadata turns images into searchable, linkable knowledge units. The workflow recreates a framework image in Excalidraw, then adds fields such as source (Twitter link), author (Laura Evans Hill), and a parent concept (“visual vocabulary”), plus export settings like export dark mode set to false. Metadata placed above the “text elements” section is preserved, while content below can be overwritten by Excalidraw. With the metadata in place, Obsidian search can retrieve the drawing by keywords, and ExcaliBrain uses the links to connect the image to related notes and conference/session context. The payoff is easier long-term retrieval compared with pasting a raw PNG.

Why embed the framework image in Excalidraw instead of pasting a PNG into Obsidian?

Embedding in Excalidraw makes the drawing part of the vault’s structured note content rather than a static file. That enables the creator to add metadata fields (like source, author, and parent thought) alongside the drawing so Obsidian search and ExcaliBrain can index and link it. A plain PNG paste would not support the same metadata workflow.

What metadata fields are added to make the drawing retrievable later?

The workflow adds a “source” field containing the Twitter link, an author field set to Laura, and a parent thought field set to “visual vocabulary.” It also includes export-related metadata—specifically setting export dark mode to false—so the image is inserted in light mode when needed. Optionally, a short summary can be added above the text elements section.

How does the workflow avoid Excalidraw overwriting important metadata?

Metadata is inserted above the “text elements” section. The transcript highlights that anything added above text elements won’t be touched, while anything below that section may be overwritten by Excalidraw. This placement ensures the author/source/parent fields remain stable.

How does the creator handle icon sizing issues in the metadata block?

Some icons appear larger than others after pasting. The workaround is to adjust size by adding a pipe character and specifying a pixel value; the example sets the size to 50. Without icons, the metadata entry would be simpler, but the pipe-based sizing keeps the visual elements consistent.

How do Obsidian search and ExcaliBrain work together in this system?

With metadata attached, Obsidian search can find the document by keyword (e.g., searching for “actionable steps” returns the note containing the embedded drawing). ExcaliBrain then uses the linked metadata relationships—such as linking the “nine abstract frameworks to visualize ideas” drawing to the “visual vocabulary” note—so navigation can move from parent concepts to related visual items and back to the original author/session context.
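ExcaliBrain builds its graph from Dataview-style inline fields; assuming its default ontology names (the field names are configurable in ExcaliBrain's settings, so this is a sketch rather than the exact configuration shown in the video), the parent relationship is declared as:

```markdown
Parent:: [[visual vocabulary]]
```

With this field in the drawing's note, opening "visual vocabulary" in ExcaliBrain shows the drawing as a child node, and the link works in both directions for navigation.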

Review Questions

  1. What specific metadata fields in the workflow make an embedded drawing discoverable through Obsidian search?
  2. What rule about the “text elements” section prevents Excalidraw from overwriting metadata?
  3. How does ExcaliBrain use the drawing’s metadata links to connect it to the broader note graph?

Key Points

  1. Create an embedded Excalidraw drawing in Obsidian so the visual artifact can carry structured metadata rather than remaining a static PNG.
  2. Add a source field with the original Twitter link to preserve provenance and enable traceability.
  3. Record author and parent concept fields (e.g., author: Laura; parent thought: “visual vocabulary”) so the drawing fits into a navigable note hierarchy.
  4. Set export dark mode to false when the source artwork has a bright/light background, ensuring consistent rendering in your documents.
  5. Place metadata above the “text elements” section to prevent Excalidraw from overwriting it during updates.
  6. Use Obsidian search to retrieve drawings by keywords found in the metadata and content, not by file names.
  7. Leverage ExcaliBrain to turn metadata relationships into bidirectional navigation across related notes and conference/session context.

Highlights

The workflow converts pasted images into indexed knowledge objects by embedding them in Excalidraw and attaching metadata fields like source, author, and parent thought.
Metadata placement matters: anything above the “text elements” section stays intact, while content below can be overwritten by Excalidraw.
ExcaliBrain turns those metadata links into a navigable graph, letting users jump from a parent concept (“visual vocabulary”) to specific visual frameworks and back to the original author/session context.

Topics

Mentioned

  • Laura Evans Hill