
A step-by-step guide for crafting your 2022-On-A-Page with Midjourney, Excalidraw, and Obsidian

5 min read

Based on the Visual Personal Knowledge Management video by Zsolt on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Use Obsidian daily notes as the raw input, then convert diary entries into grouped mind-map nodes before touching any AI art.

Briefing

A year-end comic strip built from AI art becomes a practical blueprint for turning daily journaling into a consistent, publishable “one-page” visual summary. The core workflow links Obsidian daily notes to an Excalidraw canvas, then uses Midjourney prompts (with strict aspect-ratio and quality controls) to generate a grid of image tiles that can be assembled into a coherent storyline—complete with callouts, metadata, and final layout tweaks.

The process starts with idea harvesting. A new Excalidraw mind map serves as a canvas for collecting themes, then the creator reviews an Obsidian daily notes page for January 1, 2022 and follows “tomorrow” links day by day. Each day’s diary entries are scanned for events and angles worth including in the comic; promising items get captured as nodes in the mind map. After the review, the notes are tallied (e.g., counts of YouTube videos and X (formerly Twitter) collateral releases) and then grouped into topic clusters that become the basis for the comic’s panels.
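The day-by-day review can be sketched as a small script. This is a minimal sketch, assuming Obsidian's default "YYYY-MM-DD.md" daily-note naming and a hypothetical vault folder name; it is not part of the original workflow, which follows the "tomorrow" links by hand.

```python
from datetime import date, timedelta
from pathlib import Path


def daily_note_paths(vault: Path, year: int = 2022):
    """Yield the expected path of each daily note for a year.

    Assumes Obsidian's default "YYYY-MM-DD.md" daily-note naming;
    adjust the format if your vault uses another pattern.
    """
    day = date(year, 1, 1)
    while day.year == year:
        yield vault / f"{day.isoformat()}.md"
        day += timedelta(days=1)


# Example: list which 2022 daily notes actually exist in a
# (hypothetical) "Daily Notes" folder, ready for manual review.
notes = [p for p in daily_note_paths(Path("Daily Notes")) if p.exists()]
```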

Layout planning comes next, using a 200 by 200 grid inside Excalidraw to keep panel placement consistent. The comic is drafted in a portrait-oriented 7 by 10 sheet, with tiles sized as 2 by 2, 2 by 3, or 3 by 2 rectangles to mix square, portrait, and landscape compositions. Each tile gets a topic label and draft Midjourney prompts plus callout text—so the narrative structure exists before any final art is generated.
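Because every grid cell is a fixed 200 by 200 pixels, a tile's position and size in cells translate directly into a pixel rectangle on the sheet. A small helper, as a sketch of that arithmetic (the function name and cell-coordinate convention are illustrative, not from the original):

```python
CELL = 200  # grid cell size in pixels, matching the 200-by-200 Excalidraw grid


def tile_rect(col: int, row: int, w: int, h: int, cell: int = CELL):
    """Convert a tile's position and size (in grid cells) to a pixel rectangle.

    For example, a 2x3 (portrait) tile at column 0, row 0 of the 7x10
    sheet occupies (x=0, y=0, width=400, height=600).
    """
    return (col * cell, row * cell, w * cell, h * cell)
```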

The biggest technical hurdle is consistency across multiple AI images. To address it, a reusable five-part Midjourney prompt structure is used: (1) setting, (2) character traits (reused nearly verbatim to keep the character recognizable), (3) action and emotion, (4) atmosphere plus style cues, and (5) Midjourney parameters. Key parameters include “--ar” for aspect ratio, “--no” to exclude unwanted elements, “--q 0.5” to cut GPU time while keeping quality acceptable, and “--v 4” (or “--niji”) to select the model version (Midjourney v4, or the anime-focused Niji model). The workflow also emphasizes checking aspect ratio carefully to avoid wasted generations.

Each tile is produced through a repeatable 11-step loop: copy the prompt, run “/imagine” in Discord, generate four variants, upscale the best one, re-upscale if needed, open the origin link, optionally convert to transparency via LunaPic, paste into Excalidraw, size using the grid, then use Excalidraw’s deconstruct image script to move the image and callouts into a new “deconstructed” drawing. Metadata is added in markdown (including the Midjourney prompt and a source link) to preserve provenance.
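The metadata stored with each tile might look like the fragment below. The field names and prompt text are illustrative placeholders, not values from the original; the point is simply that the exact prompt and the origin link travel with the image.

```markdown
# Tile 03 – YouTube milestone

- **Prompt:** cozy winter study, cheerful wizard, writing in a journal, comic style --ar 2:3 --q 0.5 --v 4
- **Source:** (origin link copied from Discord)
- **Notes:** upscaled once, background made transparent in LunaPic
```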

Finally, the comic is refined with divider lines, callout edits, and formatting controls like “row padding 0” to prevent white padding when embedded. Practical tips address Excalidraw quirks (embedded images reverting to 100% size), speech-cloud creation without a pen, and color transparency via opacity sliders or hex alpha values. The payoff is both creative and reflective: the comic becomes a structured way to identify unfinished themes for 2023 while motivating renewed daily journaling and mind-mapping.

Cornell Notes

The workflow turns Obsidian daily journaling into a structured “2022 on one page” comic built in Excalidraw, with AI-generated art produced tile-by-tile in Midjourney. A 200×200 grid and a portrait 7×10 layout keep panel placement consistent, while a reusable five-part Midjourney prompt template (setting, character traits, action/emotion, style/atmosphere, and parameters) improves visual continuity. GPU time is managed using “--q 0.5,” and aspect ratio is controlled via “--ar” to match each tile’s shape. After generation, LunaPic is used for transparent backgrounds when needed, and Excalidraw’s deconstruct image script helps assemble images and callouts cleanly. Metadata (prompt and source links) is stored for later review and reproducibility.

How does daily journaling translate into a comic strip plan rather than just a collection of notes?

The process starts in Obsidian daily notes, where each day’s entry is skimmed by following “tomorrow” links. Any event or theme worth including gets captured into an Excalidraw mind map as a node. After reviewing the diary, the notes are grouped into topic clusters, and only then are those topics converted into panel tiles with draft callouts and Midjourney prompts.

What layout system keeps many AI-generated tiles from becoming visually chaotic?

A 200 by 200 grid is drawn in Excalidraw and locked so it doesn’t interfere while designing. The comic uses a portrait-oriented 7 by 10 sheet, with tiles sized as 2 by 2, 2 by 3, or 3 by 2 rectangles. Each tile’s size determines the aspect ratio used later in Midjourney, reducing mismatches and wasted generations.
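Since grid cells are square, a tile's cell dimensions give its aspect ratio directly: 2×2 is square, 2×3 portrait, 3×2 landscape. A sketch of that mapping (the helper name is an assumption, not from the original):

```python
from math import gcd


def ar_flag(w: int, h: int) -> str:
    """Map a tile's size in grid cells to a Midjourney --ar parameter.

    Because every cell is square (200x200), the cell ratio is the image
    aspect ratio: 2x2 -> "--ar 1:1", 2x3 -> "--ar 2:3", 3x2 -> "--ar 3:2".
    """
    g = gcd(w, h)
    return f"--ar {w // g}:{h // g}"
```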

Why is AI consistency hard in multi-panel comics, and what prompt structure helps?

Even when prompts are similar, AI can drift in character design and style across panels. The workaround is a five-part prompt template: (1) setting, (2) character traits (reused nearly verbatim), (3) action and emotion, (4) atmosphere and style cues, and (5) Midjourney parameters. Reusing the character description across tiles is the key lever for continuity.
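The five-part template can be made mechanical: keep the character string in one constant and vary only the other parts. The sketch below follows the structure described above; the example setting, character, and style strings are invented for illustration.

```python
def build_prompt(setting: str, character: str, action: str,
                 style: str, params: str) -> str:
    """Join the five prompt parts into one Midjourney prompt string.

    Reusing the exact same `character` string across tiles is what
    preserves continuity; only the other four parts change per panel.
    """
    return ", ".join([setting, character, action, style]) + " " + params


# Reused verbatim in every tile's prompt (example description only).
CHARACTER = "a cheerful wizard with a short grey beard and round glasses"

prompt = build_prompt(
    "cozy winter study filled with books",
    CHARACTER,
    "writing in a journal, content smile",
    "warm lamplight, comic illustration style",
    "--ar 2:3 --q 0.5 --v 4",
)
```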

Which Midjourney parameters are used to control both look and compute cost?

Aspect ratio is set with “--ar” to match the tile shape. “--no” is used to explicitly exclude unwanted elements. GPU time is reduced with “--q 0.5,” which spends about half the usual GPU time while still producing usable images. Model selection is handled with “--v 4” (Midjourney model version 4) or “--niji” for anime-style output.

How are generated images integrated into Excalidraw so they behave correctly during editing?

After generating and upscaling in Discord, the origin image is opened and either copied into Excalidraw or converted to transparency via LunaPic. The image is then sized using the grid. Excalidraw’s deconstruct image script moves the image and draft callout text into a new deconstructed drawing, replacing the moved elements so the final layout can be adjusted cleanly.

What practical fixes prevent common Excalidraw and styling problems in the final comic?

Embedded images can revert to 100% size when reopened; the fix is to resize inside the embedded image or switch to markdown view and remove the “| 100%” portion of the image link. Speech clouds can be drawn with a mouse or tablet if no pen is available. Callout transparency is adjusted using the opacity slider, by appending a hex alpha value to the color (e.g., “80” for roughly 50% opacity), or with the modify background opacity script.
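The hex alpha suffix is just the opacity percentage rescaled to 0–255 and written as two hex digits. A quick sketch of that conversion (the helper name is illustrative):

```python
def hex_alpha(opacity_pct: float) -> str:
    """Convert an opacity percentage to a two-digit hex alpha suffix.

    Appending the result to a 6-digit hex color (e.g. "#ff0000" + "80")
    gives a semi-transparent fill: 50% -> "80", 100% -> "ff".
    """
    return format(round(opacity_pct / 100 * 255), "02x")
```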

Review Questions

  1. How would you decide the “--ar” aspect ratio for each panel tile, and what happens if it’s overlooked?
  2. What parts of the five-part Midjourney prompt should be reused verbatim to keep a character consistent across panels?
  3. Why is storing the Midjourney prompt and source link in markdown metadata useful later, and how does it support iteration?

Key Points

  1. Use Obsidian daily notes as the raw input, then convert diary entries into grouped mind-map nodes before touching any AI art.

  2. Draft the comic layout first using a locked 200×200 grid and a 7×10 portrait structure with 2×2, 2×3, and 3×2 tiles.

  3. Improve multi-panel visual consistency by using a reusable five-part Midjourney prompt template, especially repeating the character traits section.

  4. Control compute cost and output reliability with “--q 0.5” and by carefully matching each tile’s aspect ratio using “--ar.”

  5. Use “--no” to exclude recurring unwanted elements and select the model version with “--v 4” or “--niji” depending on the desired style.

  6. Integrate images into Excalidraw through a repeatable pipeline: generate in Discord, upscale, optionally make the background transparent via LunaPic, paste, size on the grid, then deconstruct for editing.

  7. Prevent layout and styling glitches by handling Excalidraw’s 100% embedded-image behavior and using opacity controls for semi-transparent callouts.

Highlights

A five-part Midjourney prompt template—reusing character traits nearly verbatim—is presented as the main method for keeping a multi-panel character consistent.
Aspect ratio mistakes are treated as a direct GPU-time cost: checking “--ar” before generating prevents wasted quota.
Excalidraw’s deconstruct image script is used as the assembly step that turns raw AI tiles plus callouts into editable comic elements.
Metadata discipline matters: saving the exact Midjourney prompt and source link in markdown helps future review and iteration.
Transparency is handled pragmatically by running images through LunaPic when PNG-style overlays are needed.
