
Publish a Logseq graph to a website with Hugo & Github (deep dive with Brian Sunter)

CombiningMinds · 5 min read

Based on CombiningMinds's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Use Hugo to generate standard HTML for search indexing, and serve the interactive Logseq graph under a separate route for rich exploration.

Briefing

Publishing Logseq knowledge on the open web is less about “exporting a graph” and more about building a reliable pipeline that turns Logseq pages into standard HTML—then hosting both a traditional site and a Logseq-style graph view. Brian Sunter’s setup uses a static site generator (Hugo) to make content indexable by Google, while also serving the interactive Logseq graph at a dedicated route (/graph). The payoff is practical: searchable pages for discovery, plus the linked-reference graph experience for exploration.

The workflow starts with Logseq as the source of truth. Notes live in Logseq pages and journals, stored as plain text. When it’s time to publish, the Logseq Hugo plugin converts those Logseq pages into Hugo-compatible markdown and pulls in related assets like images. Hugo then compiles that markdown into a full set of HTML files. In parallel, Logseq’s export graph function produces the interactive web app version of the knowledge base.

A key detail is that the Hugo plugin output alone isn’t enough to run a site. The Hugo build also needs layout/theme “template” files (for example, a theme such as PaperMod for styling). The plugin links to suitable templates, and the recommended approach is to clone a Hugo template repo, replace its content with the plugin-generated content, and treat the generated markdown as an intermediate artifact, replacing it wholesale on each publish rather than editing it by hand.
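As a sketch, the merged repository might look like this after dropping the plugin’s export into a cloned template repo (directory names are illustrative, not the plugin’s exact output):

```
my-site/
├── config.yml          # Hugo site configuration from the template repo
├── themes/
│   └── PaperMod/       # cloned theme providing layouts and styling
├── content/            # replaced wholesale by the plugin-exported markdown
│   ├── pages/
│   └── journals/
└── static/
    └── assets/         # images pulled in from the Logseq graph
```

The template repo supplies everything outside content/; each publish only swaps the content and assets directories.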

For hosting, Sunter keeps the Logseq graph and the Hugo site in GitHub. A manual step exports the public pages from Logseq, then commits the generated Hugo content into the site repository. From there, GitHub Actions automates the rest: it runs Hugo build/minify commands on each push so the HTML updates automatically. Another automation layer can publish the full interactive Logseq graph as well, by detecting a Logseq folder in the repo and running the graph export pipeline.

Sunter also addresses common friction points. One is indexing: single-page Logseq web app exports can be harder for Google to index, which is why the Hugo-generated site is served alongside the graph. Another is metadata control: the Logseq Hugo plugin currently expects pages to be marked with the public:: true page property, making retroactive publishing of older pages labor-intensive. Image stability is a third concern: image links can break after Logseq updates unless images are stored locally. A workaround is enabling Logseq’s “prefer pasting as files” behavior so images are downloaded and referenced reliably.

The conversation closes with broader guidance: publish notes even if they’re imperfect, because search traffic and community feedback can be motivating. Google Search Console is highlighted as a way to see what people search for and which sites link to published notes. Overall, the pipeline turns Logseq’s local, linked knowledge into web-native content without sacrificing the graph experience users expect.

Cornell Notes

Brian Sunter’s publishing pipeline turns Logseq notes into two web experiences: a Hugo-generated, Google-indexable website and an interactive Logseq graph served under a separate route (/graph). The Logseq Hugo plugin converts Logseq pages into Hugo-compatible markdown plus assets; Hugo then compiles that into HTML using a cloned Hugo template/theme (e.g., PaperMod). For the interactive graph, Logseq’s export graph produces a web app that’s harder for search engines to index, so it’s paired with the Hugo site for discoverability. GitHub Actions automates rebuilds on pushes, and practical issues like image link breakage and the need for public:: true metadata are handled with local image pasting and upfront page tagging.

Why serve both a Hugo site and the interactive Logseq graph instead of only one export method?

The interactive Logseq graph export behaves like a web app and can be difficult for Google to index, which reduces discoverability. Hugo produces standard HTML pages that search engines can crawl more easily. Sunter’s solution keeps a traditional site for indexing and a separate /graph route for the linked-reference graph experience, so users can both find content via search and explore it with Logseq features.

What are the three conversion steps in Sunter’s pipeline from Logseq to a website?

First, notes are written in Logseq pages/journals. Second, the Logseq Hugo plugin exports those pages into Hugo-compatible markdown (and gathers images/assets), including front matter that Hugo understands. Third, Hugo compiles the markdown into a build output of HTML files ready to host. Separately, Logseq’s export graph generates the interactive graph web app.
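For illustration, a single exported page might carry Hugo front matter along these lines (the field names here are an assumption, not the plugin’s exact output):

```yaml
---
title: "My Note"
date: 2022-07-01
tags: ["logseq", "hugo"]
---
```

Hugo reads this front matter to build page titles, dates, and tag indexes; the body below it is the converted Logseq outline.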

Why does the Hugo plugin require additional “layout/template” files beyond the generated markdown?

The plugin provides the content layer (markdown and images), but Hugo also needs theme/layout code to control how pages render—home page structure, styling, and overall layout. Sunter notes that people often get only the zip/content output and then wonder why the site doesn’t build; the fix is cloning a Hugo template/theme repo and replacing its content with the plugin-generated content.

How does Sunter keep publishing efficient without manually editing generated files?

He treats the plugin-generated markdown as intermediate output. Each publish replaces the entire generated content folder in the GitHub repo rather than editing individual files. This avoids drift between Logseq source and the exported site and makes updates repeatable: export from Logseq, drop in the new generated content, commit, and let automation rebuild.
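A minimal sketch of that replace-wholesale step, with illustrative paths (the real repo layout and export location may differ):

```shell
# Simulate a fresh plugin export (illustrative paths only).
mkdir -p logseq-export/pages site/content
echo "new note" > logseq-export/pages/note.md

# Replace the generated content wholesale instead of editing files in place:
rm -rf site/content
cp -r logseq-export/pages site/content

# Then commit and push; CI rebuilds the site (requires an initialized repo):
# git add site/content && git commit -m "publish" && git push
ls site/content
```

Because the whole folder is thrown away each time, the Logseq graph stays the single source of truth and the export never drifts.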

What metadata and image-handling issues can break publishing, and what workarounds are used?

For metadata, the Logseq Hugo plugin expects pages to be marked with the public:: true page property; retroactively tagging many pages can be time-consuming. For images, links can break after Logseq updates if images aren’t stored locally. A workaround is enabling “prefer pasting as files” so copied images are downloaded and referenced as local files, improving long-term stability.
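In Logseq, page properties live in the first block of a page, so a page intended for export might start like this (the tags:: line is just an example of an additional property):

```
public:: true
tags:: hugo, publishing

- The rest of the page's outline follows as normal blocks.
```

Pages without public:: true are skipped by the export, which is why older pages need to be tagged one by one before they appear on the site.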

How does GitHub Actions fit into the workflow after exports are committed?

Once the generated Hugo-compatible content is committed to GitHub, GitHub Actions runs Hugo build/minify commands to regenerate the HTML site automatically on each push. For the interactive graph, automation can also detect a Logseq folder in the repo and publish the full graph export, enabling two publishing steps: one for the Hugo site and another for the graph web app.
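A workflow along these lines would rebuild the Hugo site on every push (a sketch, not Sunter’s exact file; the action choices, versions, and the GitHub Pages deploy step are assumptions):

```yaml
name: build-site
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true   # pull in the theme if it is a git submodule

      - name: Install Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'

      - name: Build
        run: hugo --minify   # the build/minify step described above

      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```

A second workflow (or job) could watch for a Logseq folder and run the graph export pipeline, giving the two publishing steps described above.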

Review Questions

  1. What problem does pairing a Hugo site with the interactive Logseq graph solve, and how does each part address it?
  2. Walk through the pipeline from Logseq pages to hosted HTML: which tool converts what, and what artifacts are produced at each stage?
  3. What are two practical failure modes (metadata and images) mentioned in the workflow, and how are they mitigated?

Key Points

  1. Use Hugo to generate standard HTML for search indexing, and serve the interactive Logseq graph under a separate route for rich exploration.
  2. Convert Logseq pages into Hugo-compatible markdown using the Logseq Hugo plugin, then compile with Hugo using a cloned template/theme (e.g., PaperMod).
  3. Treat plugin-generated markdown as intermediate output: replace generated folders on each publish rather than editing them manually.
  4. Mark pages with public:: true for the Hugo plugin workflow; retroactive tagging can be costly.
  5. Prevent image breakage by pasting images as files so references point to local assets rather than fragile paths.
  6. Automate rebuilds with GitHub Actions so pushing exported content triggers Hugo builds and keeps the site current.
  7. Use Google Search Console to track search queries and inbound links, turning publishing into a feedback loop for note-taking motivation.

Highlights

The core strategy is dual publishing: Hugo for indexable pages and the Logseq graph export for interactive linked navigation.
Hugo needs more than the plugin’s markdown output—layout/theme templates are required for the site to render correctly.
Image links can fail over time; “prefer pasting as files” helps keep assets stable after updates.
GitHub Actions can turn a simple “export → commit” routine into an always-updating website.

Topics

  • Logseq Publishing
  • Hugo Static Site
  • GitHub Actions
  • Graph Export
  • Search Indexing

Mentioned

  • Brian Sunter
  • HTML
  • AWS
  • S3
  • CDN
  • GPT