Publish a Logseq graph to a website with Hugo & Github (deep dive with Brian Sunter)
Based on CombiningMinds's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Use Hugo to generate standard HTML for search indexing, and serve the interactive Logseq graph under a separate route for rich exploration.
Briefing
Publishing Logseq knowledge on the open web is less about “exporting a graph” and more about building a reliable pipeline that turns Logseq pages into standard HTML—then hosting both a traditional site and a Logseq-style graph view. Brian Sunter’s setup uses a static site generator (Hugo) to make content indexable by Google, while also serving the interactive Logseq graph at a dedicated route (/graph). The payoff is practical: searchable pages for discovery, plus the linked-reference graph experience for exploration.
The workflow starts with Logseq as the source of truth. Notes live in Logseq pages and journals, stored as plain text. When it’s time to publish, the Logseq Hugo plugin converts those Logseq pages into Hugo-compatible markdown and pulls in related assets like images. Hugo then compiles that markdown into a full set of HTML files. In parallel, Logseq’s export graph function produces the interactive web app version of the knowledge base.
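For intuition, a Logseq page with page properties roughly maps to Hugo markdown with front matter. The sketch below is illustrative only; the plugin's exact output format may differ:

```
# Logseq source (pages/my-note.md)
public:: true
title:: My Note

- First bullet of the note.

# Roughly corresponds to Hugo-compatible markdown:
---
title: "My Note"
---
First bullet of the note.
```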
A key detail is that the Hugo plugin's output alone isn't enough to run a site. The Hugo build also needs layout/theme "template" files (for example, a theme such as PaperMod for styling). The plugin's documentation links to suitable templates, and the recommended approach is to clone a Hugo template repo, replace its content with the plugin-generated output, and treat the generated markdown as an intermediate artifact—replacing it wholesale on each publish rather than editing it by hand.
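The replace-wholesale step can be sketched as a small shell helper. Every directory name and the `hugo --minify` invocation here are assumptions about a typical layout, not details from the video:

```shell
# stage_export: replace a Hugo site's generated content with a fresh
# Logseq Hugo-plugin export, then rebuild. Paths are assumptions.
stage_export() {
  export_dir="$1"   # output folder of the Logseq Hugo plugin
  site_dir="$2"     # cloned Hugo template repo (e.g. one using PaperMod)

  # Generated markdown is an intermediate artifact: wipe and replace wholesale.
  rm -rf "$site_dir/content/pages"
  mkdir -p "$site_dir/content" "$site_dir/static"
  cp -R "$export_dir/pages" "$site_dir/content/pages"
  if [ -d "$export_dir/assets" ]; then
    cp -R "$export_dir/assets" "$site_dir/static/assets"
  fi

  # Compile markdown to HTML if hugo is on PATH; output lands in $site_dir/public/.
  if command -v hugo >/dev/null 2>&1; then
    (cd "$site_dir" && hugo --minify)
  fi
}
```

The key design choice is that nothing under `content/pages` is ever edited by hand, so each publish is a clean overwrite.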
For hosting, Sunter keeps the Logseq graph and the Hugo site in GitHub. A manual step exports the pages marked public from Logseq and commits the generated Hugo content into the site repository. From there, GitHub Actions automates the rest: it runs Hugo's build/minify commands on each push so the HTML updates automatically. Another automation layer can publish the full interactive Logseq graph as well, by detecting a Logseq folder in the repo and running the graph export pipeline.
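A minimal workflow sketch, assuming community-maintained actions for Hugo setup and GitHub Pages deployment; the video does not show the exact workflow file, so action names, versions, and paths are assumptions:

```yaml
# .github/workflows/build.yml -- illustrative sketch, not the video's workflow.
name: build-site
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'
      - name: Build site
        run: hugo --minify
      - name: Publish to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```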
Sunter also addresses common friction points. One is indexing: single-page Logseq web app exports can be harder for Google to index, which is why the Hugo-generated site is served alongside the graph. Another is metadata control: the Logseq Hugo plugin currently expects pages to carry the `public:: true` page property, making retroactive publishing of older pages labor-intensive. Image stability is another concern—image links can break after Logseq updates unless images are stored locally. A workaround is using Logseq's "prefer pasting as files" behavior so images are downloaded and referenced reliably.
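To gauge the retroactive-tagging effort, a quick audit sketch: the `pages/` folder and `public:: true` property follow standard Logseq conventions, but the helper itself is hypothetical and not from the video:

```shell
# find_private_pages: list files in a Logseq pages/ folder that lack the
# `public:: true` property, i.e. pages not yet eligible for publishing.
find_private_pages() {
  dir="$1"
  # grep -L prints files WITHOUT a match; -r recurses into the folder.
  grep -rL -- 'public:: true' "$dir" || true  # grep exits 1 when every file matches
}
```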
The conversation closes with broader guidance: publish notes even if they’re imperfect, because search traffic and community feedback can be motivating. Google Search Console is highlighted as a way to see what people search for and which sites link to published notes. Overall, the pipeline turns Logseq’s local, linked knowledge into web-native content without sacrificing the graph experience users expect.
Cornell Notes
Brian Sunter’s publishing pipeline turns Logseq notes into two web experiences: a Hugo-generated, Google-indexable website and an interactive Logseq graph served under a separate route (/graph). The Logseq Hugo plugin converts Logseq pages into Hugo-compatible markdown plus assets; Hugo then compiles that into HTML using a cloned Hugo template/theme (e.g., PaperMod). For the interactive graph, Logseq’s export graph produces a web app that’s harder for search engines to index, so it’s paired with the Hugo site for discoverability. GitHub Actions automates rebuilds on pushes, and practical issues like image link breakage and the need for `public:: true` metadata are handled with local image pasting and upfront page tagging.
Why serve both a Hugo site and the interactive Logseq graph instead of only one export method?
What are the three conversion steps in Sunter’s pipeline from Logseq to a website?
Why does the Hugo plugin require additional “layout/template” files beyond the generated markdown?
How does Sunter keep publishing efficient without manually editing generated files?
What metadata and image-handling issues can break publishing, and what workarounds are used?
How does GitHub Actions fit into the workflow after exports are committed?
Review Questions
- What problem does pairing a Hugo site with the interactive Logseq graph solve, and how does each part address it?
- Walk through the pipeline from Logseq pages to hosted HTML: which tool converts what, and what artifacts are produced at each stage?
- What are two practical failure modes (metadata and images) mentioned in the workflow, and how are they mitigated?
Key Points
1. Use Hugo to generate standard HTML for search indexing, and serve the interactive Logseq graph under a separate route for rich exploration.
2. Convert Logseq pages into Hugo-compatible markdown using the Logseq Hugo plugin, then compile with Hugo using a cloned template/theme (e.g., PaperMod).
3. Treat plugin-generated markdown as intermediate output: replace generated folders on each publish rather than editing them manually.
4. Mark pages with the `public:: true` property for the Hugo plugin workflow; retroactive tagging can be costly.
5. Prevent image breakage by pasting images as files so references point to local assets rather than fragile paths.
6. Automate rebuilds with GitHub Actions so pushing exported content triggers Hugo builds and keeps the site current.
7. Use Google Search Console to track search queries and inbound links, turning publishing into a feedback loop for note-taking motivation.