How To Build a Content Team of SEO AI Agents (n8n, OpenAI, Aidbase)
Based on Simon Høiberg's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A fully autonomous SEO content pipeline can be built by chaining AI agents for keyword discovery, topic planning, deep research with citations, retrieval of private internal knowledge, long-form drafting, thumbnail generation, and automated publishing, built in n8n with OpenAI, Aidbase, and Replicate. The practical payoff is speed: daily blog posts for products like FeedHive, LinkDrip, and Aidbase can be produced with little to no human intervention after setup. The bigger question is whether this kind of mass-produced AI publishing triggers Google's spam filters or deranks sites indefinitely; the workflow's creator argues the risk can be reduced by focusing on usefulness, adding verifiable references, and injecting unique internal knowledge that competitors can't copy.
The approach starts with topic selection and keyword planning. Instead of relying on brittle “fully autonomous” long-tail keyword research, the workflow uses SERP data via SERP API (Google results returned as JSON) and AI search via Perplexity’s sonar models or OpenAI’s search preview model. Two operating modes are considered: manual keyword research followed by tightly instructed writing, or a more autonomous path where OpenAI search generates content ideas from a product/brand description. The creator favors the latter for autonomy, accepting less control and a smaller chance of ranking well per post in exchange for hands-off publishing.
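As a concrete sketch of the SERP-data side of this step, the helper below pulls candidate topic seeds out of a SerpAPI-style Google results payload. The field names (`organic_results`, `related_searches`) follow SerpAPI's documented JSON shape, but verify them against an actual response before wiring this into a workflow.

```javascript
// Extract candidate topic/keyword seeds from a SerpAPI-style JSON payload.
// Combines organic result titles with related-search queries, deduplicated
// in order of appearance. Field names are assumptions based on SerpAPI docs.
function extractTopicSeeds(serpJson) {
  const titles = (serpJson.organic_results || []).map((r) => r.title);
  const related = (serpJson.related_searches || []).map((r) => r.query);
  // Set preserves insertion order, so earlier (higher-ranked) items win.
  return [...new Set([...titles, ...related])];
}
```

In the autonomous mode described above, a list like this would be handed to the OpenAI search or Perplexity sonar step as raw material rather than used directly as post titles.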
To prevent the system from repeating itself, the workflow adds a deduplication step. Earlier attempts using vector databases to compare “idea similarity” didn’t work well in practice, so the system instead feeds the research agent a simple archive of summaries from prior posts pulled from the CMS. In the example setup, Strapi provides the blog archive via an HTTP request node, and the agent receives the archive as a JSON string before generating new topic plans.
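The archive-building step can be sketched as a small transform over a Strapi v4-style REST response. The `data[].attributes` nesting is Strapi v4's documented shape; the `summary` field name is an assumption about this particular blog collection and would need to match the real schema.

```javascript
// Flatten a Strapi v4-style response into the compact JSON string the
// research agent receives as its "what we've already covered" archive.
// The `summary` attribute name is an assumption about the blog collection.
function buildArchiveString(strapiResponse) {
  const archive = (strapiResponse.data || []).map((entry) => ({
    title: entry.attributes.title,
    summary: entry.attributes.summary,
  }));
  return JSON.stringify(archive);
}
```

Feeding the agent this plain string, rather than doing vector similarity lookups, is the simplification the creator landed on after the embedding-based deduplication underperformed.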
Drafting initially produced generic, shallow articles, an outcome that would likely read as spam, until the workflow inserted two additional research layers. One agent performs external research to gather statistics, facts, and citations for the specific post title and outline. Another agent retrieves unique internal material from a private knowledge base using RAG (retrieval-augmented generation). For the internal layer, Aidbase is used to train a chatbot on selected sources (for example, FeedHive's website and help desk, plus internal YouTube resources) and on custom FAQ entries. The workflow then calls the Aidbase chatbot through an API endpoint inside n8n, so the final draft can include business-specific insights that aren't available elsewhere.
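The internal-knowledge call reduces to a single authenticated HTTP request from n8n. The sketch below is a generic chatbot query; the URL, request body, and `answer` response field are placeholders, since Aidbase's real API shape should be taken from its own documentation. `fetchImpl` is injected so the function can be exercised without a live endpoint.

```javascript
// Query a hosted RAG chatbot endpoint for internal knowledge to enrich a
// draft. URL and payload/response shapes are illustrative placeholders,
// not Aidbase's actual API; consult its docs for the real contract.
async function askKnowledgeBase(question, apiKey, fetchImpl = fetch) {
  const res = await fetchImpl("https://api.example.com/v1/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ message: question }),
  });
  if (!res.ok) throw new Error(`Knowledge base call failed: ${res.status}`);
  const data = await res.json();
  return data.answer; // assumed response field
}
```

In n8n this would typically live in an HTTP Request node rather than hand-written code, but the request anatomy is the same.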
Once the post is researched and written, the pipeline generates brand-consistent thumbnails. The creator abandons a purely AI-generated "text-on-image" approach and instead uses Black Forest Labs' Flux model, hosted on Replicate, to generate the base image, then uses a custom NodeJS API built with the canvas library to compose the thumbnail from a theme, highlighted words, and overlay text. Finally, the system publishes to Strapi (and optionally triggers FeedHive social posting), with n8n scheduling the whole chain to run daily, weekly, or on any cadence.
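The layout half of that compositing step can be sketched without node-canvas at all: decide where the overlay title wraps and which words get the highlight color, then hand the result to the drawing code (`fillText` calls against a canvas context, omitted here). The function name, the character-based wrapping, and the lowercase highlight matching are all illustrative choices, not the creator's actual implementation.

```javascript
// Split an overlay title into lines of roughly `maxChars` characters and
// flag which words should be rendered in the highlight color. This covers
// only the layout logic; actual drawing would use node-canvas on top.
function layoutOverlayText(title, highlights, maxChars = 20) {
  const lines = [[]];
  for (const word of title.split(/\s+/)) {
    const current = lines[lines.length - 1];
    // Approximate current line width: each word plus one trailing space.
    const lineLen = current.reduce((n, w) => n + w.text.length + 1, 0);
    if (current.length > 0 && lineLen + word.length > maxChars) lines.push([]);
    lines[lines.length - 1].push({
      text: word,
      highlight: highlights.includes(word.toLowerCase()),
    });
  }
  return lines;
}
```

Measuring text in characters rather than pixels is a simplification; a production version would use the canvas context's `measureText` so wrapping respects the actual font metrics.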
The overall message is caution: the setup is experimental and potentially risky for businesses where SEO is the primary revenue channel. For lower-performing or “dead” blogs, the creator frames the automation as a calculated gamble—worth trying if the content can be made genuinely helpful, well-sourced, and uniquely informed by internal knowledge.
Cornell Notes
The workflow builds an end-to-end SEO system that can publish blog posts with minimal human effort by combining multiple AI agents in n8n. It uses SERP API and AI search (OpenAI search preview and/or Perplexity sonar) to generate topic ideas, then pulls prior post summaries from Strapi to reduce duplicate themes. Before writing, it adds two research steps: external research with citations and internal research via RAG using Aidbase, so drafts include unique business knowledge rather than generic summaries. A separate step generates thumbnails using Replicate's Flux plus a custom NodeJS/canvas thumbnail API. The result is fully autonomous publishing, but it's presented as experimental and potentially risky if a site depends heavily on SEO.
- How does the workflow generate SEO topics without producing repetitive content?
- What changes when the system moves from "topic planning" to "actually writing good posts"?
- How does the internal knowledge layer work, and why does it matter for SEO risk?
- Why does the thumbnail step require more than "AI image + text overlay"?
- What are the main external research options mentioned for citations and facts?
- How does the workflow become "fully autonomous" in publishing?
Review Questions
- What specific mechanism does the workflow use to reduce duplicate topics, and why might vector similarity approaches have failed here?
- How do the external research and internal RAG steps work together to prevent drafts from becoming generic?
- What role does the custom NodeJS/canvas thumbnail API play compared with using Flux output directly?
Key Points
1. Use SERP API and AI search models to generate keyword/topic ideas, but plan for less control if you rely on AI-generated ideas rather than manual keyword research.
2. Prevent duplicate or cannibalizing posts by feeding the research agent a structured archive of prior post summaries pulled from the CMS (e.g., Strapi).
3. Insert a dedicated external-research step to collect statistics, facts, and citations before any long-form writing happens.
4. Add a RAG-based internal knowledge step (via Aidbase) so drafts include unique business insights rather than only public web summaries.
5. Generate thumbnails in a brand-consistent way: use Replicate's Flux for the base image and a custom NodeJS/canvas API to apply theme and overlay text.
6. Automate publishing through n8n by pushing completed fields to Strapi and optionally triggering social posting via FeedHive.
7. Treat the approach as experimental and potentially risky for businesses where SEO is the top-performing acquisition channel; prioritize usefulness and uniqueness to reduce spam-like patterns.