Build an AI Social Media Content Generator in 20 Minutes | AI Agents with LangGraph and Llama 3.1
Based on Venelin Valkov's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
LangGraph can generate platform-specific social posts by branching into separate Twitter and LinkedIn writer agents that run in parallel.
Briefing
A LangGraph-based agent loop can turn technical input into platform-ready social posts for both Twitter and LinkedIn—while iterating through multiple drafts using critique feedback—without forcing a single, one-shot rewrite. The core payoff is speed and control: separate “writer” branches generate Twitter and LinkedIn drafts in parallel, then a supervisor checks whether each platform’s output has reached the requested number of drafts, triggering critique-and-rewrite cycles until the quota is met.
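The loop described above can be emulated in plain Python to make the control flow concrete. This is a hedged sketch, not the video's code: LangGraph implements this with graph nodes and conditional edges, whereas here the editor, writers, and critics are stand-in callables and the names are illustrative.

```python
# Plain-Python sketch of the critique-and-rewrite loop described above.
# In the actual project this control flow is expressed as LangGraph nodes
# and conditional edges; all names here are assumptions for illustration.

def run_pipeline(raw_text, target_audience, n_drafts, editor, writers, critics):
    """editor/writers/critics are callables standing in for LLM agents."""
    base_draft = editor(raw_text, target_audience)
    state = {platform: {"drafts": [], "feedback": None} for platform in writers}
    # Supervisor logic: keep looping until every platform meets its quota.
    while any(len(s["drafts"]) < n_drafts for s in state.values()):
        for platform, write in writers.items():
            s = state[platform]
            if len(s["drafts"]) < n_drafts:
                # Writer incorporates the latest critique feedback, if any.
                s["drafts"].append(write(base_draft, s["feedback"]))
                if len(s["drafts"]) < n_drafts:
                    # Quota not met yet: request a critique for the next pass.
                    s["feedback"] = critics[platform](base_draft, s["drafts"][-1])
    return {p: s["drafts"] for p, s in state.items()}
```

In LangGraph the two writer branches would run in parallel rather than in this sequential `for` loop, which is where the speed advantage comes from.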
The system starts with an editor agent that rewrites raw technical text into a coherent base draft tailored for a specified target audience. From there, the graph branches into two specialized writers: one dedicated to Twitter and another dedicated to LinkedIn. Each writer produces only the text needed for its platform, with prompts that enforce different formatting and style constraints. For Twitter, the prompt calls for a short, natural, conversational paragraph optimized for virality, with requirements like clarity, a hook, brevity, and a call to action—plus a specific preference to avoid emojis and hashtags. For LinkedIn, the prompt shifts toward longer-form structure, reflecting the platform’s tolerance for more detailed explanations.
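The platform-specific constraints above can be captured as prompt templates. The wording below is an assumption that echoes the requirements described (hook, brevity, call to action, no emojis or hashtags for Twitter; longer-form structure for LinkedIn); the video's exact prompts differ.

```python
# Illustrative writer-prompt templates; exact wording is an assumption.

TWITTER_PROMPT = """Rewrite the draft below as a short, natural, conversational
paragraph optimized for virality. Requirements: a clear hook, brevity, clarity,
and a specific call to action. Do NOT use emojis or hashtags.

Target audience: {audience}

Draft:
{draft}"""

LINKEDIN_PROMPT = """Rewrite the draft below as a longer-form LinkedIn post with
clear structure and a more detailed, professional explanation of the benefits.

Target audience: {audience}

Draft:
{draft}"""

def build_writer_prompt(platform: str, draft: str, audience: str) -> str:
    """Select and fill the template for the given platform branch."""
    template = TWITTER_PROMPT if platform == "twitter" else LINKEDIN_PROMPT
    return template.format(draft=draft, audience=audience)
```

Keeping the constraints in the prompt (rather than post-processing) is what lets each writer branch produce only the text its platform needs.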
After initial drafts, critique agents evaluate the latest draft against the original editor output and the target audience. The Twitter critique focuses on tightening clarity and hooks, improving brevity, and removing “hype” or storytelling that doesn’t serve the key benefit; it also pushes for a more specific call to action. The LinkedIn critique performs a similar role but with longer, more detailed analysis—aiming to reshape benefits and framing to better match the intended audience and professional tone.
A supervisor (implemented via conditional edges in LangGraph) governs the iteration. It compares how many drafts have been produced so far against the user-requested number of drafts. If the count is below the target, the supervisor routes execution to the appropriate critique nodes (Twitter critique and LinkedIn critique), then loops back so writers can incorporate feedback into the next draft. Once both platforms meet the draft count, the graph ends and returns the generated posts.
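In LangGraph, such a supervisor is typically a conditional-edge function that inspects the shared state and returns the name of the next node. A minimal sketch, assuming state keys (`n_drafts`, `tweets`, `linkedin_posts`) and node names that are illustrative rather than taken from the video:

```python
# Sketch of a conditional-edge router; state keys and node names are
# assumptions, not the video's exact identifiers.

def supervisor(state: dict) -> str:
    """Route to critique nodes until both platforms meet the draft quota.

    Returns the name of the next node, or "END" to finish the graph.
    """
    target = state["n_drafts"]
    if len(state["tweets"]) < target:
        return "critique_tweet"      # loop: critique -> rewrite for Twitter
    if len(state["linkedin_posts"]) < target:
        return "critique_linkedin"   # same loop for LinkedIn
    return "END"                     # both quotas met: return the posts
```

With `graph.add_conditional_edges(...)` this function's return value would select the next node, producing exactly the loop described above.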
Implementation details emphasize practical reproducibility and dependency setup: the model runs with temperature set to 0, and the Groq API key is pulled from the Google Colab notebook's secrets. The workflow uses LangChain with the Llama 3.1 70B model served via Groq, plus the latest LangGraph release aligned with LangChain 0.3. The transcript also notes that parallel branch execution is a major speed advantage over purely sequential pipelines.
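The setup described above might look like the following. The Colab secret name and the Groq model id are assumptions (Groq serves Llama 3.1 70B under an id along these lines); treat this as a sketch of the configuration, not the video's exact cell.

```python
import os

# In Colab, pull the Groq key from the notebook's secret store.
# The secret name "GROQ_API_KEY" is an assumption.
try:
    from google.colab import userdata
    os.environ["GROQ_API_KEY"] = userdata.get("GROQ_API_KEY")
except ImportError:
    pass  # outside Colab, export GROQ_API_KEY in the environment yourself

from langchain_groq import ChatGroq

# temperature=0 gives deterministic, reproducible outputs across runs.
llm = ChatGroq(model="llama-3.1-70b-versatile", temperature=0)
```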
In a live example, the editor produces an “exciting news” style introduction about Mistral Small, then three distinct Twitter drafts emerge—each with a different angle on cost, performance, or function calling. The LinkedIn drafts are longer and more varied in how they frame benefits and deployment tradeoffs. The final result is a set of draft options plus critique-driven improvements, delivered in roughly 15 seconds for the full graph execution.
Cornell Notes
The workflow uses LangGraph to generate social media posts from technical text by splitting work into parallel branches for Twitter and LinkedIn. An editor agent first rewrites the input into a coherent base draft for a chosen target audience. Platform-specific writer agents then produce drafts; critique agents review the latest draft and feed feedback back into the writers. A supervisor node uses conditional logic to keep looping through critique-and-rewrite until the requested number of drafts per platform is reached. This matters because it combines style control (different prompts per platform) with iterative quality improvements, while parallel execution keeps runtime low.
How does the system ensure Twitter and LinkedIn outputs match platform expectations rather than using one generic rewrite?
What role does the editor agent play before the platform-specific writers start?
Why does the supervisor loop matter, and how does it decide when to stop?
How does parallel execution improve performance in this design?
What kinds of feedback do the critique agents provide, and how is that feedback used?
What model and generation settings are used in the example run?
Review Questions
- How would you modify the supervisor logic if you wanted Twitter to generate more drafts than LinkedIn?
- Which prompt constraints most directly shape Twitter’s final tone and structure, and how do the critique prompts reinforce those constraints?
- What changes would be necessary to add a third platform (e.g., Instagram) while keeping parallel execution benefits?
Key Points
1. LangGraph can generate platform-specific social posts by branching into separate Twitter and LinkedIn writer agents that run in parallel.
2. An editor agent first rewrites technical input into a coherent base draft that both platform branches build on.
3. Critique agents provide targeted feedback (Twitter: clarity/hook/brevity and removing hype; LinkedIn: longer, benefit-focused framing) that writers use to produce improved drafts.
4. A supervisor node uses conditional edges to loop through critique-and-rewrite until each platform reaches the user-requested number of drafts.
5. Using temperature=0 and a fixed model configuration supports reproducible outputs across runs.
6. Parallel branch execution is a key reason the end-to-end generation can complete in roughly 15 seconds in the example.
7. Prompt specificity—especially platform-specific requirements—drives meaningful differences between the resulting Twitter and LinkedIn drafts.