Build an AI Social Media Content Generator in 20 Minutes | AI Agents with LangGraph and Llama 3.1

Venelin Valkov · 5 min read

Based on Venelin Valkov's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

LangGraph can generate platform-specific social posts by branching into separate Twitter and LinkedIn writer agents that run in parallel.

Briefing

A LangGraph-based agent loop can turn technical input into platform-ready social posts for both Twitter and LinkedIn—while iterating through multiple drafts using critique feedback—without forcing a single, one-shot rewrite. The core payoff is speed and control: separate “writer” branches generate Twitter and LinkedIn drafts in parallel, then a supervisor checks whether each platform’s output has reached the requested number of drafts, triggering critique-and-rewrite cycles until the quota is met.

The system starts with an editor agent that rewrites raw technical text into a coherent base draft tailored for a specified target audience. From there, the graph branches into two specialized writers: one dedicated to Twitter and another dedicated to LinkedIn. Each writer produces only the text needed for its platform, with prompts that enforce different formatting and style constraints. For Twitter, the prompt calls for a short, natural, conversational paragraph optimized for virality, with requirements like clarity, a hook, brevity, and a call to action—plus a specific preference to avoid emojis and hashtags. For LinkedIn, the prompt shifts toward longer-form structure, reflecting the platform’s tolerance for more detailed explanations.
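As a rough sketch of how the shared graph state and the editor node might look (all key and function names here are assumptions, not the video's exact code), each platform keeps its own draft list so the parallel branches never write to the same key:

```python
import operator
from typing import Annotated, TypedDict

class PostState(TypedDict):
    input_text: str        # raw technical text supplied by the user
    target_audience: str   # audience the editor tailors the base draft for
    n_drafts: int          # user-requested number of drafts per platform
    editor_text: str       # coherent base draft produced by the editor agent
    # Each branch appends to its own list; the operator.add reducer merges
    # updates, so the parallel Twitter and LinkedIn branches never collide.
    tw_drafts: Annotated[list[str], operator.add]
    li_drafts: Annotated[list[str], operator.add]
    tw_critique: str       # latest Twitter critique feedback
    li_critique: str       # latest LinkedIn critique feedback

def editor(state: PostState) -> dict:
    """Rewrite raw input into a coherent base draft for the target audience.

    The `llm` handle is created in the model-setup sketch further down.
    """
    prompt = (
        f"Rewrite the following technical text as clear, coherent prose "
        f"for {state['target_audience']}:\n\n{state['input_text']}"
    )
    return {"editor_text": llm.invoke(prompt).content}
```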

After initial drafts, critique agents evaluate the latest draft against the original editor output and the target audience. The Twitter critique focuses on tightening clarity and hooks, improving brevity, and removing “hype” or storytelling that doesn’t serve the key benefit; it also pushes for a more specific call to action. The LinkedIn critique performs a similar role but with longer, more detailed analysis—aiming to reshape benefits and framing to better match the intended audience and professional tone.
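A critique node in this style would see both the editor's base text and the newest draft; here is a sketch with the same assumed names (the prompt wording is illustrative, not the video's):

```python
def tw_critique(state: PostState) -> dict:
    """Critique the latest Twitter draft against the editor's base text."""
    latest = state["tw_drafts"][-1]
    prompt = (
        "You review tweet drafts. Tighten the hook and clarity, improve "
        "brevity, cut hype or storytelling that does not serve the key "
        "benefit, and push for a more specific call to action.\n\n"
        f"Target audience: {state['target_audience']}\n\n"
        f"Original text:\n{state['editor_text']}\n\n"
        f"Latest draft:\n{latest}"
    )
    return {"tw_critique": llm.invoke(prompt).content}
```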

A supervisor (implemented via conditional edges in LangGraph) governs the iteration. It compares how many drafts have been produced so far against the user-requested number of drafts. If the count is below the target, the supervisor routes execution to the appropriate critique nodes (Twitter critique and LinkedIn critique), then loops back so writers can incorporate feedback into the next draft. Once both platforms meet the draft count, the graph ends and returns the generated posts.
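In LangGraph terms, that supervisor can be a routing function attached with add_conditional_edges rather than a standalone node; a minimal sketch under the same assumed names:

```python
from langgraph.graph import END

def route_twitter(state: PostState) -> str:
    """End the Twitter branch once the draft quota is met; otherwise critique."""
    if len(state["tw_drafts"]) >= state["n_drafts"]:
        return END
    return "tw_critique"

# Attached when wiring the graph (see the full wiring sketch below):
# builder.add_conditional_edges("tw_writer", route_twitter, ["tw_critique", END])
```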

Implementation details emphasize practical reproducibility and dependency setup: the model runs with temperature set to 0, and the Groq API key is read from within the Google Colab notebook. The workflow uses LangChain with Groq's Llama 3.1 70B model, plus the latest LangGraph release aligned with LangChain 0.3. The transcript also notes that parallel branch execution is a major speed advantage over purely sequential pipelines.
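Under those constraints, the model setup is likely close to the following; the Groq model id and the secret name are assumptions:

```python
from google.colab import userdata   # Colab's secrets store
from langchain_groq import ChatGroq

llm = ChatGroq(
    model="llama-3.1-70b-versatile",       # Groq's Llama 3.1 70B id (assumed)
    temperature=0,                         # deterministic output, as in the video
    api_key=userdata.get("GROQ_API_KEY"),  # secret name is an assumption
)
```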

In a live example, the editor produces an “exciting news” style introduction about Mistral Small (rendered as “Mr small” in the transcript), then three distinct Twitter drafts emerge, each taking a different angle on cost, performance, and function calling. The LinkedIn drafts are longer and more varied in how they frame benefits and deployment tradeoffs. The final result is a set of draft options plus critique-driven improvements, delivered in roughly 15 seconds for the full graph execution.

Cornell Notes

The workflow uses LangGraph to generate social media posts from technical text by splitting work into parallel branches for Twitter and LinkedIn. An editor agent first rewrites the input into a coherent base draft for a chosen target audience. Platform-specific writer agents then produce drafts; critique agents review the latest draft and feed feedback back into the writers. A supervisor node uses conditional logic to keep looping through critique-and-rewrite until the requested number of drafts per platform is reached. This matters because it combines style control (different prompts per platform) with iterative quality improvements, while parallel execution keeps runtime low.
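Putting the pieces together, the wiring could look roughly like this; node names are assumptions, the writer and critique functions follow the sketches elsewhere in this summary, and the LinkedIn functions mirror their Twitter counterparts:

```python
from langgraph.graph import StateGraph, START, END

def route_linkedin(state: PostState) -> str:
    # Mirrors route_twitter for the LinkedIn branch.
    return END if len(state["li_drafts"]) >= state["n_drafts"] else "li_critique"

builder = StateGraph(PostState)
builder.add_node("editor", editor)
builder.add_node("tw_writer", tw_writer)
builder.add_node("li_writer", li_writer)
builder.add_node("tw_critique", tw_critique)
builder.add_node("li_critique", li_critique)

builder.add_edge(START, "editor")
# Fan-out: both writer branches run in parallel after the editor step.
builder.add_edge("editor", "tw_writer")
builder.add_edge("editor", "li_writer")

# Supervisor logic: keep looping through critique until each quota is met.
builder.add_conditional_edges("tw_writer", route_twitter, ["tw_critique", END])
builder.add_conditional_edges("li_writer", route_linkedin, ["li_critique", END])
builder.add_edge("tw_critique", "tw_writer")
builder.add_edge("li_critique", "li_writer")

graph = builder.compile()
```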

How does the system ensure Twitter and LinkedIn outputs match platform expectations rather than using one generic rewrite?

Twitter and LinkedIn have separate writer and critique prompts. The Twitter writer is instructed to produce a short, natural, conversational paragraph optimized for virality, with explicit constraints like no emojis or hashtags (a stated preference) and a structure that emphasizes clarity, a hook, brevity, and a call to action. The LinkedIn writer uses a different prompt because LinkedIn supports longer-form content; the transcript notes that LinkedIn prompts are “vastly different” from Twitter prompts to reflect that difference. Critique agents also differ: Twitter critique targets hype/storytelling removal and specificity in benefits and calls to action, while LinkedIn critique provides longer, more detailed analysis suited to professional tone.
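A Twitter writer prompt along those lines might be sketched as follows; the wording is illustrative, not the video's exact prompt:

```python
from langchain_core.prompts import ChatPromptTemplate

tw_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You write tweets as one short, natural, conversational paragraph "
     "optimized for virality. Requirements: a clear hook, brevity, and a "
     "specific call to action. Do not use emojis or hashtags."),
    ("human",
     "Target audience: {target_audience}\n\n"
     "Source text:\n{editor_text}\n\n"
     "Previous draft (may be empty):\n{latest_draft}\n\n"
     "Critique feedback (may be empty):\n{critique}"),
])
```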

What role does the editor agent play before the platform-specific writers start?

The editor agent rewrites the raw technical input into a “nice and coherent text” suitable for further improvement. In the graph state, the editor output becomes the base text that both the Twitter and LinkedIn writers build on. Critique agents also reference the original editor output alongside the latest draft, so feedback is grounded in the initial rewrite rather than drifting away from the source framing.

Why does the supervisor loop matter, and how does it decide when to stop?

The supervisor controls iteration using conditional edges. It checks whether the number of drafts already produced for both platforms is greater than or equal to the user-specified draft count. If not, it routes execution to the critique nodes (Twitter critique and LinkedIn critique), then loops back so writers can incorporate feedback into the next drafts. If the draft counts meet the target, the graph ends and returns the results. This prevents endless generation and guarantees a fixed number of options per platform.

How does parallel execution improve performance in this design?

LangGraph allows the Twitter and LinkedIn branches to run concurrently after the editor step. The transcript highlights that these branches execute in parallel, which speeds up the workflow compared with sequential execution. In the example run, the complete graph execution takes about 15 seconds, attributed in part to this parallelism.
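A quick way to reproduce that end-to-end timing on your own run (input values here are placeholders):

```python
import time

raw_text = "..."  # paste the technical announcement text here

start = time.perf_counter()
result = graph.invoke({
    "input_text": raw_text,
    "target_audience": "ML engineers",  # placeholder audience
    "n_drafts": 3,
    "tw_drafts": [],
    "li_drafts": [],
})
print(f"Finished in {time.perf_counter() - start:.1f}s")  # ~15s in the video's run
for draft in result["tw_drafts"]:
    print(draft, "\n---")
```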

What kinds of feedback do the critique agents provide, and how is that feedback used?

Twitter critique feedback focuses on clarity and hook strength, brevity, and a more specific call to action; it also calls out removing hype and unnecessary storytelling. LinkedIn critique provides a larger, more detailed analysis and adjusts framing around cost effectiveness, performance, deployment flexibility, and other benefits. Writers then take the latest draft plus the critique feedback to generate the next draft version, appending it to the drafts list in the graph state.
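In code, the writer step could look like this sketch, reusing the state, prompt, and llm handle from the earlier sketches:

```python
def tw_writer(state: PostState) -> dict:
    """Generate the next Twitter draft, folding in any critique feedback."""
    latest = state["tw_drafts"][-1] if state["tw_drafts"] else ""
    messages = tw_prompt.format_messages(
        target_audience=state["target_audience"],
        editor_text=state["editor_text"],
        latest_draft=latest,
        critique=state.get("tw_critique", ""),
    )
    draft = llm.invoke(messages).content
    # The operator.add reducer on tw_drafts appends the new draft to the list.
    return {"tw_drafts": [draft]}
```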

What model and generation settings are used in the example run?

The setup uses Groq via LangChain with the Llama 3.1 70B model. Generation uses temperature set to 0 for reproducibility. The transcript also mentions the Groq API key is stored in a Google Colab notebook, and that the workflow is designed to fit within the free tier for the tutorial.

Review Questions

  1. How would you modify the supervisor logic if you wanted Twitter to generate more drafts than LinkedIn?
  2. Which prompt constraints most directly shape Twitter’s final tone and structure, and how do the critique prompts reinforce those constraints?
  3. What changes would be necessary to add a third platform (e.g., Instagram) while keeping parallel execution benefits?

Key Points

  1. LangGraph can generate platform-specific social posts by branching into separate Twitter and LinkedIn writer agents that run in parallel.
  2. An editor agent first rewrites technical input into a coherent base draft that both platform branches build on.
  3. Critique agents provide targeted feedback (Twitter: clarity/hook/brevity and removing hype; LinkedIn: longer, benefit-focused framing) that writers use to produce improved drafts.
  4. A supervisor node uses conditional edges to loop through critique-and-rewrite until each platform reaches the user-requested number of drafts.
  5. Using temperature=0 and a fixed model configuration supports reproducible outputs across runs.
  6. Parallel branch execution is a key reason the end-to-end generation can complete in roughly 15 seconds in the example.
  7. Prompt specificity—especially platform-specific requirements—drives meaningful differences between the resulting Twitter and LinkedIn drafts.

Highlights

Twitter and LinkedIn are generated by separate writer branches with different prompt constraints, then refined through critique loops.
A supervisor stops generation only after both platforms reach the requested draft count, using conditional routing in LangGraph.
Parallel execution of the Twitter and LinkedIn branches is emphasized as a major speed advantage over sequential pipelines.
In the example, three distinct Twitter drafts emerge from the same technical input, while LinkedIn drafts are longer and more varied in benefit framing.

Topics

  • LangGraph Agents
  • Social Media Generation
  • Twitter vs LinkedIn Prompts
  • Critique-and-Refine Loops
  • Groq Llama 3.1
