How to Write a Literature Review in Minutes Using Jenni AI
Based on the Research and Analysis video on YouTube. If you find this content useful, support the original creators by watching, liking, and subscribing to their channel.
Briefing
A literature review’s hardest work isn’t reading papers; it’s turning scattered studies into a coherent story that links variables, explains mechanisms, and stays grounded in the right evidence. The walkthrough centers on using Jenni AI to speed up that process for a specific conceptual model: green HRM as the independent variable, affective commitment as the mediator, and employee performance as the dependent variable. The key payoff is a faster draft that already includes structured sections and citations, with a built-in way to keep the sources recent.
The process starts by creating a new Jenni document or note and entering a targeted search prompt: the impact of green human resource management on employee performance, with affective commitment as the mediating variable, prioritizing studies published after 2018. Jenni then offers an outline. Instead of using a generic “introduction/methods/results” structure, the user selects a heading set suited to a literature review. Additional settings include citation-style options and a citation recency filter; in this case, the user relies on the prompt’s “after 2018” instruction rather than enabling a separate “after 2020” filter.
From there, the draft is built heading by heading. Jenni suggests text as the cursor moves through each section, and the user can accept suggestions, reject them, or “try again” when the output doesn’t fit the framework. A practical troubleshooting moment arrives when the “mediating role of affective commitment” heading initially produces irrelevant content. The fix isn’t continued prompting but refining the heading itself: changing the broad heading into a more specific one (“mediating role of affective commitment between green HRM and employee performance”) steers Jenni toward more relevant literature and improves alignment with the mediation logic.
Once the literature review is complete, the citations are drawn automatically from studies published in 2018 or later, and the references list includes full details with DOI numbers. Any citation can be verified by clicking it to view the underlying article, reconfirm its details, and read related information. Citations are initially formatted in APA style, but the system allows switching to other citation styles.
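Outside Jenni itself, the two checks this step relies on (every reference carries a DOI, and nothing predates the post-2018 target) can also be spot-checked on an exported, APA-style references list. A minimal sketch in Python; the `audit_reference` helper and the sample entries are illustrative assumptions, not part of Jenni’s product:

```python
import re

# Minimal DOI pattern (prefix "10.", registrant code, suffix).
DOI_RE = re.compile(r"10\.\d{4,9}/\S+")
# APA-style publication year, e.g. "(2019)".
YEAR_RE = re.compile(r"\((\d{4})\)")

def audit_reference(entry: str, min_year: int = 2018):
    """Return (has_doi, year_ok) for one APA-style reference string."""
    has_doi = DOI_RE.search(entry) is not None
    match = YEAR_RE.search(entry)
    year_ok = bool(match) and int(match.group(1)) >= min_year
    return has_doi, year_ok

# Illustrative entries only.
refs = [
    "Kim, A. (2019). Green HRM and performance. J. Mgmt. https://doi.org/10.1234/abcd",
    "Lee, B. (2015). Older study without a DOI.",
]
# Entries failing either check get flagged for manual verification.
flagged = [r for r in refs if audit_reference(r) != (True, True)]
```

A heuristic like this only flags entries for manual review; clicking through to the cited article, as the walkthrough shows, remains the authoritative check.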
The walkthrough also highlights Jenni’s AI Chat as a secondary tool for deeper work: brainstorming, summarizing, paraphrasing, and producing critical analysis. For literature-review tasks, the workflow emphasizes context selection (using the current document), optionally restricting answers to a library of sources, and querying specific articles. To do that, references can be saved into the library, and PDFs can be fetched only if the articles are open access; otherwise, users must upload their own PDFs. After documents are loaded, targeted questions—such as a source’s relevance to the current topic—can generate more reliable, source-grounded insights. The end result is a draft that can be rephrased and pasted into a research article, with GenAI framed as a broader productivity tool beyond literature reviews.
Cornell Notes
The walkthrough shows how Jenni AI can generate a structured literature review quickly while keeping the draft aligned to a specific research model: green HRM → affective commitment (mediator) → employee performance. The workflow starts with a detailed prompt that prioritizes post-2018 studies, then selects literature-review headings and fills each section using Jenni’s text suggestions (accept, reject, or retry). When a heading produces irrelevant content, refining the heading to explicitly match the mediation path improves relevance. The draft includes citations with DOI-linked references and supports citation-style changes, plus verification by opening cited articles. Jenni AI Chat can further support source-grounded work by summarizing, paraphrasing, and answering relevance questions using a saved library of references (with PDF fetching limited to open-access papers).
How does the workflow ensure the literature review matches a mediation model rather than staying generic?
What settings help control which studies appear in the draft?
How is the literature review built in practice inside Jenni AI?
How can citations be checked and corrected after the draft is generated?
What role does Jenni AI Chat play, and how does it stay tied to specific sources?
What is the recommended strategy when Jenni repeatedly produces irrelevant text for a section?
Review Questions
- When would enabling a separate citation recency filter be redundant, and what does the walkthrough use instead?
- Why does refining a heading improve the relevance of AI-generated literature-review text? Give the example used.
- What limitations apply to fetching PDFs for AI Chat, and how can users work around them?
Key Points
1. Start a Jenni AI literature review with a prompt that specifies the full variable chain (independent variable, mediator, dependent variable) and sets a recency target for studies.
2. Choose literature-review-appropriate headings rather than generic section templates like introduction/methods/results.
3. Build each section by accepting Jenni’s suggested text, but use “try again” when output doesn’t match the framework.
4. If repeated attempts produce irrelevant content, refine the heading to explicitly reflect the intended relationship (e.g., mediation between two specific constructs).
5. Rely on the auto-generated references list for DOI-linked verification, and open cited articles to reconfirm details.
6. Use Jenni AI Chat for source-grounded tasks by selecting the current document as context and optionally restricting answers to a saved library of references.
7. Remember that PDF fetching in AI Chat is limited to open-access papers; otherwise, upload PDFs manually.