Do qualitative thematic analysis & reporting with this ChatGPT-based tool (AILYZE)
Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A new AI tool built specifically for qualitative thematic analysis is gaining attention for turning uploaded interview transcripts into structured themes, code-book-style outputs, and, crucially, evidence in the form of quotes. The platform, called AILYZE (a wordplay on "analyze" and "AI"), aims to reduce the workarounds researchers previously had to invent when using general-purpose chat systems not designed for qualitative coding.
The workflow starts with uploading a small set of documents. On the free "light" plan, each query accepts up to three files, which may require chunking for larger studies. The interface also includes guidance on what to ask and what to avoid; for example, it discourages prompting for numerical answers, reflecting that the tool is optimized for qualitative interpretation rather than quantitative tasks. Early use comes with minor glitches, but the person behind the platform describes ongoing improvements.
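The three-files-per-query cap means a larger study has to be split into batches before querying. A minimal sketch of that chunking step, assuming transcripts are plain-text files in one folder (the helper name and layout are illustrative, not part of AILYZE itself):

```python
from pathlib import Path


def batch_transcripts(folder, batch_size=3):
    """Split a folder of transcript files into batches that fit the
    free plan's three-files-per-query limit."""
    files = sorted(Path(folder).glob("*.txt"))  # stable order aids reproducibility
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]
```

Each batch would then be uploaded as one query, with the researcher reconciling themes across batches afterwards.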
To test the tool, the creator uses a hypothetical leadership study and generates three interview transcripts via ChatGPT to ensure the analysis begins without prior knowledge of the content. After upload, the tool offers multiple modes: file summarization, answering targeted questions, and running thematic analysis. Summaries can be returned as bullet points or essay-style text. Targeted questions can be asked, though some prompts may produce unhelpful or "silly" results when they don't map cleanly onto the study's context. For example, a question about whether participants are "happy" leads the tool to report that it cannot determine this attribute, though it still returns content-based summaries.
The core feature is thematic analysis. When requested in essay format, the tool generates a set of themes (nine in the example) and places supporting quotes under each theme. The output is designed to be more than a high-level narrative: it provides evidence tied to the underlying interviews, which the creator emphasizes as essential for qualitative rigor. Additional instructions can be layered on top of the initial analysis, allowing users to refine how themes are framed or expanded.
The tool also supports cross-document comparisons. In the example, the analysis is extended to compare challenges reported by male versus female leaders. To make that comparison, the tool needs participant attributes (e.g., marking which participants are male or female). The resulting comparison avoids overclaiming: it reports no clear indication that female leaders face more challenges and notes that both groups use similar strategies. When the user requests quotes for the comparison, the tool supplies document extracts, reinforcing the claim with direct textual evidence.
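The comparison step depends on each transcript carrying a participant attribute (such as a gender label) alongside its quotes. A minimal sketch of how quotes might be bucketed by attribute for an evidence-backed group comparison, assuming hypothetical records (the data structure and function are illustrative, not AILYZE's internals):

```python
# Hypothetical records: each transcript tagged with a participant attribute.
transcripts = [
    {"participant": "P1", "gender": "female",
     "quotes": ["Balancing budgets was the hardest part."]},
    {"participant": "P2", "gender": "male",
     "quotes": ["Staff turnover kept me up at night."]},
    {"participant": "P3", "gender": "female",
     "quotes": ["I leaned on mentors to navigate conflict."]},
]


def quotes_by_group(records, attribute):
    """Bucket supporting quotes by a participant attribute so a
    cross-group comparison can cite direct textual evidence."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[attribute], []).extend(
            (rec["participant"], quote) for quote in rec["quotes"])
    return groups
```

With quotes grouped this way, any claim about one group can be traced back to named participants and their exact words, which is the kind of evidence trail the creator emphasizes.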
Despite the strong utility, the workflow is framed as assistive rather than fully automated. The creator stresses that AI output should not replace researcher judgment or the full analytic process; instead, it can accelerate coding, theme drafting, and evidence gathering—especially the quote-and-extract functionality that many general chat tools struggle to deliver reliably.
Cornell Notes
AILYZE is positioned as an AI tool built for qualitative thematic analysis, not general chat. After uploading up to three files per query on the free plan, users can request summaries, ask targeted questions, or generate thematic analysis outputs in essay or code-book style. The standout capability is producing themes paired with supporting quotes and document extracts, which helps maintain evidence-based qualitative reporting. The tool can also compare viewpoints across documents when participant attributes (e.g., gender labels) are provided, and it can return quotes for those comparisons. It's presented as a helpful assistant for drafting themes and evidence, but not a substitute for full researcher analysis.
What constraints does AILYZE impose on uploads, and how might that affect real projects?
Why does the tool’s guidance on prompts matter for qualitative work?
How does thematic analysis output look, and what makes it useful for reporting?
How can the tool support comparisons across participant groups?
What role should researcher judgment still play?
Review Questions
- How does AILYZE handle evidence in thematic analysis, and why is that important for qualitative reporting?
- What steps are needed to run a cross-document comparison (e.g., by gender), and what happens when participant attributes aren’t specified?
- What limitations on file uploads could force a researcher to change their analysis workflow?
Key Points
1. AILYZE is designed for qualitative thematic analysis, offering summaries, targeted Q&A, and theme generation rather than only general-purpose chat responses.
2. On the light (free) plan, each query accepts up to three uploaded files, which may require chunking for larger studies.
3. Prompt guidance discourages numerical questions, reflecting the tool's qualitative orientation.
4. Thematic analysis outputs include themes plus supporting quotes and document extracts, strengthening evidence-based reporting.
5. Cross-document comparisons are possible when participant attributes (such as gender labels) are provided to the tool.
6. AILYZE is best treated as an assistant for drafting themes and evidence, not a replacement for researcher-led analysis and interpretation.