
Do qualitative thematic analysis & reporting with this ChatGPT-based tool (AILYZE)

4 min read

Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AILYZE is designed for qualitative thematic analysis, offering summaries, targeted Q&A, and theme generation rather than only general-purpose chat responses.

Briefing

A new AI tool built specifically for qualitative thematic analysis is gaining attention for turning uploaded interview transcripts into structured themes, code-book style outputs, and, crucially, evidence in the form of quotes. The platform, called AILYZE (a wordplay on “analyze” and “AI”), aims to reduce the workarounds researchers previously had to invent when using general-purpose chat systems not designed for qualitative coding.

The workflow starts with uploading a small set of documents. On the free “light” plan, each query accepts up to three files, which may require chunking for larger studies. The interface also includes guidance on what to ask and what to avoid; for example, it discourages prompting for numerical answers, reflecting that the tool is optimized for qualitative interpretation rather than quantitative tasks. Early use surfaces minor glitches, but the platform’s developer describes ongoing improvements.

To test the tool, the creator uses a hypothetical leadership study and generates three interview transcripts via ChatGPT to ensure the analysis begins without prior knowledge of the content. After upload, the tool offers multiple modes: file summarization, answering targeted questions, and running thematic analysis. Summaries can be returned as bullet points or essay-style text. Targeted questions can be asked, though some prompts may produce unhelpful or “silly” results when they don’t map cleanly onto the study’s context. For example, a question about whether participants are “happy” leads the tool to report that it cannot determine that attribute, though it still returns content-based summaries.

The core feature is thematic analysis. When requested in essay format, the tool generates a set of themes (nine in the example) and places supporting quotes under each theme. The output is designed to be more than a high-level narrative: it provides evidence tied to the underlying interviews, which the creator emphasizes as essential for qualitative rigor. Additional instructions can be layered on top of the initial analysis, allowing users to refine how themes are framed or expanded.

The tool also supports cross-document comparisons. In the example, the analysis is extended to compare challenges reported by male versus female leaders. To make that comparison, the tool needs participant attributes (e.g., marking which participants are male or female). The resulting comparison avoids overclaiming: it reports no clear indication that female leaders face more challenges and notes that both groups use similar strategies. When the user requests quotes for the comparison, the tool supplies document extracts, reinforcing the claim with direct textual evidence.

Despite the strong utility, the workflow is framed as assistive rather than fully automated. The creator stresses that AI output should not replace researcher judgment or the full analytic process; instead, it can accelerate coding, theme drafting, and evidence gathering—especially the quote-and-extract functionality that many general chat tools struggle to deliver reliably.

Cornell Notes

AILYZE is positioned as an AI tool built for qualitative thematic analysis, not general chat. After uploading up to three files per query on the free plan, users can request summaries, ask targeted questions, or generate thematic analysis outputs in essay or code-book style. The standout capability is producing themes paired with supporting quotes and document extracts, which helps maintain evidence-based qualitative reporting. The tool can also compare viewpoints across documents when participant attributes (e.g., gender labels) are provided, and it can return quotes for those comparisons. It’s presented as a helpful assistant for drafting themes and evidence, but not a substitute for full researcher analysis.

What constraints does AILYZE impose on uploads, and how might that affect real projects?

On the light (free) plan, each query allows up to three files. For studies with more interviews, that likely means running the analysis in batches (chunking) and then reconciling themes across runs, rather than uploading the entire dataset at once.
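The batching step described above can be sketched generically. The helper below is hypothetical and not part of AILYZE; it simply splits a folder of transcript files into groups of at most three, matching the free plan's per-query cap, so each batch can be submitted as one query:

```python
from pathlib import Path


def batch_transcripts(folder, batch_size=3):
    """Group transcript files into batches small enough for one query.

    The three-file limit reflects AILYZE's free-plan cap as described
    in the video; the batching logic itself is a generic sketch.
    """
    files = sorted(Path(folder).glob("*.txt"))
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]
```

With eight transcripts, this yields batches of three, three, and two files; the researcher would then need to reconcile the themes generated per batch into one combined set.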

Why does the tool’s guidance on prompts matter for qualitative work?

The interface includes “tips and tricks” about what to ask and what not to ask. For example, it discourages asking about numbers, aligning with the tool’s qualitative focus. When prompts don’t fit the qualitative coding frame—like asking whether participants are “happy”—the output can become difficult to interpret even if it still returns content summaries.

How does thematic analysis output look, and what makes it useful for reporting?

When thematic analysis is requested (in essay format in the example), the tool returns a set of themes (nine themes were generated). Under each theme, it provides quotes, and users can add further instructions. This quote-first structure is presented as a key advantage for qualitative reporting because it supplies evidence tied to the interviews.
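A theme-plus-quotes output of this kind maps naturally onto a small data structure. The sketch below is illustrative only (the video does not specify AILYZE's internal format beyond essay or code-book style); it shows one way themes paired with sourced quotes can be held and rendered for an evidence-based report:

```python
from dataclasses import dataclass, field


@dataclass
class Theme:
    """One generated theme with its supporting evidence."""
    name: str
    quotes: list = field(default_factory=list)  # (source_file, quote) pairs

    def to_markdown(self) -> str:
        # Render the theme heading followed by its quoted evidence,
        # each quote attributed to the interview it came from.
        lines = [f"## {self.name}"]
        for source, quote in self.quotes:
            lines.append(f'> "{quote}" ({source})')
        return "\n".join(lines)


theme = Theme("Work-life balance")
theme.quotes.append(("interview_1.txt", "I struggle to switch off after hours."))
print(theme.to_markdown())
```

Keeping the source file next to each quote is what preserves the audit trail back to the underlying interviews, which is the property the creator highlights as essential for qualitative rigor.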

How can the tool support comparisons across participant groups?

It can compare viewpoints across documents if the user specifies participant attributes. In the example, the user labels which participants are male or female, then asks whether female leaders experience more challenges than male leaders. The tool responds with a comparison that avoids overclaiming (no clear indication of more challenges for female leaders) and can generate quotes and document extracts to support the comparison.
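Supplying attributes amounts to attaching a small metadata table to the transcripts. This sketch uses hypothetical labels (not AILYZE's actual input format) to show the kind of mapping a researcher provides and how it partitions participants into the groups being compared:

```python
# Hypothetical participant metadata; the researcher must supply these
# labels before the tool can compare groups across documents.
attributes = {
    "participant_1": {"gender": "female"},
    "participant_2": {"gender": "male"},
    "participant_3": {"gender": "female"},
}


def group_by(attr_key, attributes):
    """Partition participant IDs by one attribute (e.g., gender)."""
    groups = {}
    for pid, attrs in attributes.items():
        groups.setdefault(attrs[attr_key], []).append(pid)
    return groups


group_by("gender", attributes)
# -> {"female": ["participant_1", "participant_3"], "male": ["participant_2"]}
```

Without such labels, a cross-group question is unanswerable from the transcripts alone, which is why the tool requires the attributes to be specified up front.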

What role should researcher judgment still play?

The workflow is framed as assistive. The tool can help draft themes, summaries, and evidence, but it shouldn’t be treated as the final authority for analysis or study conclusions. The creator emphasizes that qualitative research still requires researcher interpretation and rigor beyond AI-generated outputs.

Review Questions

  1. How does AILYZE handle evidence in thematic analysis, and why is that important for qualitative reporting?
  2. What steps are needed to run a cross-document comparison (e.g., by gender), and what happens when participant attributes aren’t specified?
  3. What limitations on file uploads could force a researcher to change their analysis workflow?

Key Points

  1. AILYZE is designed for qualitative thematic analysis, offering summaries, targeted Q&A, and theme generation rather than only general-purpose chat responses.

  2. On the light (free) plan, each query accepts up to three uploaded files, which may require chunking for larger studies.

  3. Prompt guidance discourages numerical questions, reflecting the tool’s qualitative orientation.

  4. Thematic analysis outputs include themes plus supporting quotes and document extracts, strengthening evidence-based reporting.

  5. Cross-document comparisons are possible when participant attributes (such as gender labels) are provided to the tool.

  6. AILYZE is best treated as an assistant for drafting themes and evidence, not a replacement for researcher-led analysis and interpretation.

Highlights

The tool’s most distinctive value is pairing generated themes with supporting quotes and document extracts, making qualitative reporting easier to evidence.
Cross-document comparisons (e.g., male vs female leaders) work when participant attributes are explicitly provided, and the output can still include quotes for transparency.
Even with strong outputs, mismatched prompts (like asking whether participants are “happy”) can produce unclear or unhelpful results, reinforcing the need for careful question design.
