LESSON 27 - RELIABILITY: METHODS OF DETERMINING RELIABILITY / DEPENDABILITY IN QUALITATIVE RESEARCH
Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Briefing
Qualitative reliability is reframed as **dependability** because identical results across different researchers and times can’t be guaranteed in an interpretive setting. Instead of asking whether the same findings will emerge under the same conditions, dependability asks whether the research process is transparent and auditable—so outsiders can judge whether the conclusions genuinely follow from the raw data. In practice, dependability measures how well qualitative findings remain consistent with the data collected in the field, and whether other researchers could reach similar interpretations if they reviewed the same evidence.
This shift matters because qualitative research treats the **researcher as the main research instrument**, meaning bias, decisions, and interpretation are inseparable from data collection and analysis. That’s why reliability can’t be handled with statistical repeatability in the way it often is in quantitative studies. Dependability becomes the mechanism for demonstrating that nothing essential was missed, that procedures were reasonable, and that the report isn’t misleading relative to what participants said, what was observed, and what documents contained.
The lesson lays out **five practical methods** for establishing dependability in qualitative research. The central method is an **audit trail**, which lets an external auditor trace the study’s procedures, verify that the data exist as reported, and evaluate whether analytic decisions match the collected evidence. Audit trail work includes checking how data were gathered, how analysis was conducted, and how findings were presented, with the “trail” functioning as a record that supports the “audit.”
A second method is ensuring **transcripts are error-free**. Because qualitative data often come from interviews and observations, transcription must preserve participants' meaning; otherwise, the analysis may reflect the researcher's distortions rather than the participants' accounts. Third is **inter-rater reliability** (also called **inter-rater agreement**), where an independent researcher cross-checks coding decisions, testing whether multiple coders categorize themes from the same text in comparable ways.
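The lesson describes this cross-check qualitatively, but agreement between coders can also be quantified. The sketch below is an illustrative aside, not part of the lesson: it computes simple percent agreement and Cohen's kappa (agreement corrected for chance) for two hypothetical coders; the code labels and segment counts are invented.

```python
# Illustrative only: quantifying inter-rater agreement between two coders.
# The coders, code labels, and data below are invented for this sketch.
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of segments that both coders labeled identically."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance (Cohen's kappa)."""
    n = len(codes_a)
    po = percent_agreement(codes_a, codes_b)  # observed agreement
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Expected chance agreement from each coder's label frequencies
    pe = sum((freq_a[c] / n) * (freq_b[c] / n)
             for c in set(codes_a) | set(codes_b))
    return (po - pe) / (1 - pe)

# Two coders independently label the same ten interview segments.
coder1 = ["stress", "coping", "stress", "support", "coping",
          "stress", "support", "coping", "stress", "support"]
coder2 = ["stress", "coping", "coping", "support", "coping",
          "stress", "support", "stress", "stress", "support"]

print(round(percent_agreement(coder1, coder2), 2))  # 0.8
print(round(cohens_kappa(coder1, coder2), 2))       # 0.7
```

Kappa is lower than raw agreement because some matches would occur by chance alone; in practice, low kappa signals that the code book needs clearer definitions or the coders need a coordination session.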
When more than one researcher is involved, the lesson adds two related safeguards: **coordination among researchers** during coding so theme development stays aligned, and **cross-checking the codes**, where researchers compare independently derived results to see whether their coding converges.
Coding itself is treated as a core reliability concern. Qualitative coding means organizing and labeling data to identify patterns and relationships, often using participants’ own wording through **in vivo** codes. To support consistent coding, researchers need a **code book** and must ensure that all interviews, observations, and documents are fully transcribed and recorded.
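As a minimal sketch of the ideas above (the code labels, definitions, and excerpt are invented, not from the lesson): a code book maps each code to its definition, an in vivo code adds a participant's own phrase as a label, and a coding pass tags transcript segments with matching codes.

```python
# Hypothetical sketch of a code book and an in vivo coding pass.
# All codes, definitions, and the excerpt are invented for illustration.

codebook = {
    "burnout": "Participant describes exhaustion or loss of motivation",
    "peer support": "Participant credits colleagues with helping them cope",
}

def add_in_vivo_code(codebook, participant_phrase, definition):
    """An in vivo code reuses the participant's own words as the label."""
    codebook[participant_phrase] = definition
    return codebook

segment = "Honestly, I was just running on empty by the end of term."
add_in_vivo_code(codebook, "running on empty",
                 "Participant's own phrase for severe exhaustion")

def apply_codes(segment, codebook):
    """Tag a segment with every code whose label appears in its text."""
    return [code for code in codebook if code in segment.lower()]

print(apply_codes(segment, codebook))  # ['running on empty']
```

Real qualitative analysis software applies codes by hand or by richer matching, but even this toy version shows why a shared code book matters: without agreed labels and definitions, two coders cannot produce comparable results.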
Finally, the lesson highlights threats to validity and reliability—framed as sources of error. It cites Brink (1993) and lists four main threats: the **researcher** (behavior, attitude, and bias), **participants** (artificial responses, withholding or distortion), the **social context** of data collection, and the **methods of data collection and analysis**. Inadequate **triangulation** can lead to thin data, weaker codes, and less dependable conclusions. The takeaway is that dependability is built through multiple, overlapping strategies embedded in the research proposal, not through a single check at the end.
Cornell Notes
Qualitative reliability is treated as **dependability** because exact repeatability across researchers isn't realistic in interpretive work. Dependability focuses on whether findings are consistent with the raw data and whether outsiders can audit the process to judge if decisions were reasonable. The lesson emphasizes an **audit trail** as the main method, supported by error-free **transcripts**, **inter-rater agreement** (independent cross-checking of codes), coordination among multiple coders, and **cross-checking** of independently derived codes. Coding quality is reinforced through a **code book** and careful transcription, including **in vivo** coding that uses participants' own language. Threats to dependability include researcher bias, participant distortion, social context effects, and weak triangulation, which can reduce data richness and coding accuracy.
Why does qualitative research shift from “reliability” to “dependability”?
What is an audit trail, and why is it the primary method for dependability?
How do transcript accuracy and coding checks strengthen dependability?
What roles do multiple coders play in maintaining dependability?
What is in vivo coding, and how does a code book support reliability?
What threats to validity and reliability does Brink (1993) identify, and how do they affect dependability?
Review Questions
- How does dependability differ from reliability in qualitative research, and what question does it ultimately try to answer?
- Which steps in the audit trail process allow an external auditor to judge whether findings match raw data?
- How do inter-rater agreement, coordination, and cross-checking collectively reduce coding inconsistency in qualitative thematic analysis?
Key Points
- 1
Qualitative reliability is reframed as **dependability**, emphasizing auditability and consistency with raw data rather than exact repeatability.
- 2
An **audit trail** is the core dependability method, enabling external scrutiny of data existence, analytic decisions, and how findings are presented.
- 3
**Error-free transcripts** protect participant meaning by preventing transcription from altering what was said in interviews or observations.
- 4
**Inter-rater agreement** strengthens dependability by using independent coders to cross-check coding and theme decisions from the same text.
- 5
When multiple researchers code, **coordination** and **cross-checking the codes** help align theme development and compare independently derived results.
- 6
Dependability is supported by strong coding practices: use a **code book** and apply **in vivo** coding grounded in participants’ language.
- 7
Threats to dependability include researcher bias, participant distortion, social context effects, and weak **triangulation**, which can reduce data richness and coding quality.