
why I took down my climate science video

Sabine Hossenfelder · 5 min read

Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The World Weather Attribution press release claimed “high confidence” that human-caused warming increased the likelihood of January Los Angeles wildfires, including a “35% more probable” figure tied to a fire-weather index.

Briefing

A climate attribution study used to justify strong claims about human-caused warming and the January Los Angeles wildfires was pulled into a public dispute over whether its key result was actually statistically significant—and the fight spilled into media coverage, social posts, and a deleted climate-science video.

The flashpoint centered on a World Weather Attribution press release claiming that “human-induced warming from burning fossil fuels made the peak January [fire weather index…] 35% more probable,” alongside “high confidence” that human-caused climate change increased the likelihood of the “devastating LA wildfires.” In a strongly worded response, Sabine Hossenfelder argued the study’s own analysis did not support those conclusions because the reported probability ratio was statistically insignificant—meaning the data were compatible with climate change having no effect on the January event.

Her initial concern wasn’t only about what the press release emphasized. She also criticized a broader pattern: climate scientists allegedly know when attribution work is shaky but remain silent because speaking up would be politically inconvenient. That context fueled her frustration that the media amplified the headline claim without scrutinizing the underlying statistical result.

The video then disappeared after about a day. The immediate reason was procedural: roughly 12 hours after posting, she received claims that she had misread the study’s result table. With no quick clarification available while she was away from internet access, she set the video to private to avoid spreading what she feared could be misinformation on a policy-relevant topic. Wildfire risk affects real decisions in Los Angeles, including how urgently people plan for future events.

When she returned and rechecked the paper, the dispute narrowed to the study’s statistical presentation. She walked through how extreme-event attribution typically works: climate models are run twice—once with current global warming and once without—and the probabilities of an event matching the target (here, the LA wildfires) are compared via a ratio. She highlighted two methodological weaknesses often associated with this approach: many models may not reproduce extreme events well in the first place, and the outcome can depend heavily on how the “extreme event” is defined.
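
As a rough, self-contained illustration of that pipeline, the sketch below builds two synthetic ensembles of a fire-weather index and compares tail probabilities. Everything in it is invented for illustration (the distributions, the size of the warming shift, the thresholds); it is not the study’s data or method. The final loop also makes the second caveat concrete: the same modest shift yields different ratios depending on where the “extreme” threshold is drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative synthetic ensembles of a fire-weather index (FWI).
# The distributions, the warming shift, and the thresholds are invented;
# they are not taken from the study.
counterfactual = rng.normal(loc=20.0, scale=5.0, size=100_000)  # "no warming" runs
factual = rng.normal(loc=21.0, scale=5.0, size=100_000)         # "current warming" runs

def probability_ratio(threshold: float) -> float:
    """P(FWI >= threshold | warming) / P(FWI >= threshold | no warming)."""
    p_factual = np.mean(factual >= threshold)
    p_counterfactual = np.mean(counterfactual >= threshold)
    return p_factual / p_counterfactual

# The answer depends on how "extreme" the event definition is: the same
# modest shift makes rarer events disproportionately more likely.
for threshold in (28.0, 31.0, 34.0):
    print(f"FWI >= {threshold}: probability ratio = {probability_ratio(threshold):.2f}")
```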

Her reading of the results table focused on the mean probability ratio and the 95% confidence interval. The mean was slightly above 1 (consistent with the “35% more probable” framing), but the confidence interval appeared to include 1, which would make the result not statistically significant under common conventions. She also argued the paper’s figure caption and color coding were confusing—blue versus orange, and multiple shades of orange—raising the possibility of legend or hue mix-ups. Further complications came from the fact that the paper did not clearly state what threshold it used for “statistical significance,” and she questioned whether the confidence interval was being applied in the way she assumed.

After several days, an author replied directly that the changes were “unsurprisingly, not statistically significant,” and that the correct color label was “light orange.” With that confirmation, the video was restored.

Still, the episode left her with a broader critique: the study was described in the press release without mentioning it was a “rapid attribution” study and without clarifying that it was not peer reviewed. She also emphasized that her argument does not deny that climate change can increase wildfire likelihood in some regions; it targets the specific claim about the LA January event. She concluded by urging caution about attribution claims, while acknowledging that extreme-event attribution methods vary and not all work in the field is necessarily unreliable.

Cornell Notes

A World Weather Attribution press release claimed human-caused warming made January Los Angeles wildfires more likely, citing a 35% increase in a fire-weather index probability and “high confidence.” In response, Sabine Hossenfelder argued the study’s own table showed the key probability ratio was not statistically significant because the 95% confidence interval included 1, making “no effect” compatible with the results. After she temporarily took her video private due to concerns she may have misread the table, an author confirmed the changes were “not statistically significant” and clarified the correct color label. The dispute also raised issues about how the study was communicated, especially the failure to note that it was a “rapid attribution” study (not peer reviewed) and the confusing figure legend and color coding.

What specific claim about the LA wildfires triggered the dispute?

The World Weather Attribution press release said human-induced warming from burning fossil fuels made the peak January fire-weather index 35% more probable, and it asserted “high confidence” that human-caused climate change increased the likelihood of the devastating LA wildfires. The controversy focused on whether the underlying statistical result actually supported those strong statements.

How does extreme-event attribution typically estimate whether warming changed event likelihood?

It runs climate models twice: once without global warming and once with global warming at the current level. It then counts or estimates how often simulated events similar to the target occur in each setup and compares probabilities using a ratio. A ratio above 1 suggests the event became more likely under warming; below 1 suggests it became less likely.
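
In symbols, with the event defined as the fire-weather index (FWI) exceeding some threshold $x$, this is the generic probability ratio; a textbook formulation of the method as described above, not necessarily the study’s exact estimator:

$$\mathrm{PR}(x) = \frac{P_{\text{factual}}(\mathrm{FWI} \ge x)}{P_{\text{counterfactual}}(\mathrm{FWI} \ge x)}$$

On this reading, the press release’s “35% more probable” corresponds to a point estimate of $\mathrm{PR} \approx 1.35$.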

Why did the 95% confidence interval matter for the “statistical significance” question?

Hossenfelder’s reading was that the mean probability ratio was slightly above 1 (matching the “35%” framing), but the 95% confidence interval included 1. Under common practice, that means the data are compatible with climate change having no effect, so the result would be statistically insignificant. She also noted the paper did not clearly define what threshold it used for “statistical significance.”
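
To make that logic concrete, here is a small check using invented counts, chosen only so the point ratio lands near 1.35; they are not the study’s numbers, and the normal-approximation interval is a standard textbook construction rather than whatever method the paper used.

```python
import numpy as np

# Invented counts, chosen so the point ratio lands near 1.35;
# these are NOT the study's numbers.
n_factual, k_factual = 1_000, 27   # event in 27 of 1,000 "current warming" runs
n_counter, k_counter = 1_000, 20   # event in 20 of 1,000 "no warming" runs

p1, p0 = k_factual / n_factual, k_counter / n_counter
pr = p1 / p0  # point estimate of the probability ratio (1.35 here)

# Textbook 95% confidence interval for a risk ratio, built on the log scale;
# a standard construction, not necessarily the study's method.
se_log = np.sqrt((1 - p1) / k_factual + (1 - p0) / k_counter)
ci_lo, ci_hi = np.exp(np.log(pr) + np.array([-1.96, 1.96]) * se_log)

print(f"PR = {pr:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
# If the interval contains 1, the same data are compatible with "no effect",
# i.e. the ratio is not statistically significant at the 5% level.
print("interval contains 1:", ci_lo <= 1.0 <= ci_hi)
```

With these made-up counts the point estimate reads as “35% more probable,” yet the interval runs from well below 1 to well above it, which is the pattern she described in the study’s table.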

What role did the table’s color coding and caption play in the confusion?

She argued the figure caption and legend appeared inconsistent with the colors used in the table (e.g., what was labeled as blue versus orange, and which shade corresponded to significant vs non-significant). She checked other tables for additional orange shades and found different variants, which made her suspect legend/hue mix-ups. Later, an author confirmed the correct label was “light orange,” consistent with non-significance.

What procedural reason led her to take the video private?

About 12 hours after posting, she received claims that she misread the result table and that the result might be statistically significant. With no prompt clarification available while she was offline, she set the video to private to avoid spreading potentially incorrect information on a policy-relevant wildfire-risk topic.

What communication gap did she say remained even after the statistical clarification?

She said the press release did not mention the study was a “rapid attribution” study and that it was not peer reviewed. She also emphasized that her critique targets the specific LA January claim, not the broader possibility that climate change can increase wildfire likelihood in other regions.

Review Questions

  1. In extreme-event attribution, what does a probability ratio greater than 1 imply, and how is that ratio computed from model runs?
  2. Why does a confidence interval that includes 1 typically undermine a claim of a statistically significant effect?
  3. What specific omissions or ambiguities in press communication did Hossenfelder say made the public-facing claim more misleading?

Key Points

  1. The World Weather Attribution press release claimed “high confidence” that human-caused warming increased the likelihood of January Los Angeles wildfires, including a “35% more probable” figure tied to a fire-weather index.

  2. Hossenfelder argued the study’s own table indicated the probability ratio was not statistically significant because the 95% confidence interval included 1, leaving “no effect” compatible with the results.

  3. A key driver of the dispute was how “statistical significance” was presented (without a clear threshold definition) and how the table’s legend/color coding appeared confusing.

  4. She temporarily took her video private after receiving claims she may have misread the table, citing the risk of misinformation on a policy-relevant wildfire topic.

  5. An author later replied that the changes were “not statistically significant,” confirming the “light orange” label and restoring the video.

  6. Even with the statistical clarification, she criticized the press release for not stating the work was a “rapid attribution” study and not peer reviewed, and she distinguished LA-specific conclusions from broader wildfire risk effects elsewhere.

Highlights

The headline “35% more probable” framing depended on a mean probability ratio above 1, but the 95% confidence interval was read as including 1—undercutting statistical significance under common conventions.
The public dispute hinged on table interpretation: confidence intervals, undefined significance thresholds, and potentially inconsistent color/legend coding.
The video was taken down briefly not because the statistical question was settled, but because new claims suggested a possible misread while she lacked timely clarification.
After days, an author confirmed the changes were “not statistically significant,” aligning with the “light orange” label.
The press release’s omission of “rapid attribution” and non-peer-reviewed status became part of the broader critique of how results were communicated.
