Headlines often promise miracle cures and terrifying threats, but how much of that coverage is backed by real science? In this interactive workshop, we invite you to go beyond the clickbait and step into the shoes of a scientist. We will explore the current state of public science communication, demystifying concepts such as open access, paywalls and the movement for open data.
Participants will then put these skills to the test in a 'Journal Club' simulation. You will be assigned a realistic (fictional) research case study to evaluate. Your challenge is to assess the study objectively—looking past the hype to rate its true scientific rigour—and present it to your peers. The twist? You win not by having the 'best' study but by how accurately your personal assessment matches the independent ratings of your peers. Join us to sharpen your critical thinking toolkit and learn how to access knowledge without the gatekeepers.
Minimum Age: 12+ recommended.
Duration: 60 Minutes
Format: Small groups (Journal Clubs) of 6–10 participants.
Materials: Laptops (optional), 'Study Cards' (assigned randomly), 'Rating Slips' (one per participant per study, so a generous stack per table), 'The Rubric' (one per person), one 'Ballot Box' per table (can be a simple envelope).
The session begins with a ten-minute briefing to equip participants with the necessary conceptual tools. The facilitator introduces the core distinction between public press releases and original scientific papers, highlights barriers such as paywalls, explains the significance of Open Data (the availability of a study's raw data), and walks through the 1-to-7 rating scale, ensuring everyone understands that 'science' encompasses the methodology and data availability, not just the polished PDF document.
Following the briefing, the facilitator distributes a unique 'Study Card' to each participant, ensuring a random assignment of fictional case studies. The room then settles into a ten-minute period of silent reading, allowing individuals to digest their assigned material. Crucially, each participant must then perform an objective self-assessment: rating their own study on the 1-to-7 scale based solely on the evidence provided in the text, rigorously ignoring the sensationalism of the headline. The suggested evaluation guide ('The Rubric') is below:
| Category | Low Score (1–2) | Medium Score (3–5) | High Score (6–7) |
|---|---|---|---|
| 1. Access | Impossible: Behind a paywall ($30+), no link provided, or "Contact author" with no reply. | Difficult: Requires a university login, searching specific databases, or requesting via forums. | Open: One-click access to the full PDF. Free for everyone (Open Access). |
| 2. Headline | Clickbait: Uses words like "Miracle," "Proven," "Cure." Scarier or better than the actual data. | Modest: Describes the finding but might leave out limitations to sound more interesting. | Accurate: Boring but true. Describes exactly what happened (e.g., "Correlation observed in mice"). |
| 3. Does it Make Sense? (Theory) | Nonsense: Ignores all previous science. Invents new laws of physics/biology without proof. | Standard: Repeats what we already know without adding much new value. | Robust: Fills a clear gap in knowledge. Uses past research responsibly to build a new argument. |
| 4. Quality of the Test (Methods & Data) | Flawed & Closed: Tiny sample size. No control group. Data is secret/hidden. | Acceptable: Decent sample size. Standard methods. Data available upon request. | Rigorous & Open: Preregistration, large sample size, gold-standard controls, and fully available materials and data. |
| 5. Verdict (Conclusion) | Overblown: Claims a fact based on a guess. Confuses correlation with causation. | Logical: Conclusion mostly fits the results but ignores some alternative explanations. | Nuanced: Very careful. Admits what they don't know. Claims only what the data proves. |
| 6. Source Reputation | Suspicious: Marketing blogs, "Predatory" journals (pay-to-publish), or corporate white papers. | Unverified / Variable: Preprints (not yet reviewed), or mid-tier journals that sometimes favour hype. | Trusted: Top-tier peer-reviewed journals, reputable independent research institutes, or government bodies. |
Once they have decided, participants write their Study ID and their confidential rating on a slip of paper and deposit it into the central Ballot Box (or envelope) on their table. It is vital to emphasise that once a slip is deposited, the self-rating cannot be changed.
For the next thirty minutes, each table transforms into a peer-review committee. Participants take turns presenting their assigned study to the group, with a strict time limit of three minutes per person. The facilitator must enforce a key rule regarding these presentations: they must remain neutral summaries of the facts, covering the method, data access and conclusion, without the presenter revealing their personal opinion or their secret rating. After hearing each pitch, the listening peers immediately write the Study ID and their own independent 1-to-7 rating on a blank slip. These slips are folded and handed directly to the presenter, who collects their 'peer reviews' in a face-down pile without looking at them until the end of the round.
The workshop concludes with the 'Reveal' phase, where the game is scored. First, participants open their collected pile of peer voting slips and calculate the average rating their table gave them, rounding to the nearest whole number. Once this is done, the facilitator opens the central Ballot Box and reads aloud the original self-rating for each Study ID. Each participant then compares their initial self-rating against the rounded peer average to calculate their 'Gap': the absolute difference between the two numbers. The winners are those who achieved a Gap of zero or one, demonstrating that their objective assessment of the science closely matched the consensus of their peers, regardless of whether the study itself was high-quality or low-quality.
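For tables using the optional laptops, the tally is simple enough to script. The sketch below is a minimal Python illustration of the Gap calculation, not part of the workshop materials: the `gap` function name and the example slip values are invented, and ties in the peer average are rounded up, an assumption since the text only specifies rounding to the nearest whole number.

```python
from math import floor

def gap(self_rating: int, peer_ratings: list[int]) -> int:
    """Gap = |self-rating - rounded peer average|.

    The peer average is rounded to the nearest whole number;
    ties (e.g. 4.5) round up, which is an assumption here.
    """
    average = floor(sum(peer_ratings) / len(peer_ratings) + 0.5)
    return abs(self_rating - average)

# Hypothetical example: a participant self-rated their study 5;
# four peers rated it 4, 6, 5 and 6 (average 5.25, rounds to 5).
g = gap(5, [4, 6, 5, 6])
print(f"Gap = {g}; winner: {g <= 1}")  # Gap = 0; winner: True
```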