Our methodology

How we rate the evidence

Every ingredient on Evidose goes through the same rigorous review process. Here's exactly how we assess the evidence and assign ratings — so you can judge our judgements.

The rating system
8–10
Strong evidence
Criteria
Multiple independent RCTs with consistent findings
At least one high-quality meta-analysis available
Meaningful effect sizes in human populations
Results replicated across different research groups
Low risk of bias in the majority of studies
Example

Creatine monohydrate for athletic performance. Hundreds of studies, consistent results, large effect sizes, no commercial bias in the evidence base.

4–7
Mixed evidence
Criteria
Some positive RCTs exist but findings are inconsistent
Effect sizes vary considerably between studies
Limited by small sample sizes or short durations
Some studies show benefit, others show none
Methodological quality varies across the evidence base
Example

Ashwagandha for stress reduction. Promising signals but studies are small, effect sizes vary, and long-term safety data is limited.

0–3
Weak evidence
Criteria
Evidence limited to animal or in vitro studies
No high-quality human RCTs available
Industry-funded studies dominating the evidence base
Theoretical mechanisms not demonstrated in humans
Claims extrapolated beyond what the evidence supports
Example

Collagen peptides for joint health. Collagen is largely broken down into amino acids during digestion, and targeted delivery to joints remains unproven in quality human trials.
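The three bands above amount to a simple lookup from a 0–10 score to a rating label. As a minimal sketch (the function name and structure are ours, not Evidose's actual tooling):

```python
def evidence_band(score: int) -> str:
    """Map a 0-10 evidence score to its rating band (illustrative sketch)."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score >= 8:
        return "Strong evidence"
    if score >= 4:
        return "Mixed evidence"
    return "Weak evidence"

# Scores chosen to match the worked examples in each band above
print(evidence_band(9))  # Strong evidence (e.g. creatine monohydrate)
print(evidence_band(5))  # Mixed evidence (e.g. ashwagandha)
print(evidence_band(2))  # Weak evidence (e.g. collagen peptides)
```

The band boundaries (8–10, 4–7, 0–3) come straight from the rating system above; the specific scores in the examples are illustrative.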

The review process
01
Literature search
We search PubMed and Cochrane for all available human studies on the ingredient, prioritising RCTs and meta-analyses.
02
Quality assessment
Each study is assessed for sample size, methodology, blinding, conflict of interest, and statistical validity.
03
Evidence synthesis
We weigh the totality of evidence — not just the positive studies. Inconsistent and null results count just as heavily as positive findings.
04
Plain English summary
Our neuroscientist writes a clear verdict that accurately represents what the evidence does and doesn't support.

How we weigh different study types

Not all research is equal. We apply a hierarchy of evidence when assessing ingredients, giving more weight to higher-quality study designs.

Meta-analysis / systematic review
Highest
Pools data from multiple studies. Most reliable when studies are high quality and consistent.
Randomised controlled trial (RCT)
High
Gold standard for establishing causation. Quality varies — we assess blinding, sample size, and duration.
Cohort / observational study
Moderate
Useful for identifying associations but cannot establish causation. Prone to confounding.
Animal study
Low
May suggest mechanisms but human translation is unreliable. Not sufficient to support health claims.
In vitro (cell study)
Very low
Early-stage research only. Results in cell cultures frequently do not translate to human outcomes.
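The hierarchy above can be read as an ordering over study designs. A minimal sketch, assuming hypothetical numeric weights of our own invention (Evidose does not publish such values):

```python
# Hierarchy of evidence, highest weight first. The numbers are
# illustrative only; what matters is the ordering.
STUDY_WEIGHTS = {
    "meta-analysis": 5,  # highest
    "rct": 4,            # high
    "cohort": 3,         # moderate
    "animal": 2,         # low
    "in_vitro": 1,       # very low
}

def rank_studies(studies):
    """Sort studies so higher-quality designs are considered first."""
    return sorted(studies, key=lambda s: STUDY_WEIGHTS[s["design"]], reverse=True)

studies = [
    {"id": "A", "design": "in_vitro"},
    {"id": "B", "design": "rct"},
    {"id": "C", "design": "meta-analysis"},
]
print([s["id"] for s in rank_studies(studies)])  # ['C', 'B', 'A']
```

In practice the weighting is a judgement call, not arithmetic: a weak meta-analysis of poor studies can rank below a single large, well-run RCT.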

Our limitations

We review the publicly available evidence as it stands today. Science evolves — ratings are updated as new research emerges, and every page shows when it was last reviewed.

We are not a substitute for personalised medical advice. Individual responses to supplements vary, and factors like genetics, existing conditions, and medications all affect outcomes.

If you spot an error, outdated study, or missing research, we want to know. Science is a conversation, not a monologue.