How to read nutrition research with confidence
As a dietetic learner, you may sometimes feel overwhelmed or confused when interpreting and appraising journal articles or research findings. Here are a few pointers to help you see the wood for the trees when it comes to nutrition and dietetic research – and to help you reach those higher grade boundaries in your assignments.
It is important for students to be up to date with the latest research and evidence in the world of nutrition and dietetics. Even after you qualify, as an evidence-informed practitioner, a crucial part of your job will be to critique and appraise research to ensure you are practising in line with the best evidence.
What is the study design?
Understanding the difference between types of study design can help you to identify possible strengths and limitations of the research.[1] A grasp of the terminology will help you to assess whether the right study design was chosen to answer the research question and to spot the predictable vulnerabilities of each design.
Study designs can be appraised using the Hierarchy of Evidence Pyramid Model, which illustrates that the potential for bias decreases as study designs move towards the top of the pyramid.[2]
What do you know about the participants?
Understanding who took part in the study can help you to see whether the research is generalisable to the population or group you are interested in. For example:
Were participants recruited from one site or multiple?
Was the research carried out in a country with a significantly different system to the NHS or one with different health priorities?
How many participants were recruited? The sample size will help you to determine how reliable and valid the results are – in other words, are there enough participants in the study to detect a true effect, if one exists, or to limit the risk of false negatives?
The authors may have carried out a power calculation.[3] This uses previous similar studies or a pilot study to work out how many participants are needed to limit the risk of a type II error (reporting that there is no difference between the two groups when there really is one), given an accepted risk of a type I error (reporting a difference between the groups when there isn’t really one).
Finally, the amount of attrition across the study – this is the number of participants who drop out – allows you to assess attrition bias and appraise internal validity.
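To make the power calculation above more concrete, here is a minimal sketch of the standard normal-approximation formula for comparing two means. The function name and the example figures are invented for illustration only – they are not drawn from any study, and real papers may use more sophisticated methods.

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate participants needed per group to compare two means
    (two-sided test), using the normal approximation.

    delta : smallest between-group difference worth detecting
    sd    : expected standard deviation of the outcome
    alpha : accepted risk of a type I error (false positive)
    power : 1 minus the accepted risk of a type II error (false negative)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical example: to detect a 0.5 kg difference in weight change
# when the outcome's standard deviation is 1.5 kg, at 80% power:
print(sample_size_per_group(delta=0.5, sd=1.5))  # 142 per group
```

Notice that halving the smallest difference worth detecting quadruples the required sample size – one reason underpowered studies are so common.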
What outcomes were measured and how?
Consider what is being measured, and whether the appropriate tools or adequately trained people were employed to collect the data. For example, if dietary intake was assessed, what tools were used to do this? All methods of dietary assessment have biases, so it is important that you consider this when you are interpreting the results.
Do the authors tell you who collected the data? As a dietetic learner, you will understand the complexities of taking an accurate diet history. Were the researchers in the study adequately trained to collect accurate and reliable dietary information from participants?
Explore the results with curiosity
Are the results clearly presented and transparent? Investigate the numbers that are reported and consider what the results mean in practice. For example, a result may prove to be statistically significant, but this does not necessarily mean that the result is relevant in clinical practice.[4]
Investigate confidence intervals, if they are reported. Wide confidence intervals indicate a lack of precision in the results. Is any data missing, and if so, was this acknowledged and appropriately handled by the authors in the data analysis?
Finally, do the results adequately answer the research question?
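To see how sample size drives the precision of a confidence interval, here is a small illustration using the normal approximation. The function and the numbers are hypothetical, chosen only to show why a wide interval should make you cautious.

```python
from statistics import NormalDist

def mean_ci(mean, sd, n, level=0.95):
    """Normal-approximation confidence interval for a sample mean."""
    z = NormalDist().inv_cdf((1 + level) / 2)  # about 1.96 for a 95% CI
    half_width = z * sd / n ** 0.5
    return (mean - half_width, mean + half_width)

# Hypothetical: a mean improvement of 2.0 units, standard deviation 4.0.
# With 9 participants the interval is wide and crosses zero, so "no
# effect" cannot be ruled out; with 100 it is narrow and excludes zero.
print(mean_ci(2.0, 4.0, n=9))    # roughly (-0.61, 4.61)
print(mean_ci(2.0, 4.0, n=100))  # roughly (1.22, 2.78)
```

Even the narrow interval only tells you the result is statistically precise – whether a 2.0-unit improvement matters in clinical practice is a separate judgement.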
Look for confounders
Confounders are factors that can distort the relationship between exposure and outcome, making an association look stronger or weaker than it really is, or even reverse it. If confounding factors have not been accounted for, the interpretation, credibility and real-world application of the study results can be impacted.
Common confounding factors include demographics, socioeconomic status, clinical factors (such as certain diagnoses or BMI), behavioural factors and environmental or service-related factors (such as variations in clinical practice).
When reading a research paper, ask yourself a few questions to identify possible sources of confounding. Did the researchers use randomisation, matching or statistical adjustment to account for confounding factors? Or are there factors you think they have overlooked or failed to account for?
Compare the findings with existing literature
Do the authors discuss their findings in relation to the broader evidence base? Is there fair balance in the discussion around contradictory findings from the literature and consideration of the quality or limitations of the evidence base? Do the authors consolidate this information and make appropriate conclusions or recommendations for future investigations?
Check funding and affiliations
Funding and affiliations don’t automatically invalidate results, but it is important to check for and acknowledge any conflicts of interest as a potential source of bias in research studies.
The bottom line
In nutrition and dietetic research, it is impossible to eliminate all sources of bias or limitations. A good study will openly highlight its limitations and the ways in which the findings might have been influenced. Common limitations to look out for include a poorly suited study design, a small sample size, problems with recruitment, high attrition, biased data collection methods and short follow-up periods.
Remember too that research should not be interpreted in a silo. Always look at the bigger picture. Compare findings with previous evidence and published guidelines and remember to keep the individual, groups and communities you are working with at the centre of your decision making.

Lynsey Richards is a Registered Dietitian and course leader for a postgraduate dietetics course. Experience includes home enteral tube feeding, nutrition support, renal, diabetes and research in dietetic practice.
References
1. Chidambaram AG, Josephson M. Clinical research study designs: The essentials. Pediatric Investigation. 2019; 3: 245–252. https://doi.org/10.1002/ped4.12166
2. Vatkar A, Kale S, Shyam A, Srivastava S. Understanding the Levels of Evidence in Medical Research. Journal of Orthopaedic Case Reports. 2025; 15(5): 6–9. https://doi.org/10.13107/jocr.2025.v15.i05.5534
3. Jones SR, Carley S, Harrison M. An introduction to power and sample size estimation. Emergency Medicine Journal. 2003; 20: 453–458. https://doi.org/10.1136/emj.20.5.453
4. Sharma H. Statistical significance or clinical significance? A researcher's dilemma for appropriate interpretation of research results. Saudi Journal of Anaesthesia. 2021; 15: 431–434. https://doi.org/10.4103/sja.sja_158_21
