Preprint / Version 2

Low prevalence of a priori power analyses in motor behavior research

Authors

  • Brad McKay
  • Abbey Corson
  • Mary-Anne Vinh
  • Gianna Jeyarajan
  • Chitrini Tandon
  • Hugh Brooks
  • Julie Hubley
  • Michael J. Carter, McMaster University

DOI:

https://doi.org/10.51224/SRXIV.175

Keywords:

Metascience, Sample size planning, Positivity rates, Effect size

Abstract

A priori power analyses can be used to ensure studies are unlikely to miss interesting effects. Recent metascience has suggested that kinesiology research may be underpowered and selectively reported. Here, we examined whether power analyses are currently being leveraged to ensure informative studies in motor behavior research. We reviewed every article published in the Journal of Motor Learning and Development, the Journal of Motor Behavior, and Human Movement Science between January 2019 and June 2021. Our results revealed that power analyses were reported in 13% of all studies (k = 636) that tested a hypothesis. Yet no study in the sample targeted the smallest effect size of interest. Most studies with a power analysis instead relied on estimates from previous studies, pilot studies, or benchmarks to determine the effect size of interest. Studies in this sample without a power analysis reported support for their main hypothesis 85% of the time, while studies with a power analysis found support 76% of the time. The median sample sizes were n = 17.5 without a power analysis and n = 16 with a power analysis, suggesting the typical study design in our sample was underpowered for all but the largest plausible effect size. At present, power analyses are not being used to optimize the informativeness of motor behavior studies, a trend that likely extends to other kinesiology subdisciplines. Adoption of this simple and widely recommended practice may greatly enhance the credibility of the motor behavior literature and kinesiology research in general.
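
As a rough illustration of the sample-size point, the following sketch in base R (the software cited for the analyses) assumes a two-sided, two-sample t-test with n = 17 participants per group and alpha = .05; the specific effect sizes (d = 0.2, 0.5, 0.8) and the smallest effect size of interest (d = 0.4) are illustrative assumptions, not values taken from the reviewed studies.

  # Power of a two-sided, two-sample t-test with n = 17 per group (alpha = .05)
  # for small, medium, and large standardized effects (Cohen's d).
  for (d in c(0.2, 0.5, 0.8)) {
    p <- power.t.test(n = 17, delta = d, sd = 1, sig.level = 0.05,
                      type = "two.sample")$power
    cat(sprintf("d = %.1f -> power = %.2f\n", d, p))
  }

  # Per-group sample size needed to detect an assumed smallest effect size of
  # interest of d = 0.4 with 80% power.
  power.t.test(delta = 0.4, sd = 1, sig.level = 0.05, power = 0.80,
               type = "two.sample")$n

Under these assumptions, 17 participants per group yields roughly 8%, 29%, and 62% power for d = 0.2, 0.5, and 0.8, respectively, whereas detecting d = 0.4 with 80% power would require about 100 participants per group.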

Posted

2022-07-12 — Updated on 2022-07-13
