A new project aims to tackle the “replication crisis” by shifting incentives among scientists.
It started in psychology, but now findings in many scientific fields are proving impossible to replicate. Here’s what researchers are doing to restore science’s reputation.
The true causes and implications of the replication problem in science.

Brian Nosek’s efforts to confirm research discoveries by collaborating with labs throughout the U.S. and around the world sparked a combustible, and much-needed, conversation in the scientific community.
Bad papers are still published. But some other things might be getting better.
AI hype has researchers in fields from medicine to political science rushing to use techniques that they don’t always understand—causing a wave of spurious results.

The replication crisis devastated psychology. This group is looking to rebuild it.

The Psychological Science Accelerator could be the future of the field around the globe — if they can sustain it.
To find statistical significance, one need merely look sufficiently hard.
The science replication crisis might be worse than we thought: new research reveals that studies with replicated results tend to be cited less often than studies which have failed to replicate.
Widespread failure to reproduce results still plagues scientific research.
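A minimal simulation of what “look sufficiently hard” means in practice: run enough comparisons on pure noise and some will cross p < 0.05 by chance. The group sizes and the figure of 100 tests are illustrative assumptions, not taken from any study cited here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 100 comparisons, each on pure noise: there is no true effect anywhere.
n_tests, n_per_group = 100, 30
false_hits = 0
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)  # "control" group
    b = rng.normal(size=n_per_group)  # "treatment" group, drawn from the same distribution
    _, p = stats.ttest_ind(a, b)
    false_hits += p < 0.05

print(f"{false_hits} of {n_tests} noise-only comparisons were 'significant' at p < 0.05")
# Roughly 5 spurious hits are expected, so a sufficiently persistent search
# over outcomes, subgroups, or covariates will always turn up "findings".
```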

Replication of studies in the field of psychology continues to be difficult. In part, this may be caused by insufficiently standardized analysis methods that may be subject to state-dependent variations in performance. In this work, we show how to easily adapt the two-layer feedforward neural network architecture provided by Huang [1] to a behavioral classification problem as well as a physiological classification problem that would not be solvable in a standardized way using classical regression or “simple rule” approaches. In addition, we provide an example of a new research paradigm along with this standardized analysis method. Both the paradigm and the analysis method can be adjusted as needed or applied to other paradigms and research questions. Hence, we show that two-layer feedforward neural networks can be used to increase standardization as well as replicability, and we illustrate this with examples based on a virtual T-maze paradigm [2–5] that includes free virtual movement via joystick and advanced physiological signal processing.
What scientists learn from failed replications: how to do better science.
As the cultural edifices of western civilization are torn down one by one, there’s one institution whose prestige and authority continues to grow – science. Respect for scientists has, in many quarters, been transformed into a form of worship. And questioning their authority is akin to heresy. Yet.
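A minimal sketch of the kind of standardized two-layer feedforward classifier described above, here built on synthetic stand-in features with scikit-learn’s MLPClassifier. The feature dimensions, hidden-layer size, and labels are illustrative assumptions, not the Huang [1] architecture or the actual T-maze data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for behavioral/physiological features (e.g., joystick
# trajectory summaries or physiological signal statistics per trial).
n_trials, n_features = 400, 12
X = rng.normal(size=(n_trials, n_features))
# A latent rule linking two features to a binary behavioral label, plus noise.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_trials) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer plus the output layer: a two-layer feedforward network.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
# Because the architecture and training procedure are fixed, the same analysis
# can be re-run unchanged on new data or adapted to other paradigms.
```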

Mind Blown (University of Nevada, Reno): Rising evidence shows that many psychology studies don’t stand up to added scrutiny. The problem has many scientists worried – but it could also encourage them to up their game.
Science journals may be lowering their standards to publish studies with eye-grabbing — but probably incorrect — results.

Cognitive Distortions

How the culture wars came for Wikipedia’s articles about human intelligence.
Researchers cite studies that can’t be replicated weirdly often.
Karl Andersson’s ‘appallingly bad’ paper has exposed the insanity of ethnography’s turn towards introspection and other postmodern research methods that place little value on objectivity, says William Matthews.
Researchers point out how psychology often manipulates studies to support theories rather than revising theories in light of new results.
Small MRI studies inflate effect sizes, leaving the brain imaging research literature cluttered with false positives.
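A small simulation of how that last point arises: when underpowered studies only get reported if they reach significance, the published effect sizes are systematically inflated. The sample size, true effect, and significance filter below are illustrative assumptions, not parameters from any MRI study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_d = 0.2   # small true effect (Cohen's d)
n_small = 15   # participants per group in a "small" study
n_sims = 5000

published = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_small)
    b = rng.normal(true_d, 1.0, n_small)
    _, p = stats.ttest_ind(b, a)
    if p < 0.05:                               # only "significant" results are reported
        published.append(b.mean() - a.mean())  # observed effect (pooled SD is ~1)

print(f"true effect: {true_d}")
print(f"mean reported effect: {np.mean(published):.2f} "
      f"({len(published)} of {n_sims} simulated studies reached significance)")
# Filtering underpowered studies on significance inflates the effects that survive.
```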

A recent study co-led by Wharton’s Gideon Nave attempted to replicate social science experiments published in top journals, with mixed results.
Largest replication study to date casts doubt on many published positive results.
Principal Component Analysis (PCA) is a multivariate analysis that reduces the complexity of datasets while preserving data covariance. The outcome can be visualized on colorful scatterplots, ideally with only a minimal loss of information. PCA applications, implemented in well-cited packages like EIGENSOFT and PLINK, are extensively used as the foremost analyses in population genetics and related fields (e.g., animal, plant, or medical genetics). PCA outcomes are used to shape study design, identify and characterize individuals and populations, and draw historical and ethnobiological conclusions on origins, evolution, dispersion, and relatedness. The replicability crisis in science has prompted us to evaluate whether PCA results are reliable, robust, and replicable. We analyzed twelve common test cases using an intuitive color-based model alongside human population data. We demonstrate that PCA results can be artifacts of the data and can easily be manipulated to generate desired outcomes. PCA adjustment also yielded unfavorable outcomes in association studies. PCA results may not be as reliable, robust, or replicable as the field assumes. Our findings raise concerns about the validity of results reported in the population genetics literature and in related fields that place a disproportionate reliance on PCA outcomes and the insights derived from them. We conclude that PCA may have a biasing role in genetic investigations and that 32,000–216,000 genetic studies should be reevaluated. An alternative mixed-admixture population genetic model is discussed.
The study of associations between inter-individual differences in brain structure and behaviour has a long history in psychology and neuroscience. Many associations between psychometric data, particularly intelligence and personality measures, and local variations in brain structure have been reported. While the impact of such reported associations often goes beyond scientific communities, resonating in the public mind, their replicability is rarely evidenced. Previously, we have shown that associations between psychometric measures and estimates of grey matter volume (GMV) result in rarely replicated findings across large samples of healthy adults. However, the question remains whether these observations are at least partly linked to the multidetermined nature of variations in GMV, particularly within samples with a wide age range. Therefore, here we extended those evaluations and empirically investigated the replicability of associations between a broad range of psychometric variables and cortical thickness in a large cohort of healthy young adults. In line with our observations with GMV, our current analyses revealed a low likelihood of significant associations and their rare replication across independent samples. We discuss the implications of these findings within the context of accumulating evidence of the generally poor replicability of structural brain-behaviour associations, and more broadly of the replication crisis.
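A minimal sketch of the sampling sensitivity the PCA critique describes: fitting PCA after oversampling one group shifts the leading axes, so the resulting scatterplots can reflect study design rather than structure in the data. The synthetic populations, group sizes, and use of scikit-learn are illustrative assumptions, not the EIGENSOFT or PLINK pipelines.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Three synthetic "populations" as clusters in a 10-dimensional feature space.
centers = rng.normal(scale=3.0, size=(3, 10))

def sample(pop, n):
    return centers[pop] + rng.normal(size=(n, 10))

balanced = np.vstack([sample(p, 50) for p in range(3)])
skewed = np.vstack([sample(0, 300), sample(1, 50), sample(2, 50)])

for name, X in [("balanced sampling", balanced), ("group 0 oversampled", skewed)]:
    pca = PCA(n_components=2).fit(X)
    print(name)
    print("  PC1 direction (first 4 dims):", np.round(pca.components_[0][:4], 2))
    print("  explained variance ratio:    ", np.round(pca.explained_variance_ratio_, 2))
# The leading components, and any scatterplot or downstream inference built on
# them, change with who is included and in what proportions.
```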

From anti-vaxxers to Flat Earthers, the public’s (and scholars’) perception of science shifted sometime between 1990 and 2010, writes Michael Gordin. There’s a general sense that it’s bad for society—which may be right. But studies offer surprisingly few easy answers.
Science is mired in a “replication” crisis. Fixing it will not be easy.
Lots to praise and to ponder in this excellent piece by Michael Nielsen and Kanjun Qiu on improving the discovery ecosystem with metascience. The piece contains some pop-ideas to stimulate thinking, such as fund-by-variance: instead of funding grants that get the highest average score from reviewers, a funder should use the variance (or kurtosis or some similar measurement)…
Philip Kitcher wonders: what is “sloppy science,” and how should we characterize its rigorous counterpart?
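A toy illustration of the fund-by-variance idea mentioned in the Nielsen and Qiu piece: ranking proposals by the variance of reviewer scores instead of the mean surfaces the polarizing, high-disagreement ideas that a consensus-driven process would screen out. The proposals and scores below are invented for illustration; this is not their implementation.

```python
import statistics

# Hypothetical reviewer scores (1-10) for four grant proposals.
proposals = {
    "A: incremental, well-liked": [7, 7, 8, 7, 7],
    "B: polarizing, high-risk": [2, 9, 3, 10, 2],
    "C: solid but unexciting": [6, 6, 6, 6, 6],
    "D: mixed reception": [4, 7, 5, 8, 5],
}

by_mean = sorted(proposals, key=lambda p: statistics.mean(proposals[p]), reverse=True)
by_variance = sorted(proposals, key=lambda p: statistics.variance(proposals[p]), reverse=True)

print("ranked by mean score:    ", by_mean)      # rewards consensus favourites
print("ranked by score variance:", by_variance)  # rewards proposals reviewers disagree on
```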