The Actual History
Peer review—the evaluation of scientific work by one or more people with competencies similar to those of the work's producers—evolved gradually over several centuries, rather than emerging as a sudden innovation. The roots of modern peer review trace back to the 17th century with the founding of the Royal Society of London in 1660 and the French Académie des Sciences in 1666. These institutions began collecting and evaluating scientific claims, though the process bore little resemblance to today's formalized system.
The first scientific journal, Philosophical Transactions of the Royal Society, was established in 1665 under the editorship of Henry Oldenburg. While Oldenburg consulted with colleagues about submissions, this consultation was informal and primarily editorial rather than evaluative. Throughout the 18th century, journal editors typically made publication decisions independently or with limited consultation with trusted associates.
The term "peer review" itself didn't appear until the 20th century. Even prestigious journals like Nature (founded 1869) and Science (founded 1880) operated without formal external review processes for decades. Editors would often make publication decisions based on their own expertise or informal consultations.
The transformation toward modern peer review accelerated after World War II, driven by several factors: the exponential growth in research output, increasing specialization within scientific disciplines, and the influx of government funding that required accountability mechanisms. The U.S. National Science Foundation, established in 1950, helped institutionalize peer review as a means of allocating research funds.
By the 1960s and 1970s, external peer review became standardized across most scientific journals. The process typically involved editors sending manuscripts to independent experts who would evaluate the work's methodology, results, and conclusions, then recommend acceptance, revision, or rejection. This system gained further prominence as academic tenure and promotion increasingly depended on publishing in peer-reviewed journals.
The late 20th century saw peer review firmly entrenched as the primary quality control mechanism in scientific publishing. Despite persistent criticisms—including concerns about reviewer bias, publication delays, failure to detect fraud, and resistance to innovative ideas—the system has remained the cornerstone of scientific credibility.
The digital revolution of the late 20th and early 21st centuries introduced new variations like open peer review (where reviewer identities are disclosed), post-publication peer review (evaluation after publication), and preprint servers (allowing circulation of papers before peer review). These innovations have supplemented rather than replaced traditional peer review.
Today, peer review stands as the globally accepted standard for validating scientific knowledge, with an estimated 3 million scholarly articles passing through this process annually. Despite its flaws and ongoing evolution, it remains the primary mechanism by which scientific communities maintain quality control and establish consensus, influencing everything from medical treatments to public policy.
The Point of Divergence
What if formal peer review had never been established as the standard for scientific publishing? In this alternate timeline, we explore a scenario where the post-World War II scientific community took a fundamentally different approach to validating and disseminating research findings.
The divergence might have occurred in several plausible ways:
First, the post-war scientific establishment might have maintained the older editor-driven model rather than transitioning to external review. In our timeline, influential figures like Vannevar Bush advocated for systems that would ensure accountability for the tremendous government research funding flowing after WWII. In this alternate timeline, perhaps Bush and his contemporaries placed greater emphasis on institutional reputation and editorial expertise rather than external validation.
Alternatively, the divergence might have occurred when early attempts at formal peer review revealed its limitations. In our timeline, scientists acknowledged these problems but generally accepted them as necessary trade-offs. In this alternate history, early experiences with reviewer bias, publication delays, and administrative burdens might have prompted a more decisive rejection of the formalized system.
A third possibility involves the economics of academic publishing. In our timeline, commercial publishers embraced peer review as it professionalized their journals and justified subscription costs. In this alternate history, different business models might have emerged—perhaps one where rapid publication and broad dissemination were prioritized over pre-publication filtering.
Most plausibly, this divergence would have occurred during the critical period of the 1950s-1960s when scientific institutions were being formalized in the Cold War era. Without the institutionalization of peer review during this period, scientific evaluation would have followed a dramatically different trajectory, with cascade effects throughout the entire knowledge ecosystem.
Instead of developing the system of blind pre-publication review by external experts that we know today, scientific communities might have established alternative validation mechanisms. These could have included post-publication evaluation systems, reputation networks, institutional certification, or entirely different approaches to establishing scientific consensus and reliability.
Immediate Aftermath
Changes in Scientific Publishing (1950s-1970s)
In the absence of formalized peer review, scientific publishing would have evolved along markedly different lines. Editorial discretion would have remained the primary filter for publication, with journal editors wielding significant power over what entered the scientific record. Major journals like Nature, Science, and The Lancet would have continued their tradition of editor-driven selection, relying on their institutional knowledge and networks.
Without external review requirements slowing the publication process, the time from submission to publication would have remained dramatically shorter—often weeks rather than months or years. This accelerated timeline would have particularly benefited fast-moving fields like physics and molecular biology during their mid-century revolutionary periods.
Publishers would have developed alternative quality signals in the absence of peer review. Journal prestige would still matter, but would be based more on the reputation of the editorial board and the institutional affiliations of authors rather than the rigor of review processes. Scientific societies might have taken a more active role in certifying quality through post-publication endorsements or commentary.
Institutional Adaptations (1960s-1980s)
Universities and research institutions would have developed alternative methods for evaluating scientific productivity and quality when making hiring and promotion decisions. Without the simple metric of "publications in peer-reviewed journals," more holistic evaluation systems might have emerged:
- Departmental Reading Committees: University departments might have formed internal committees to evaluate the quality of faculty publications regardless of where they were published.
- Citation Prominence: Citation metrics would have gained prominence earlier as a post-publication quality measure, with services like the Science Citation Index (launched in 1964 in our timeline) becoming central to evaluation almost immediately rather than over subsequent decades.
- Institutional Endorsement Systems: Major research universities might have created formal endorsement systems, where published work could receive the institution's "seal of approval" after internal review.
Government funding agencies like the National Science Foundation and the National Institutes of Health would have developed their own quality control mechanisms. Without established peer review processes, these agencies might have relied more heavily on panel-based evaluations of research programs rather than individual project proposals, or created their own networks of trusted reviewers independent of journals.
Early Cases of Scientific Fraud and Controversy
The absence of pre-publication peer review would have led to more questionable research entering the published literature initially, followed by more public disputes and retractions. Several high-profile cases of scientific error or fraud might have emerged earlier:
- The Piltdown Man hoax (exposed in 1953 in our timeline) might have prompted earlier discussions about verification standards.
- Cases like Cyril Burt's fabricated data on inherited intelligence might have been published more widely before being challenged.
- Without peer review filtering, claims like cold fusion or "water memory" might have achieved greater initial visibility in mainstream scientific publications.
These controversies would have accelerated the development of alternative validation mechanisms. Scientific societies would likely have established post-publication review committees that could issue formal opinions on published works, creating a system of retrospective quality control.
Rise of Informal Networks and "Invisible Colleges"
Without formalized peer review creating standardized quality thresholds across disciplines, scientific communities would have relied more heavily on informal networks to filter and evaluate research:
- Preprint Circulation: Physical preprints would have remained crucial longer, with researchers sending manuscripts directly to colleagues for feedback before wider circulation.
- Conferences as Validation: Scientific conferences would have taken on greater importance as forums where research could be publicly scrutinized before broader dissemination.
- Departmental Technical Report Series: University department report series would have maintained higher prestige, becoming important venues for establishing priority and receiving community feedback.
These "invisible colleges" of informal evaluation would have created more varied standards across different scientific communities, with some fields developing more stringent post-publication critique cultures than others.
Long-term Impact
The Evolution of Scientific Communication (1980s-2000s)
Digital Revolution Without Peer Review Constraints
As digital technologies emerged in the 1980s and 1990s, scientific communication would have undergone a dramatically different transformation without the peer review framework. Rather than digitizing existing peer-reviewed journals, the digital revolution might have enabled entirely new validation systems:
- Early Adoption of Open Publishing: Without peer review as a bottleneck, scientific publishing would have moved online much more rapidly in the late 1980s and early 1990s.
- Version-Controlled Research: Digital platforms would have developed sophisticated version-control systems for scientific papers, allowing continuous improvement based on community feedback rather than the binary accept/reject model.
- Algorithm-Based Quality Assessment: By the early 2000s, sophisticated algorithms analyzing citation patterns, usage metrics, and text features would have emerged as automated tools for research evaluation.
The internet would have accelerated these trends, with platforms emerging that combined immediate publication with post-publication community evaluation. Services like arXiv (established 1991 in our timeline) would have become central to scientific communication much earlier and across more disciplines.
Reputation Economy
A sophisticated reputation economy would have developed to replace the quality signal that peer review provided:
- Researcher Impact Profiles: Comprehensive digital profiles tracking a researcher's publications, citation impact, community engagement, and institution would have become standard by the late 1990s.
- Community Endorsement Systems: Digital platforms would enable formal endorsements from established researchers, creating a network-based validation system visible to all.
- Dynamic Impact Metrics: Instead of journal impact factors, paper-level metrics would have dominated, with sophisticated measures incorporating reader engagement, citation velocity, and replication attempts.
Scientific Knowledge and Credibility (2000s-2025)
Knowledge Fragmentation and Integration
Without the standardizing effect of peer review, scientific knowledge would have become more fragmented along community lines:
- Disciplinary Divergence: Different scientific disciplines would have developed vastly different standards for validation, with some embracing highly rigorous post-publication review and others maintaining more permissive standards.
- Powerful Curator Networks: Networks of respected scientists serving as knowledge curators would have emerged, creating "reading lists" and annotations that helped guide researchers through the expanded literature.
- Meta-Analysis Revolution: To combat fragmentation, meta-analysis methodologies would have developed much earlier and more comprehensively, becoming the primary means of establishing scientific consensus across disparate findings.
Public Understanding of Science
The relationship between science and the public would have evolved differently:
- Scientific Literacy Focus: Educational systems would place greater emphasis on teaching evaluation skills rather than just scientific facts, as citizens would need to navigate unfiltered scientific claims.
- Transparent Disagreement: Scientific disagreements would be more visible to the public, making the provisional nature of scientific knowledge more apparent.
- Expert Verification Services: Third-party verification services would have emerged, providing assessments of scientific claims for journalists, policymakers, and the public—similar to fact-checking organizations.
Scientific Progress and Innovation Patterns
The pace and pattern of scientific advancement would differ significantly:
- Accelerated Cross-Disciplinary Innovation: Without peer review potentially rejecting unconventional ideas that cross disciplinary boundaries, breakthrough innovations combining insights from multiple fields might have occurred more frequently.
- Earlier Detection of Replication Problems: The replication crisis that emerged in the 2010s in our timeline might have been identified much earlier, as the scientific community developed stronger post-publication verification systems in response to initial quality concerns.
- More Visible Dead Ends: Failed approaches and negative results would be more visible in the scientific record, potentially saving research resources by preventing repeated exploration of unproductive paths.
Contemporary Scientific Landscape (2010s-2025)
Institutional Adaptation
By the present day, scientific institutions would have evolved distinctive features:
- Disaggregated Functions: The functions that peer-reviewed journals traditionally bundled—dissemination, validation, curation, archiving—would exist as separate specialized services provided by different entities.
- Professional Evaluators: A professional class of scientific evaluators might have emerged, specialists who assess research quality independently of conducting their own research.
- Funding-Publication Integration: Research funders would be more directly involved in knowledge dissemination, potentially operating their own publication platforms with built-in quality assurance mechanisms.
Global Scientific Ecosystem
The global scientific enterprise would demonstrate different patterns:
- Reduced Publication Inequality: Without peer review as a barrier, researchers from less prestigious institutions and developing countries might face fewer systemic disadvantages in publishing their work.
- Language Processing Technology: Advanced translation and language processing tools would have developed earlier to help researchers evaluate work in languages they don't speak, as English might not have become as dominant in scientific publishing.
- Community-Based Governance: Scientific governance would rely more heavily on community-based mechanisms rather than formal gatekeeping, with sophisticated reputation systems determining who influences consensus formation.
Technological Infrastructure
The technological infrastructure supporting science would have distinctive features:
- Integrated Research Environments: Rather than separate publication platforms, science would operate on integrated environments where data, analysis code, and manuscripts exist in interconnected, executable formats.
- AI-Powered Evaluation: Artificial intelligence systems would play a major role in preliminary quality assessment, flagging methodological concerns, statistical errors, or plagiarism before human evaluation.
- Continuous Knowledge Graphs: Scientific knowledge would be represented in continuously updated knowledge graphs rather than discrete publications, with contributions assessed at a more granular level than entire papers.
Expert Opinions
Dr. Melissa Schwartzman, Professor of Science and Technology Studies at MIT, offers this perspective: "Without formal peer review, science would likely have developed more robust post-publication evaluation mechanisms much earlier. The absence of pre-publication filtering would have necessitated stronger community-based validation systems. While we might have seen more initial noise in the scientific record, the system might actually have proven more efficient at self-correction over time. The scientific community would have invested more in developing the skills to evaluate research quality independently rather than outsourcing that judgment to anonymous reviewers. The resulting ecosystem might have been messier but potentially more innovative and self-aware about its own limitations."
Professor Jian Chen, Director of the Center for Scientific Communication at Peking University, argues: "The absence of formal peer review would have dramatically altered global scientific power dynamics. Without the standardizing influence of Western-dominated peer review systems, we would likely see more diverse approaches to knowledge validation across different cultural and national contexts. East Asian scientific traditions, which historically emphasized consensus-building through different mechanisms, might have maintained more distinctive practices rather than conforming to the Western peer review model. This could have led to both more fragmentation in global science and more genuine cross-cultural exchange of methodologies for establishing reliable knowledge."
Dr. Aiden Okafor, Science Policy Fellow at the Royal Society, suggests: "The alternate timeline without peer review would almost certainly have accelerated the development of open science practices. Without peer review as a quality signal, there would have been greater pressure to make the entire research process transparent—sharing data, methods, and even preliminary results would become necessary for establishing credibility. This transparency would have become the primary mechanism for ensuring reliability. We might have avoided some of the reproducibility problems that have plagued certain fields, as transparent methods and open data would have been required for credibility from a much earlier stage. However, we might also have seen greater challenges in reaching scientific consensus, as the structured process of review helps communities converge on reliable findings."
Further Reading
- How We Became Our Data: A Genealogy of the Informational Person by Colin Koopman
- Peer Review: Reform and Renewal in Scientific Publishing by Adam Mastroianni
- How the Cold War Transformed Scientific Collaboration by Elena Aronova
- The Scientific Journal: Authorship and the Politics of Knowledge in the Nineteenth Century by Alex Csiszar
- Making Knowledge in Early Modern Europe: Practices, Objects, and Texts, 1400-1800 by Pamela H. Smith
- The Politics of Pure Science by Daniel S. Greenberg