In 1945, Vannevar Bush wrote his famous essay, “As We May Think.” Bush had risen to the level of Director of the US Office of Scientific Research and Development during World War II, where he was overwhelmed with the volume of new scientific research and publications that came across his desk. He lamented,
There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear.
In response to this problem, he famously envisioned the Memex. The Memex was to be a desk that stored millions of pages of books, research papers, notes, and other information on microfilm, but the desk would also allow complex ways of retrieving and annotating the information, and it would remember all of the user’s previous interactions with the material. In this way, he envisioned instant access to all of the information in the world relevant to your interests, and this data would be linked together to form “associative trails” or hyperlinks.
Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them. … The physician, puzzled by a patient’s reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. …
The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. …
Thus science may implement the ways in which man produces, stores, and consults the record of the race.
Though his vision was limited by the mechanical prospects of information management imaginable in 1945, it had these essential elements:
- Virtually unlimited, instantaneous access to all the world’s relevant information.
- Hyperlinking between relevant documents and passages.
- Permanent storage of annotations and new connections made by the user.
- Ease of search and indexing of all the material.
This was bold thinking in a world where the then-extant knowledge and research were contained in printed books and journals, the best of which were hand-indexed. These books and journals, if you had access to them at all, could not be edited or updated. There was no effective way of searching the published literature, let alone unpublished research or the original data that the research was based upon. Once you found a relevant work, it was not necessarily a pathway to immediately readable primary sources or related literature, especially literature published later.
By 1967, Bush realized that computers would become the method by which his vision could be implemented. The ideas of hypertext and linked documents inspired the work of Douglas Engelbart (who invented the computer mouse and helped to develop the graphical user interface), Ted Nelson (who started Project Xanadu in 1960), and countless others, including Tim Berners-Lee.
Berners-Lee is famous for creating the World Wide Web in 1990. Before the World Wide Web, Berners-Lee created ENQUIRE in 1980 as a means of hyperlinking scientific data, emails, and other information for CERN (the European Organization for Nuclear Research). Eventually, those ideas culminated in the World Wide Web as we know it today (interestingly, Berners-Lee’s father helped develop the first commercial stored-program electronic computer in 1951).
The motivations for Berners-Lee’s efforts paralleled those of Vannevar Bush. The first line of the world’s first website described the WWW project as “a hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.” He was interested in making collaboration easier and in organizing the massive amount of information being generated by scientific research programs around the world. Communities of scientists and other interested parties could easily access the latest information, contribute to it, add notes, and help grow and apply knowledge. This format finally gave rise to Bush’s vision in ways that even Bush couldn’t foresee (like near-instantaneous search and communication).
Yet, today, scientific publishing and the dissemination of scientific information are stuck in a wormhole that connects the 1940s to the 2010s. While many improvements have been made, much of the promise of the WWW remains unrealized. Here are some of the problems:
- Most of the world’s scientific literature is protected behind a paywall – even papers that are fifty years old; the price per article is usually outrageous (like $35+ rather than something reasonable like 99 cents).
- Because there are so many different paywalls from so many different publishers of journals and books, a true system of hyperlinking to relevant papers and sources doesn’t exist.
- Once a paper is published, it is almost never updated (a throwback to the print era).
- Only rarely is the primary research data available to the reader (again a throwback to the print era).
- Most of the world’s published scientific literature is of low quality, redundant, or otherwise not really contributory to scientific progress; this is promoted both by the archaic publish-or-perish requirements of academic institutions and by the profit motives of most journals, particularly many of the open access journals.
- Peer review and editorial oversight are ineffective and in the hands of a tiny number of potentially biased reviewers and editors, who stifle progress and reinforce prevailing bias.
These problems could all be solved if all scientific papers were published in an open access format, with open peer ranking and review, able to be corrected or updated, with full transparency of the underlying data, and so on. The current academic system of rewarding academicians who publish (and punishing those who do not) encourages bad behaviors and low-quality publications (it rewards quantity over quality and research for the sake of career promotion, not the advancement of science). The current system of paid journals (either those protected by paywalls and subscription fees or those touted as open-access but charging authors a fee to publish) is also an archaic arrangement that wrongfully perpetuates the idea that literature published in those journals is necessarily better than literature published elsewhere. These anachronisms come from a pre-industrial age and have more to do with making money and protecting established power-holders than with promoting scientific progress. True scientific progress is disruptive by nature, but the current system suppresses disruption.
The Internet and the World Wide Web, created expressly for the dissemination of scientific data, have revolutionized many other traditional paradigms, and whenever one old system is replaced with another (disruption), there is always an entrenched power that fights against it. In each case, however, progress has been made, and what we have now is far better than what was painfully replaced. Consider the following examples.
Encyclopedias. Before the Internet, dozens of printed encyclopedias (and references of all other sorts) were available. The Encyclopedia Britannica was perhaps the finest of these general knowledge encyclopedias. It was first published in 1768, and the last printed edition appeared in 2010. It now exists online only. The last printed edition had about 40,000 articles with over 8,500 photographs and illustrations and cost $1,400 new. A subscription to the online version costs $70 per year. But I’ll bet that you just use Wikipedia like the rest of us. The English-language Wikipedia has over 5,000,000 unique articles and hundreds of millions of pictures and graphics. It is updated constantly, and it hyperlinks exhaustively both to itself and to the rest of the web. Its sources are a click away, and a record of the original writing and every edit, often including lively debate among the various editors, is available for anyone to read.
So why would anyone buy a subscription to Britannica, let alone a print edition that was out of date even before it was printed? An air of legitimacy. The Encyclopedia Britannica, like many traditional sources, markets itself as more accurate or more true than less expensive, less controlled, and more available alternatives. Britannica, on its website, cites the number of Nobel Prize winners or American presidents who have contributed to articles, for example, as a way of claiming authority. But Wikipedia? Anyone can edit it at any time. Surely, that’s a bad thing? Students in high school or college may get in trouble for citing Wikipedia, but not the Britannica.
But this is all bollocks. The massive peer-review system that underpins Wikipedia and similar efforts is far better than anything Britannica could ever do. Would you rather read an article about Shakespeare’s The Tempest written by one associate professor somewhere at a junior college (Britannica), or one written by over 1,800 Shakespeare enthusiasts (many of whom are also professors or published authors on the subject), edited or updated over 4,000 times (the last edit literally today), replete with over 130 clickable footnotes and hundreds of other links and reference citations (Wikipedia)? By the way, the article on The Tempest has over 100 active watchers if you want to try to make a stupid edit.
In 2005, this study in Nature found a comparable number of mistakes in Wikipedia and Britannica. That was over ten years ago; I suspect that Wikipedia has grown stronger while Britannica has continued to decay. More importantly, for up-to-the-minute, useful knowledge (that can easily be fact-checked), there is no comparison. Crowdsourcing, massive peer review, deep hyperlinking, and instantaneous availability have won the battle. The same facts presented on Wikipedia come with deep context and near-instantaneous recall of sources. But Britannica? You’ll just have to trust them. Britannica has survived by promoting its “air of legitimacy,” but such an appeal to authority is really just a clever marketing tool. This air of legitimacy has become a shroud of death.
News. The newsprint and traditional magazine industry is similarly dying quickly. A daily newspaper may feel like an official “record” for history, but we mostly get our news from multimedia-rich, hyperlinked, and continuously updated online sources. Many media outlets have made this transition successfully, but websites like reddit.com go a step further, fulfilling the vision of massive peer review and crowd-sourcing. I always go to Reddit or Twitter for new or breaking news. The data is more fluid, but that is usually an acceptable tradeoff that makes the information more immediately useful. If you want the final version of a news story, you really just have to wait about a decade for a book to come out; by then, the information doesn’t have the same usefulness. Quick and fluid versus slow and fixed is always a battle in news information.
Video. Traditionally, we had four or five sources of television. Today, we have a massive amount of content immediately available, most at low or no cost. Instead of 20 or so central creators of television and movies, we have millions. YouTube is a marketplace that is free to enter, free to consume (except for the advertisements, of course) and creates a more level playing field for all. This means that any content creator who creates something worthwhile has an opportunity for success. In the old system, quality was not as important because there was limited competition. With only three or four over-the-air networks, any content at all would draw consumers. It wasn’t better content, but it too had an air of legitimacy because it came from a “major” network. But this is what all monopolies claim – that there is no need for competition and that a closed system is best to preserve quality (and control).
Music. Before I get back to the scientific literature, I’ll list music as a final example. The music industry has gone from a small, centralized group of producers and distributors to a system where anyone can create music and sell and distribute it through the same platforms as the biggest players. They may not have the promotion and radio access that still drive musical consumption trends, but anyone can make a song or an album, and anyone can download it for free or for maybe 99 cents. This has resulted in higher quality and more choices. The music industry fought tooth and nail against this transition, which was spurred on by file-sharing websites like Napster (the music industry once sued LimeWire for $75 trillion!). The music industry story shows us that necessary change will happen whether an industry supports it or not (and whether it is legal or not).
Scientific publishing. So what about medical and scientific journals? As I stated above, because of a publishing cycle that is stuck in the print era of 200 years ago, the dream of Berners-Lee and Bush is largely unfulfilled for the scientific literature. You cannot, in the vast majority of cases, click on a hyperlinked reference and see the original paper instantaneously after you find the abstract or reference in PubMed or some other source. The literature is not organized in a convenient place by its metadata so that an interested researcher can quickly see all of the relevant literature and sort it in a variety of ways, with immediate access to full-text articles and the authors’ original full data sets. There is no place where massive peer review occurs or where authors can update and correct their original papers. You cannot easily go to one place and see the most important papers in your field, regardless of where they were published, organized and ranked by massive peer review, and stay up to date with and contribute to your field. Far too much trust and credibility is extended to journal editors and scientific journalists, many of whom have no real qualifications even to be in those positions.
Two factors have led to the current crisis in the way scientific literature is disseminated: the perpetuation of the expensive, subscription-based journal system, in which journals claim some legitimacy compared to other methods of publication, and the push for academic scientists to publish as many articles as possible to further their careers.
Irony?
Are paywall-protected, subscription-only journals really the protectors of high-quality publications? In other words, is something more likely to be true because it is published in the New England Journal of Medicine than if it had been published on a Wikipedia-like commons for scientific publications? This is difficult to answer because such a comparison doesn’t exist. If you do the work of producing a good publication, you are obviously going to submit it to the best journal you can; for this reason alone, journals like the NEJM typically publish higher quality literature.
But the editors of leading journals decide which papers to publish largely based on impact and on how well produced the study appears to be. Impact is typically a function of novelty, and novelty can often be an indicator of faulty findings. High-quality production can easily be faked, and often is. So studies with dubious conclusions or false-positive findings are all too common in leading journals. Articles in traditional journals are plagued by many problems, which I will only briefly mention here.
- There is a serious lack of quality peer review. Many researchers have conducted studies to determine the effectiveness of the closed peer-review process on the quality of scientific publications and the validity of peer review. This review concluded that peer review is of dubious value. Richard Smith, in this review, concluded, “So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.” But there is an alternative: crowd peer review – a process that works just as Reddit or Wikipedia work today, and a process open to continuous input, even years after publication. Here is some more discussion about problems plaguing peer review. One widespread belief is that peer review selects only the best articles for top journals, suppressing the publication of low-quality articles. But there is no evidence for this. Making it into a top journal has as much to do with networking as it does with the quality of the paper; most lower-quality papers are still published, just in lower-tier journals; and many of the papers that are deemed unworthy of publication may be of the highest value, like replication studies or studies that do not reject the null hypothesis.
- Most studies have serious methodological errors that make it past initial peer review and editorship. Douglas Altman has contributed much to the evolving literature on the incredibly poor quality of most published academic studies. He reports, among other things, that among papers he has analyzed from high-quality journals, 63% use incorrect analysis of multiple observations, 61% give inadequate information on the harmful consequences of interventions, 58% use an incorrect method of comparison for subgroups, 89% fail to report the mechanism used to allocate interventions, 51% fail to state whether blinding was utilized, and 25% fail to mention the eligibility criteria. These are all basic and important principles. What’s more, he notes that the system is stacked against refutation. Letters to the editor, for example, are allowed only 200-300 words and are usually required within 4-6 weeks of publication. Massive (and unending) peer review fixes these issues as well. Today, when corrections are later made or noted, they are often unattached to the original article, so that a future reader is often unaware of post-publication criticism, errata, or even retractions.
- These factors, along with a troubling amount of research built around low-probability hypotheses, contribute to this stark realization: most published conclusions in the literature are untrue (a rough calculation after this list shows why testing low-probability hypotheses yields mostly false positive conclusions). I have discussed this often, and it is a significant problem to determine which publications present likely conclusions and which do not. The editorship and peer review process have not helped this discernment. These false conclusions stand unchallenged and unchecked, except in the relatively rare case that a paper is retracted. Meta-analyses and other flawed attempts at systematic review continue to incorporate faulty research years after the science has moved on. This creates inertia in scientific progress. Today, scientific papers are archived as dead documents; but in the world I am envisioning, every paper would be presented in a context of continuous peer review, commentary, confirmatory studies (or lack of confirmation), and so on.
- Speaking of retractions, almost 700 known retractions are issued each year, with over 12,000 errata published. But most readers of the original papers are unaware of the retractions or errata, so for most of them the mistakes stand uncorrected.
- The current system encourages novel research for premiere publication, which disparages and discourages the most important type of study for the scientific process: replication. We see few studies that seek to reproduce the exact methodology of a previous study to verify its results, and when we do, we more often than not find that the results are not reproducible. I have highlighted this “reproducibility crisis” here.
- Publication bias of another form is a widespread bias that affects meta-analysis. Usually only studies with positive findings (rejection of the null hypothesis) are published; this means we do not know how many studies that failed to reject the null hypothesis might have been published if the cost of publication weren’t so high. These are valuable studies. A meta-analysis that considers four positive studies and one negative study will likely reach a positive conclusion; but what if 12 negative papers never made it past the publication barriers (a toy calculation after this list shows how much this can skew a pooled result)? All data should be published. This also includes publishing the raw data files and spreadsheets from the published research to allow others to reanalyze them and find errors, fraud, and alternate hypotheses. A Wikipedia-like repository for scientific papers makes the cost of this type of voluminous publication negligible.
- Perhaps it’s a good thing that half of all scientific papers published are read only by their editors and authors. This number represents two problems: first, the high cost of readership (there are about 30,000 publications, many of which charge substantial subscription fees); and second, the low quality and lack of importance of most of these publications. How do I know when an important paper in OB/Gyn appears in a journal I don’t regularly read? How do I get the paper even if I know about it? This work should be crowd-sourced and open.
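To make the point about low-probability hypotheses concrete, here is a minimal sketch of the standard positive-predictive-value arithmetic: the chance that a statistically significant result reflects a true effect depends heavily on how plausible the hypothesis was before the study. The prior probabilities, power, and significance threshold below are illustrative assumptions of my own, not figures drawn from the article or from any particular field.

```python
# Minimal sketch: positive predictive value (PPV) of a "significant" finding,
# assuming a given prior probability that the hypothesis is true, statistical
# power of 0.8, and a significance threshold (alpha) of 0.05. All values are
# illustrative assumptions, not data from any specific field.

def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant result is actually true."""
    true_positives = power * prior           # true effects correctly detected
    false_positives = alpha * (1 - prior)    # null effects that cross p < alpha
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:>4}  ->  PPV = {ppv(prior):.2f}")

# prior =  0.5  ->  PPV = 0.94   (plausible hypotheses: most positives are real)
# prior =  0.1  ->  PPV = 0.64
# prior = 0.01  ->  PPV = 0.14   (long-shot hypotheses: most positives are false)
```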
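Likewise, the publication-bias point can be made with a toy calculation. The effect sizes and study counts below are invented for illustration, and the “pooled” estimate is a simple unweighted mean rather than a proper inverse-variance meta-analysis; the shape of the result is what matters, not the particular numbers.

```python
# Toy illustration of publication bias: pooling only the published studies
# produces a much rosier estimate than pooling everything that was actually run.
# Effect sizes are hypothetical; a real meta-analysis would weight by precision.

published = [0.40, 0.35, 0.50, 0.30, -0.10]        # four positive studies, one negative
unpublished = [-0.05, 0.00, -0.10, 0.05, -0.20,    # twelve null/negative studies that
               0.00, -0.15, 0.10, -0.05, 0.00,     # never cleared the publication barrier
               -0.10, 0.05]

def pooled(effects):
    """Unweighted mean effect size (a stand-in for a pooled estimate)."""
    return sum(effects) / len(effects)

print(f"Published studies only: {pooled(published):+.2f}")               # +0.29, looks beneficial
print(f"All studies conducted:  {pooled(published + unpublished):+.2f}") # +0.06, nearly nothing
```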
It is a subject for another day to think about why academicians are wrongly incentivized to produce so many low-quality publications, but the problem has only gotten worse in recent years as the number of publications produced increases meteorically and pay-for-publication, open access journals thrive. All of the problems listed above for traditional journals exist for these journals as well, plus one additional problem: little or no quality control from editors or peer reviewers. There are now over 10,000 open access or pay-to-publish journals. The quality varies widely, but many will publish any paper submitted, for a small fee, within a couple of hours of receipt. Beall’s List provides a list of many of these predatory journals.
Indeed, the extremely low quality of many open access journals has helped the paywall journals maintain their air of legitimacy, but I wouldn’t be so bold as to guess that more true hypotheses are published in paywalled journals than in the open access journals. Both have similar problems, and these problems are largely corrected by a massively reviewed, free-to-view format. Academic institutions could easily learn to embrace such a system, and they could and should give more credit to faculty who produce high-impact, high-quality papers as judged by the community that uses them.
But first the pay-to-play system has to go away. The open-access journals are already freely viewable but don’t exist in a framework that provides the type of system I am describing. The paywalled articles need to become free to view as well. Ironically, this will probably occur in the scientific arena in the same way that it occurred in the music arena. Just as Napster and other file-sharing websites forced publishers to change their paradigm, so too will websites like Sci-Hub encourage this breakthrough. Sci-Hub is the Napster of scientific papers, with over 58,000,000 papers ready to download and hundreds of thousands of downloads per day.
For the paywall journals to truly give way to a new system, they must go the way of Britannica or adapt. This means that stakeholders must get past the air of legitimacy that the preeminent journals maintain, and this will not be easy. A lot of academic pride and a lot of careers are tied to publication in these journals; and a lot of money is made by the publishers. But these factors are a hindrance to progress.
Consumers of scientific publications are actually misled by the gravitas of an article published in an august journal. They wrongfully assume that it is of higher value or better quality because it is published in such a journal. I heard a debater in a recent Intelligence Squared debate about the FDA comment that a particular paper was from a “reasonable journal” and a “reasonable university,” as if this were all that needed to be said to end the debate. He did not refer to the merits of the data, but instead committed a logical fallacy through an appeal to authority. Science based on faith. A paper could be complete garbage and meet this shallow requirement. Even metrics like how many times an article has been cited or how many times a publication has been downloaded are not very valuable in determining its quality; these often just reflect how accessible it is and the prevailing bias. Does it come from an open access journal or a widely subscribed journal? It will be accessed more. Does it agree with the prevailing bias of the scientific community? It will be published in a leading journal.
Authors must lead this revolution. The authors of scientific literature don’t make money for their authorship (at least not directly), so that shouldn’t work against the revolution; their incentive is advancement of the field and advancement of their careers. If they want their research read and utilized, then they should be in favor of open dissemination. If they want their research to have high impact, then they should want a wide readership. If they want their findings to be a true part of the scientific process, then they should want the best peer review and commentary about their data. If they want the scientific process to work as designed, and true progress to be made, then they must fight against the current system and replace it.