Scholarly Peer Reviews Must Involve Experts

Peer review, as it is currently practiced in some settings, is not serving as an effective gatekeeper. Peer review is supposed to filter out invalid or otherwise inadequate work and to encourage good work. Much of the poor work being produced today could be blocked from publication if the peer review system functioned as intended.

Although some historical instances of peer review can be traced back to the 18th century, it didn’t become a routine and formal part of the scholarly publication process until after World War II. With dramatic increases in the production of scholarly content by the mid-20th century, a means of filtering out inadequate work became imperative.

According to Wikipedia,

Scholarly peer review (also known as refereeing) is the process of subjecting an author’s scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher…decide whether the work should be accepted, considered acceptable with revisions, or rejected.

Peer review requires a community of experts in a given (and often narrowly defined) field, who are qualified and able to perform reasonably impartial review.

Scholarly publications, such as academic journals, are only useful if the claims within them are credible. As such, the peer review process performs a vital role. When the process was first established, it was called peer review based on the assumption that those who produced scholarly work were experts in the relevant field. An expert’s peers are other experts. A “community of experts” is essential to peer review.

Over time, in some fields of study, the production of scholarly work has increasingly involved students who are still fairly early in the process of developing expertise. Corresponding with this transition, peer reviewers also increasingly lack expertise. During their advanced studies, it is absolutely useful for students to be involved in research and the production of scholarly work, but this work should not be published based solely on reviews by their peers. Reviews from anyone who’s interested in the subject matter can potentially provide useful feedback to an author, but only reviews by experts can support the objectives of the peer review process.

Characterizing this problem strictly as one that stems from the involvement of students is not entirely accurate. Scholarly work that is submitted for publication is rarely authored by students alone. Almost always, a professor’s name is attached to the work as well. Unfortunately, even if we assume that a professor is an expert in something, we cannot assume expertise in the domain addressed by the work that’s submitted for publication. In my own field of data visualization, many professors who teach courses and do research in data visualization lack expertise in fundamental aspects of the field. For example, it is not uncommon for professors to focus solely on the development of data visualization software with little or no knowledge of the scientific foundations of data visualization theory or actual experience in the practice of data visualization. One of the first times that I became aware of this, much to my surprise, was when the professor who introduced me when I gave a keynote presentation at the Vis Week Conference a decade ago admitted to me privately that he had little knowledge of data visualization best practices.

Do you know how the expertise of peer reviewers is often determined? Those who apply to participate in the process rate themselves. On every occasion when I participated in the process, I completed a questionnaire that asked me to rate my own level of expertise in various domains. There are perhaps exceptions to this self-rating approach—I certainly hope so—but this appears to be typical in the domains of data visualization, human-computer interaction, and even statistics.

Something is amiss in the peer review process. As long as people who lack expertise are deciding which scholarly works to accept or reject for publication, the quality of published work will continue to be unreliable. We dare not forget the importance of expertise.

Take care,

3 Comments on “Scholarly Peer Reviews Must Involve Experts”

By Dale Lehman. January 25th, 2018 at 2:37 pm

I agree that peer review needs to be improved, but I am much more pessimistic about its potential. I’ve reviewed many manuscripts and have never been asked to fill out any self-declaration about expertise. I’ve always been sent papers by editors on the basis of my past publication experience. So, in a sense, I have the required expertise to review these manuscripts.

But the problems are of a different nature. Often (perhaps even usually) the reviews are conducted by people with poor incentives. They rarely have the incentive to do a good job (compensation is nonexistent) and sometimes have perverse incentives (to protect their own record by insisting that authors praise their work, and sometimes rejecting work that is contrary to their record). Even with good incentives, they don’t have the time or inclination to be thorough. They rarely have sufficient depth of knowledge to ask the right questions. They never ask for the original data, nor would they spend the time to confirm that analysis is done correctly. I speak from my experience on both sides of the refereeing process. I find that when the process works well it is the exception, not the rule. And, I don’t believe there is any way to significantly improve the process.

My own preference is for all papers to be initially reviewed by editors to decide if they are sufficiently of interest (and of minimally required competence) to be put out for public review. Then, there would be a period of open review (anonymous or authored), after which editors would decide whether the manuscript is of sufficient merit to be officially “published” (or revised and resubmitted). They would review all the comments the papers received, as well as the integrity of the comments themselves. Of course, I would also require that the data be provided along with the manuscripts for public review.

I would also change the incentive structure so that authors receive credit for assembling the data set, credit for passing the initial review stage, and then more credit for final publication.

My principal difference with what you have stated above is that I see the need to open the process as much as possible, while it sounds like you want to close the process to the “experts.” I just don’t see that as working, and I think the evidence of how poorly it works is already there. The problem is not that the experts are not “expert” enough; the incentives are so fraught with problems that the process cannot be trusted to experts. I think the job of editors is to sift through all of the feedback and decide how much weight to place on various reviews. I would have the review process itself made as open as possible.

By stephenfew. January 25th, 2018 at 5:17 pm

It is absolutely true that reviewers’ lack of domain expertise is but one of several problems in the peer review process. As I’ve written before, I believe that anonymity also leads to many problems and, as you have pointed out, that inappropriate incentives are a common problem as well.

Selecting reviewers solely because they have published papers in the field can’t possibly work as long as the review process is flawed. The problem is self-perpetuating: a vicious cycle of incompetence. As long as inexpert reviewers accept flawed papers for publication, it isn’t appropriate to base someone’s qualifications for serving as a reviewer solely on having published work. Also, allowing editors to select reviewers only works if the editors are themselves experts in the domain. Unfortunately, the editors who oversee these journals too often lack expertise in the domain. They sometimes function primarily as administrators rather than as scholars.

I believe that your proposal for solving the peer review problem suffers from several problems. First, the ability to evaluate the merits of scholarly work is not the democratic realm of the many, but is instead the realm of the few. The “wisdom of crowds” Wikipedia model simply doesn’t work. Second, opening the review process to the general public would complicate it beyond our ability to control. It is hard enough to manage the process with a small number of participants. Third, if you make papers publicly available for review, you’ve already defeated the purpose of the process by giving the work a public platform where it can do harm.

By Dale Lehman. January 25th, 2018 at 11:14 pm

A rare opportunity for me to be more skeptical and pessimistic than you. I believe the peer review process to be so broken that the difficulties you point out (which are valid) pale in significance to the problems with the current process.
