Re-examining Scientific “Expertise” and the Purpose of Peer Review

There’s a quote from Harry Collins and Robert Evans’s book Rethinking Expertise:

“…people who have this kind of expertise share some of the tacit knowledge of the communities of practitioners while still not having the full set of skills that would allow them to make original contributions to the field…”

Most practicing scientists [and engineers] tend to know when it’s time to go from having tacit knowledge and expertise in something, such as their own area of study, to an area where they have, at best, interactional expertise. This is because they are aware of the level of knowledge and the amount of detail needed to be an expert in a particular area. So they tend to know when they no longer have that [expertise in a subject matter].

That leaves two questions to ponder:

  1. Exactly how many people are aware of the level of their knowledge in a certain area when they either offer their “expert” opinion or willingly engage in a discussion where tacit knowledge of some subject matter is required?
  2. Is the supposed “expert” level of knowledge not as strictly required in other areas outside of science?

There needs to be a distinction between an expert, an “interactional” expert, and the uninformed/misinformed pedestrian [the latter really needs no further mention, really]. An expert in a particular field is just that: an expert in that particular field, plain and simple. No matter how perceptive and well-informed a person may be, that doesn’t place them in a position of absolute expertise. Likewise, a graduate student in a certain field, who has yet to acquire the tacit knowledge needed to make any “contribution”, can say, “I’m not yet a qualified expert, but I’ve scoped this research scene out some, and here is what I think.”

The same thinking applies to a science journalist with experience in the field: they can offer their perspective even though they are not making an original research “contribution”.

Some have no use for the concept of “interactional” experts; for practical purposes, it probably means a non-expert whom, for one reason or another, people consider to have some credibility. I mean, recognized experts can make wrong arguments. It would be better to listen to these so-called “experts” and certain “interactional” expert observers and see if you can form an impression of them by how fair and objectively insightful they seem over time. How do their expectations square with reality? You can choose to instinctively place a high value on recognized expertise, but that doesn’t automatically equate such expertise with credibility.

Peer-reviewed articles and journals also come to mind. I mean, how reliable are peer-reviewed scientific articles and journals? For those of you [reading this] who might not be aware, many journals will ask the authors to suggest a few referees, and the editor might use one of them; that reviewer will then be used in addition to another referee whom the editor picks from his/her internal list. The reasoning behind this system is simply that it is impossible for the editors to keep track of exactly who is an expert in every field. By doing it this way, at least one referee is, hopefully, an expert, whereas the other (handpicked by the editor) will have somewhat more “general” knowledge of the topic at hand.

However, it’s good to know that far from all journals use this system. The more prestigious journals will not let you choose, suggest, or recommend just anyone, and they tend to use more than two referees. This is, of course, only viable for well-known and well-funded [read: established] journals that can afford editors aplenty, one or more for each sub-field, and that are also able to persuade a good number of so-called “experts”, such as friends within the industry, to referee for them. Mind you, a true expert would never agree to review a paper written by a friend or someone they collaborate with.

Also, there is a common misconception that the peer-review process has been designed to ensure that a paper is “right”. Understand, all peer review does is assess whether any errors have been made and whether the reasoning behind the paper is sound. At times, referees vehemently disagree with the findings of a paper, but they will not reject it unless there’s a specific mistake or a definite error in the logic of the paper being reviewed. Real peer review comes after the paper has been published, when those working in the field [that’s the subject of the paper] get to read and comment on it. Oftentimes, peer review is placed on a pedestal in a well-meaning but unhelpful way. When peer review is held up as “sacred”, it is left open to the tactics of debating. Perhaps that’s a good thing.

Some modern-day philosophers try to define [mainstream] science, and nothing they conclude can be deemed watertight. Any attempt to define mainstream science is indeed difficult. What you’ll end up doing is exploring some of the gray areas, and I’m certain that some insight can be derived from such an undertaking. Utility is the ultimate test of science, or, better said, of engineering. If you’re able to engineer something based on the science, you can be sure that you’re on the right track. To me, this is why a particular area of [peculiar] interest such as climate science is embedded in manufactured difficulties, mainly because no one can engineer a new climate (regardless of what some may think and at the same time cannot prove). The way I look at peer-reviewed articles and journals is that it all comes down to trustworthiness. Replication and public verification are part and parcel of the scientific method, but the [general] public largely has no background in how to go about determining what is fact and what is not.

Peer review is far from perfect, and it is inevitable that there will be errors in many papers submitted for peer review. Referring back to the system I mentioned earlier: if it [the system] works, most of those errors will be in the discussions and/or conclusions and will simply be due to the authors drawing the wrong conclusions from the data or analysis available to them. Understandably, errors in a paper do not mean the entire paper is absolute rubbish. No, as long as the experiments and analysis have been performed correctly, it can still be seen as a decent paper, especially since other people can use the same data to derive the correct conclusions.

Most people do not pay close attention to low-impact journals. The higher-impact journals are critical of which papers they are willing to accept, and most reviewers will advise accordingly. Some things just cannot be prevented from being published, such as deliberate fabrication of data or brazen omission of mistakes made in the experiments. But those mistakes will be surfaced by the fact that science is self-correcting. In addition to unsound rubbish getting published, there are examples of journals rejecting papers of importance. Just think of Science and Nature declining to publish the results of Lauterbur, the inventor of MRI, who would later go on to win the Nobel Prize for it. That does happen from time to time, but when the work is sound and novel, it does eventually gain the recognition it deserves. Look at it this way: just because something has been recognized as a great contribution to the world of technology doesn’t mean that the manuscript originally submitted met all of the standards of that particular journal. Science and Nature also reject a good bit of quality research simply because there are so many submissions being sent to them, more than they could possibly publish. A rejection isn’t considered an insult; most would see it as an accomplishment just to be granted a full review by them.

If you take one hundred papers that each claim a result at the 95% confidence level, you would expect about five of them to be wrong. In practice, things are more complicated, as a wrong but interesting result can be considerably useful. A classic example of this is the theory of Yang and Mills, who were trying to write down a theory of hadronic interactions. Well, they were wrong; however, Yang-Mills theories became the foundation of the weak interactions and QCD [quantum chromodynamics], and of grand unified theories. If everything that’s “wrong” were kept out of journals, we wouldn’t have this.

The not-so-obvious benefit from Yang-Mills theories: Right answer, wrong problem.
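As a quick sanity check on that 95% arithmetic, here is a minimal sketch in Python. To be clear, the batch of one hundred papers and the assumption that each reported result is an independent test with a 5% false-positive rate are simplifications of mine, not a model of how publishing actually works:

```python
import random

def average_wrong_results(n_papers=100, alpha=0.05, trials=10_000):
    """Simulate batches of papers, each reporting a result at the
    95% confidence level, i.e., each with probability alpha of
    being a false positive. Return the average number of wrong
    results per batch."""
    total_wrong = 0
    for _ in range(trials):
        total_wrong += sum(random.random() < alpha for _ in range(n_papers))
    return total_wrong / trials

# The expectation is n_papers * alpha = 100 * 0.05 = 5, and the
# simulated average hovers right around that value.
print(average_wrong_results())
```

Note that the simulation only gives you the expected count; it says nothing about which five papers are the wrong ones, which is exactly why post-publication scrutiny matters.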

At times, people who criticize science do so just for the sake of criticizing it; they don’t know much about it, nor are they willing to set time aside to become acquainted with it, and yet they’ll claim that the ones receiving Nobel Prizes aren’t coming up with anything new. In reality, just about every single invention and scientific discovery, anything seen as creative, is just taking ideas that already existed and combining them in an effort to develop new methods. Quite often, someone has come to the same revelation as someone else from a time before them, even if the person who received the credit for the idea wasn’t aware of it.

I’ll reiterate that there is a process to peer reviewing scientific papers, and that there needs to be a substantiated definition of the process people refer to when it comes to having one’s research published. You see, for many journals it’s imperative that a process be established; it cannot rely on manuscripts being handed to just one person who alone decides the fate of the submission. In physics, speaking specifically of the Physical Review family of journals, the fact that this is a human evaluation is taken into serious consideration. Falling back on what I stated earlier about papers being rejected: even an outright rejection is given due process for further consideration. In other words, there stands plenty of opportunity to make one’s case and be heard by more than one person [and their bias]. When you take the time to ponder on it for a bit, all of these different stages rejecting something unanimously would tend to indicate that such a rejection has less to do with bias and more to do with an unfavorable manuscript.

Here’s a quote from an article by the late Dan Koshland that brings another aspect to this:

“The trouble is that journals can easily become too conservative, because editors find it easier to reject the unusual than to take a chance on the unthinkable… The existence of multiple journals provides the final safeguard against too much conservatism and is the ultimate reason that science is more receptive to non-conformity than any other segment of our society.”

[1] D.E. Koshland, Jr., Nature v.432, p.447 (2004).

Interestingly enough, there was a U.S. Supreme Court decision known as Daubert v. Merrell Dow Pharmaceuticals, Inc., from over eighteen years ago (in fairness, it was probably seven years old when the article was submitted). In context, most of the times “peer review” is mentioned in that decision, it is to reject documents that were not peer reviewed, implying implicit support for the system. Below is the part that questions the value of peer review, and it rests on David F. Horrobin himself:

Another pertinent consideration is whether the theory or technique has been subjected to peer review and publication. Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, see S. Jasanoff, The Fifth Branch: Science Advisors as Policymakers 61-76 (1990), and, in some instances, well-grounded but innovative theories will not have been published, see Horrobin, The Philosophical Basis of Peer Review and The Suppression of Innovation, 263 JAMA 1438 (1990). Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of “good science,” in part because it increases the likelihood that substantive flaws in methodology will be detected. See J. Ziman, Reliable Knowledge: An Exploration of the Grounds for Belief in Science 130-133 (1978); Relman & Angell, How Good is Peer Review?, 321 New Eng.J.Med. 827 (1989). The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.

From that, you can draw two definite conclusions:

  1. David F. Horrobin is well-known and respected enough to be cited by the Supreme Court, and
  2. The majority of the faults listed above can be traced to a single author and a single article.

As stated before, referees are human. They’ll approach papers with their assumptions and prejudices, and it is not always easy to accommodate those. With that said, an astute referee who is knowledgeable and willing to do some research can make suggestions that improve the quality of your published work. Some will say that criticism of peer review has to be made in a disinterested and objective tone, or else it comes off as nothing more than sour grapes and is unlikely to effect any change in the process. Perhaps that’s accurate. I’ll add to that by saying that if all papers were published electronically (e.g., on arXiv) with no peer review at all, it would indeed be difficult to plow through them all and evaluate them fairly.

Clarify what the alternative would be, then. Would one publish the names of the reviewers? You probably could, but the frankness afforded by anonymity serves a purpose, even if it is really only the fig leaf of anonymity: the pool of true experts on a paper’s topic is pretty small, and it’s really not all that hard to figure out who’s who. It’s also been proposed that the names of the authors remain unknown, but there just aren’t that many people who could have written a given paper. These days it’s even easier to figure out, since the apparatus is as unique as a fingerprint. And the playing field has been leveled now that there is arXiv.

For those who are adamantly anti-peer review, I’ll leave you with this one last quote from the late Dan Koshland:

“I realize now that a new theory is likely to meet resistance, but it should, if based on GOOD experiments, receive skeptical encouragement if science is to remain in balance. Non-conformists are necessary for progress in science, just as mutations are necessary for progress in evolution. However, there must be constraints to select good mutations from bad mutations. Too many mutations block evolution, as error-prone strains of bacteria have proved. So non-conformist thinking in science must be encouraged to make progress, but restrained to prevent anarchy. In science, it is PEER-REVIEWED JOURNALS and granting agencies that provide such balance.”

[1] D.E. Koshland, Jr., Nature v.432, p.447 (2004).

Now, I do have to question the validity of peer review, especially with all of the highly questionable, easily refutable, moronic menagerie of “climate change” garbage that’s being allowed into these “scientific” journals.

To each his own, I guess….

Desmond