One of my old bosses said that half of the chemistry articles he read while in industry could not be reproduced by him and his colleagues. One major flaw of the peer review system is that authors are asked to provide names of potential reviewers. Usually they name people whom they have referenced in their papers. The reviewers are honored that their work is being cited and sometimes fail to (or in many cases are unable to) scrutinize with sufficient care.
We are finding that a lot of our traditional processes no longer work in the modern world. We need to revise these systems.
I am mentally toying with a contagious rating system. If you publish your first article and it can be replicated by independent research, you get one star. You publish your next article as a 'one star' scientist. If one of your papers gets disproven (but is not fraudulent) you get minus one star. If one of your papers is shown to be fraudulent you get minus 10 stars from your current rating.
Whenever you rate someone else's paper, your own rating is included in your assessment. The contagious aspect is retroactive, so that if you publish a fraudulent paper 5 years from now, all 34 of the papers you rated in the intervening 4 years get downgraded correspondingly. If you support a paper that is found to be fraudulent, this applies too.
This should encourage new scientists to select veterans who are not likely to be downgraded in the future and should make sure that veteran scientists know that if they support a paper that does not utilize good science, they pay the price too.
I like the general approach, but I see some potential problems with the system as you've laid it out.
First, I would not use the "stars" system to handle fraud at all. I would make a separate list of those who've admitted to, or been convicted of, fraud, publish that separately, and exclude those people from the regular rating system. (Among other reasons, if someone, call him X, commits fraud and gets caught, his career as a scientist is probably over anyway. But under your system, X can now start endorsing papers he doesn't like by other people, to cause their scores to fall.)
Second, say someone (call him A) does publish an honest paper that turns out to be wrong. While this does mean A's judgment isn't very good, it doesn't necessarily mean that papers by someone else (call them B) are likely to be wrong simply because a bunch of the same third parties liked both A's and B's papers. Better to just reduce the amount by which A's rating of someone else's paper changes its score.
There have been researchers who have repeatedly committed fraud - and keep getting grants. One example that comes to mind is a researcher who claimed to have cloned a human. I think that was a second case of fraud on his part...
It certainly does not have to be 'stars'; I used that because it is so familiar. A numeric rating probably makes more sense. You should be able to click on it for details of what caused a rating to increase or decrease. I think that 'wrong' merits a one-point decrease, but 'fraud' merits a ten-point decrease.
People with low scores will probably not be asked to comment very often.
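To make the mechanics concrete, here is a minimal Python sketch of the scheme as refined in this exchange: numeric ratings instead of stars, plus one point for replication, minus one for an honest error, minus ten for fraud, and retroactive contagion through endorsements. The class names, function names, and data layout are purely illustrative; nothing like this system actually exists.

```python
from dataclasses import dataclass, field

@dataclass
class Scientist:
    name: str
    rating: int = 0                               # numeric score instead of 'stars'
    endorsed: list = field(default_factory=list)  # papers this scientist vouched for

@dataclass
class Paper:
    author: Scientist
    endorsers: list = field(default_factory=list)

# Proposed point values: +1 replicated, -1 honestly wrong, -10 fraudulent.
DELTAS = {"replicated": +1, "disproven": -1, "fraudulent": -10}

def endorse(scientist, paper):
    """Vouch for a paper; the link is recorded so contagion can act later."""
    paper.endorsers.append(scientist)
    scientist.endorsed.append(paper)

def resolve(paper, outcome):
    """Score the author for the outcome. Fraud is contagious: endorsers of
    the fraudulent paper take the same hit, and every paper the fraudster
    had endorsed is retroactively downgraded by one point."""
    delta = DELTAS[outcome]
    paper.author.rating += delta
    if outcome == "fraudulent":
        for backer in paper.endorsers:
            backer.rating += delta
        for p in paper.author.endorsed:
            p.author.rating -= 1
```

Under these rules, a veteran who endorses a paper that is later replicated loses nothing, but if one of the veteran's own papers is exposed as fraud, every author whose paper the veteran endorsed is downgraded along with him. That is exactly the incentive described above: choose carefully whom you vouch for.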
Technological advances have improved many things in academia before. Perhaps the process can be improved with such a rating system, if those who have accumulated influence or control of funding are willing to accept a more objective method. The system does work most of the time, but a number of its failures are staggering.
One of the biggest critics of the peer review system is Carver Mead, who has compared it to the Spanish Inquisition. The peer review system has been used to propagate the nonsense of global warming. Science is about logic, not consensus, and some of the best scientific work is not being done (or reported) in peer-reviewed journals.
I thought of global warming while reading the article. I'm hardly the intellectual in the Gulch, but it appears my dino hunting instincts picked up that scam's stinky scent.
This is what I wrote back in 2012 about the "peer review" model for the "acceptance" of thoughts and conclusions.
"Individuals in modern academia are convinced that an opinion is valid ONLY if it has the support of two or more peer-reviewed sources, reviewed and approved by the academics who have set themselves up as the authority to determine what is or is not valid, and whether YOUR opinion is ignorant or not. If we all had to subject ourselves to those constraints, we would not have light bulbs, electricity, automobiles, or airplanes. The Wright brothers were ridiculed, but because they ignored "conventional wisdom," and I use that term in its loosest form, we now have air travel and space shuttles, all because the Wright brothers did not recognize a pre-conceived box to think inside.
"Almost all of mankind's greatest inventions and creations came from people who ignored traditional conventional thinking. What lesson can we learn? IGNORE TRADITIONAL CONVENTIONAL THINKING if you ever expect to expand beyond the present. This does not mean that we should not learn from previous knowledge. The Massachusetts Institute of Technology, in my opinion, has the best charter or mission statement; in short, the transfer of knowledge for intellectual stimulation. Intellectual stimulation is how we expand our views and open our minds to the possibilities of the enigmas of the universe. I believe there is a solution to ALL problems. We may know the answer, we may not like the answer or how we arrived at it, or we may not even know how to solve the problem, but every problem has a solution. When the elitists scoff at possibilities simply because they differ from convention, they are no better than the educated elitist power mongers of the Dark Ages who burned people at the stake for heresy because they had a viewpoint that differed from "accepted" writings or dogma."
This is an anti-science view. Edison and the Wrights did not disregard the scientific method. They would have been rightly ignored if they had.
Yes, the peer review system does sometimes become corrupt (usually by a person or institution who controls the funding to the scientists involved). If this happens it may be necessary to look for peer reviewers outside the corrupt institution(s), or occasionally to publish without them. But if your results aren't reproducible, then your work isn't science. *That's* why I don't believe the likes of Michael Mann.
I have always found anecdotal evidence, disdained as it is, to be more accurate in real-life application than any evidence set forth by a study. I have ignored "scientifically" supported data in light of my own observations, with great results. We have a number of people in the immediate and extended family who should not be alive, but are, and are thriving. This does not mean one should ignore the data from studies; it just means to compare it to what one has experienced and then decide whether to ignore, incorporate, or embrace it.
It's a way of life. This was common in the semiconductor industry, too. People would publish false work, or, within the company, try to "smooth data" to cover something up. People filed patents that never worked, some of them ludicrous, and stole from others in order to publish or file. You learn to put up a shield that says "No Moochers."
Even in academia, the so-called safeguards do not stop the mooching. Before the first IBM desktop computers, I built my own computer and started a word-processing business. I immediately became swamped with work from several universities. More than once, a student would bring me a paper to process, and their instructor would also submit nearly identical work, drawn from the student's, for publication. The instructors were always willing to pay double, wanted it rushed, and were furious when I turned them down. They also always threatened me in some fashion, usually with a ban on any work coming to me from their students.
On the corporate side, what can I say? Every corporation I worked at was a socialist structure with a pecking order. If you have ever read anything by Heinlein, you know about the pecking order of monkeys in a cage: when the little monkey finds a peanut, everyone wants to steal it.
Thanks for your grounded voice of reason. I am more concerned about the nature of articles chosen for publication across disciplines, reflecting an unfortunate and observable trend in media. This outrageous fraud is not an unrelated blip.
I've been part of the peer review system. At best, reviewers take a stab at understanding what they read. At worst, they apply their preconceived notions and miss the point entirely, while approving articles that would never pass muster in an undergraduate laboratory course.
Having been part of the "peer review process" myself, I discovered two things: first, that a lot of it has to do with the "good old boy" system; and second, that a lot of what goes on has to do with the acquisition of grant money, or, in the corporate world, the bonuses that go along with submitting and patenting.
Did you not read, in the second paragraph:
" This does not mean that we should not learn from previous knowledge. "
This eerily reminds me of the last scene of Raiders of the Lost Ark:
https://www.youtube.com/watch?v=Fdjf4lMm...
Top Sociometrician?
Most Honored Expert Subverter of Truth Using Metric Tons of Irrelevant and Falsified Data To Create Irrational False Conclusions
With All Due Respect