Academics should be collaborating, not competing for pseudoscientific rankings.
At a time when federal employees are prohibited from uttering the phrase “climate change,” the right routinely attempts to undermine universities’ legitimacy, and tuitions have skyrocketed alongside student debt, it seems perverse that academics would further endanger their mission to educate and enlighten. Yet by embracing a malignant form of pseudoscience, they have accomplished just that.
What is the scientific method? Its particulars are a subject of some debate, but scientists understand it to be a systematic process of gathering evidence through observation and experiment. Data are analyzed, and that analysis is shared with a community of peers who study and debate its findings in order to determine their validity. Albert Einstein called this “the refinement of everyday thinking.”
There are many reasons this method has proven so successful in learning about nature: the grounding of claims in evidence, the openness of debate and discussion, and the cumulative nature of the scientific enterprise, to name just a few. Social scientists, philosophers, and historians study how science is conducted, but working scientists learn it through apprenticeship in graduate school laboratories.
Scientists have theorized, experimented, and debated their way to astounding breakthroughs, from the DNA double helix to quantum theory. But they did not arrive at these discoveries through competition and ranking, both of which are elemental to the business world. A business, after all, strives to be the top performer in its market. Scientists who adopt this mode of thinking betray their own lines of inquiry, and the practice has become upsettingly commonplace.
Here are five ways capitalist logic has sabotaged the scientific community.
1. Impact Factor
Scientists strive to publish in journals with the highest impact factor: the average number of citations received in a year by the articles a journal published over the previous two years. Often these publications collude to manipulate their numbers. Journal citations follow what is known as an 80/20 rule: in a given journal, 80 percent of citations come from 20 percent of the articles published, which means an author’s work can appear in a high-impact journal without ever being cited. Ranking is so important in this process that impact factors are calculated to three decimal places. “In science,” the Canadian historian Yves Gingras writes in his book Bibliometrics and Research Evaluation, “there are very few natural phenomena that we can pretend to know with such exactitude. For instance, who wants to know that the temperature is 20.233 degrees Celsius?”
One might just as easily ask why we need to know that one journal’s impact factor is 2.222 while another’s is 2.220.
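To make the arithmetic concrete, here is a minimal Python sketch with invented citation counts. It computes a two-year impact factor and shows how the 80/20 skew lets a respectable mean coexist with a mostly uncited back catalog:

```python
from statistics import mean, median

# Invented citation counts: citations received this year by the ten
# articles a journal published over the previous two years. The skew
# mirrors the 80/20 rule -- two articles collect nearly all citations.
citations = [48, 40, 3, 2, 1, 0, 0, 0, 0, 0]

# Two-year impact factor: total citations divided by articles counted.
print(f"impact factor: {mean(citations):.3f}")   # 9.400
print(f"median citations: {median(citations)}")  # 0.5
```

Half the articles in this imaginary journal were never cited at all, yet every one of their authors can claim a publication in a journal with a 9.400 impact factor.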
2. The H-Index
If ranking academic journals weren’t destructive enough, the h-index applies the same pseudoscience to individual researchers. A scientist’s h-index is the largest number h such that h of their articles have been cited at least h times each; your favorite scientist’s can be found with a quick search on Google Scholar. The h-index, Gingras notes in Bibliometrics, “is neither a measure of quantity (output) nor quality of impact; rather, it is a composite of them. It combines arbitrarily the number of articles published with the number of citations they received.”
Its value also never decreases. A researcher who has published three papers cited 60 times each has an h-index of three, whereas a researcher who has published nine papers cited nine times each has an h-index of nine. Is the second researcher three times as good as the first, when the first has been cited 180 times in total and the second only 81? Gingras concludes: “It is certainly surprising to see scientists, who are supposed to have some mathematical training, lose all critical sense in the face of such a simplistic figure.”
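The definition fits in a few lines of Python; the paper and citation counts below are the ones from the comparison above:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

a = [60, 60, 60]  # 3 papers, 180 citations total
b = [9] * 9       # 9 papers, 81 citations total
print(h_index(a), sum(a))  # 3 180
print(h_index(b), sum(b))  # 9 81
```

Neither output nor impact survives the collapse into a single integer.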
3. Altmetrics
An alternative to impact factors and h-indexes, “altmetrics,” seeks to measure an article’s reach by its social media impressions and the number of times it has been downloaded. But ranking based on likes and followers is no more scientific than the magical h-index. And of course, these platforms are designed to generate clicks rather than inform their users. It’s always important to remember that Twitter is not that important.
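Altmetric providers keep their exact recipes to themselves, but the general shape is a weighted tally of online attention. Here is a hypothetical sketch; the categories and weights are invented for illustration and are not any provider’s real formula:

```python
# Invented weights for an altmetric-style attention score.
WEIGHTS = {"tweets": 0.25, "news_mentions": 8.0, "downloads": 0.05}

def attention_score(counts: dict[str, int]) -> float:
    """Weighted tally of online mentions -- a measure of reach, not rigor."""
    return sum(WEIGHTS[key] * counts.get(key, 0) for key in WEIGHTS)

# A viral thread outweighs years of quiet scholarly use.
print(attention_score({"tweets": 400, "news_mentions": 2, "downloads": 1200}))  # 176.0
```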
4. University Rankings
The U.S. network of universities is one of the engines of the world’s wealthiest country, built over generations through trillions of dollars of investment. Its graduates manage the most complex economies, investigate the most difficult problems, and invent the most advanced creations the planet has ever seen. And yet these institutions have allowed their agendas to be manipulated by a little magazine called U.S. News & World Report, which ranks them according to an arcane formula.
In 1983, when the magazine first began ranking colleges and universities, it did so based on opinion surveys of university presidents. Over time, its algorithm grew more complex, adding factors like the h-index of researchers, impact factors of university journals, grant money, and donations. Cathy O’Neil of the blog mathbabe.org notes in her book Weapons of Math Destruction that “if you look at this development from the perspective of a university president, it’s actually quite sad… here they were at the summit of their careers dedicating enormous energy toward boosting performance in fifteen areas defined by a group of journalists at a second-tier newsmagazine.”
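The mechanics are easy to mimic. Below is a generic weighted composite of the kind such rankings use; the categories and weights are invented for illustration, since the real formula is the magazine’s own and has shifted over the years:

```python
# Invented categories and weights for a U.S. News-style composite.
WEIGHTS = {"reputation_survey": 0.20, "graduation_rate": 0.35,
           "faculty_resources": 0.20, "selectivity": 0.10,
           "alumni_giving": 0.15}

def composite(scores: dict[str, float]) -> float:
    """Weighted sum of inputs, each assumed pre-normalized to 0-100."""
    return sum(WEIGHTS[key] * scores[key] for key in WEIGHTS)

u1 = {"reputation_survey": 90, "graduation_rate": 85, "faculty_resources": 70,
      "selectivity": 95, "alumni_giving": 40}
u2 = {"reputation_survey": 80, "graduation_rate": 92, "faculty_resources": 75,
      "selectivity": 70, "alumni_giving": 60}
print(composite(u1), composite(u2))  # 77.25 79.2 -- u2 ranks higher

# Swap two weights and the order flips, though nothing on either
# campus has changed: the ranking measures the formula, not quality.
WEIGHTS["reputation_survey"], WEIGHTS["graduation_rate"] = 0.35, 0.20
print(composite(u1), composite(u2))  # 78.0 77.4 -- now u1 ranks higher
```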
Why have these incredibly powerful institutions abandoned critical thought in evaluating themselves?
5. Grades
The original sin from which all of the others flow could well be the casual way that scientists assign numerical grades and rankings to their students. To reiterate, only observation, experiment, analysis, and debate have produced our greatest scientific breakthroughs. Sadly, scientists have arrived at the conclusion that if a student’s value can be quantified, so too can journals and institutions. Education writer Alfie Kohn has compiled the most extensive case against grades. Above all, he notes, grades have “the tendency to promote achievement at the expense of learning.”
Only by recognizing that we are not bound to a market-based model can we begin to reverse these trends.
First published January 18, 2018 in AlterNet