Tag Archives: impact

We need only 1, not 4,000 journals

Aaron Ciechanover criticises the profits of Nature’s publisher and backs the mega-journal model, according to a June 26, 2023 report in Times Higher Education.

Speaking at the annual Lindau Nobel Laureate Meeting, Aaron Ciechanover, the Israeli biologist who won the Nobel Prize in Chemistry in 2004, took aim at the high cost of scientific publishing which, he argued, was linked to the importance attached by scientists to publishing in big-name journals. “Everyone wants to have a paper in Cell, Science or Nature, which is wrong – we are celebrating where you are published rather than what you have published,” explained Professor Ciechanover at the event in southern Germany.

Well said!

No more disruptive science?

Yesterday, Nature reported on a new paper by Russell Funk on research innovation or “disruptiveness”:

The number of science and technology research papers published has skyrocketed over the past few decades — but the ‘disruptiveness’ of those papers has dropped, according to an analysis of how radically papers depart from the previous literature.
Data from millions of manuscripts show that, compared with the mid-twentieth century, research done in the 2000s was much more likely to incrementally push science forward than to veer off in a new direction and render previous work obsolete. Analysis of patents from 1976 to 2010 showed the same trend.

So has everything already been discovered, with most of the low-hanging fruit picked (A)? Are scientists no longer taking any risks (B)? Is the “disruptive” science now hidden among the meaningless research (C)? Or have only citation practices changed (D)? The answer is in the original paper:

Specifically, despite large increases in scientific productivity, the number of papers and patents with CD5 values in the far right tail of the distribution remains nearly constant over time. This ‘conservation’ of the absolute number of highly disruptive papers and patents holds despite considerable churn in the underlying fields responsible for producing those works… These results suggest that the persistence of major breakthroughs—for example, measurement of gravity waves and COVID-19 vaccines—is not inconsistent with slowing innovative activity. In short, declining aggregate disruptiveness does not preclude individual highly disruptive works.

In my own words: progress is found in the top percentiles just as it was decades before. But most research publications are a waste of money and even harmful, as they clutter up the research field.

There are also some critical comments, and of course some methodological issues need to be clarified before any further interpretation (e.g. the exclusion of reviews, the validity of the 5-year citation window, …). Five years may not be enough in some fields; medical practice often doesn’t change for much longer than that – see also the comment by Bruce Alberts. In any case, the authors promised to send me the CD5 dataset, which will be nice for looking up my own work.
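For readers who want to try this on their own citation lists: here is a minimal sketch of the CD index as I understand it from Funk’s methodology – my own toy implementation with made-up data, not the authors’ code.

```python
# CD index sketch: a later paper citing only the focal work counts as
# disruptive (+1), citing the focal work together with its references
# as consolidating (-1), and citing only the references as neutral (0).

def cd_index(focal, citations, year, window=5):
    """CD of a focal paper within `window` years; None if nobody cites it."""
    refs = citations.get(focal, set())          # reference list of the focal paper
    horizon = year[focal] + window
    score = n = 0
    for paper, cited in citations.items():
        if paper == focal or year[paper] > horizon:
            continue
        cites_focal = focal in cited
        cites_refs = bool(refs & cited)
        if cites_focal or cites_refs:           # paper enters the comparison set
            n += 1
            score += -2 * cites_focal * cites_refs + cites_focal
    return score / n if n else None

# Toy example: two papers cite the focal work alone, one cites it
# together with its predecessor -> CD5 = (1 + 1 - 1) / 3 ≈ 0.33
citations = {
    "old": set(),
    "focal": {"old"},
    "a": {"focal"}, "b": {"focal"}, "c": {"focal", "old"},
}
year = {"old": 2000, "focal": 2005, "a": 2007, "b": 2008, "c": 2009}
print(cd_index("focal", citations, year))
```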

20 Feb 2023

I forgot to update this post: there is an option E – the study may just be describing an artefact. I received the dataset one week later but couldn’t verify the claims when analyzing my own “disruption score”. Upon inquiry, RF said that PubMed doesn’t include citations for all papers: “So to drop these papers from the data, required that papers had at least 1 reference in their reference list, and had been cited at least 1 time”.

The numbers were, however, still confusing, as there are 2.3 million entries in the CD5 file while PubMed had roughly 18 million entries in 2010 according to https://www.nlm.nih.gov/archive/20110328/bsd/licensee/2010_stats/2010_LO.html.notice.html. So I asked whether the discrepancy might be explained by an additional constraint. RF explained that “For the Nature paper, we only analyzed data up through 2010, for consistency with the other data sets used in the paper. But we computed the measure for more recent years”, which may have led to the missing scores.
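In code, the constraint RF describes would boil down to something like this – the column names are my guess at a schema, not the actual layout of the CD5 file:

```python
import pandas as pd

# Hypothetical columns: pmid, year, n_references, n_citations, cd5
papers = pd.read_csv("cd5_dataset.csv")

scored = papers[(papers["n_references"] >= 1) &   # at least 1 reference
                (papers["n_citations"] >= 1) &    # cited at least once
                (papers["year"] <= 2010)]         # Nature paper cut-off
print(len(scored), "of", len(papers), "entries carry a CD5 score")
```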

A colleague also wrote about the study later in a German magazine https://www.laborjournal.de/rubric/narr/narr/n_23_03.php, basically saying that science is not disruptive but nearly always builds on earlier groundwork: “Disruptive is economic gobbledegook”.

Interestingly, only last week I learned about another, much more extensive reanalysis that arrives at very similar conclusions: “Dataset Artefacts are the Hidden Drivers of the Declining Disruptiveness in Science”. Holst et al. write in this paper:

Our reanalysis shows that the reported decline in disruptiveness can be attributed to a relative decline of these database entries with zero references. … Proper evaluation of the Monte-Carlo simulations reveals that, because of the preservation of the hidden outliers, even random citation behaviour replicates the observed decline in disruptiveness.
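If I read the reanalysis correctly, the mechanics of this artefact are easy to reproduce: a database entry with an empty reference list cannot co-cite anything, so its CD value is pinned at +1 by construction. A back-of-the-envelope calculation with invented numbers (mine, not Holst et al.’s):

```python
# Toy illustration: if zero-reference entries (CD fixed at +1) become
# rarer over time, the yearly mean CD falls even though 'real' papers
# never change their citation behaviour.

TYPICAL_CD = 0.05  # assumed constant CD of papers with complete references

for year, zero_ref_share in [(1950, 0.30), (1970, 0.15),
                             (1990, 0.05), (2010, 0.01)]:
    mean_cd = zero_ref_share * 1.0 + (1 - zero_ref_share) * TYPICAL_CD
    print(year, round(mean_cd, 3))
# Mean CD drifts from ~0.34 down to ~0.06 – a 'decline' produced by
# bookkeeping alone.
```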

And there is now also a PubPeer entry, although only from last year.

No big impact but recognition

As a part-time ethicist I am quite happy when an earlier article gets some recognition. Recognition is something different from the craziness of summing up impact factors; it is a kind of payback through long-term influence.

https://bmcmedethics.biomedcentral.com/articles/10.1186/1472-6939-11-21/metrics

How to push the impact of 2,299 scientists with 8,000 citations each?

Answer: be a co-author of an autophagy guideline

This is another episode of the guideline-paper saga. Even more participants are listed here – the affiliation list stops at 2,299, which means that there are 2,299 authors on the manuscript. Unbelievable – how did they manage to reach a consensus on what is written? May the first author explain how authorship of these guidelines was decided?

Finally new CV forms at the DFG

Source: DFG, 1 September 2022, https://www.dfg.de/en/research_funding

Publication details in proposals and CVs
Performance assessment based on content-related qualitative criteria also explicitly includes ensuring that the entire spectrum of academic publication types are equally displayed and acknowledged in funding proposals and CVs. In addition to a maximum of ten publications in the more common publication formats, the CV can therefore now list up to ten further sets of research outcomes and findings that have been publicised in a variety of other ways, including articles on preprint servers, data sets or software packages, for example. In DFG proposals, the project-specific list of publications will be included in the general bibliography. The intention here is to shift the focus of the review and the evaluation of a proposal away from the list of publications and towards the substance of the applicant’s accomplishments. In order to document their own published preliminary work, applicants can typographically highlight (e.g. in bold) a maximum of ten of their own publications in the bibliography that are important for the project. No information on quantitative metrics such as impact factors and h-indices is required in the CV or the proposal, and such information is not to be considered in the review. The relevant details are included in DFG forms and review instructions.

Academic age

Ever heard of this term? Here it comes:

Another feature that was rated useful was evidence of applicants’ ‘academic age’. This was defined as the number of full-time-equivalent (FTE) years for which they’d worked in academia and was calculated from the year of their first academic publication, rather than the year they got their graduate degree.

So while the new Swiss granting scheme looks really nice, I expect that other funders will pick up the idea and divide the impact factor by academic age…
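A minimal sketch of that definition – the function name and the FTE handling are my own assumptions:

```python
from datetime import date

def academic_age(first_publication_year, fte_fraction=1.0):
    """FTE years since the first academic publication, not since the degree."""
    calendar_years = date.today().year - first_publication_year
    return calendar_years * fte_fraction

# A researcher who first published in 2005 and has worked half-time since:
print(academic_age(2005, fte_fraction=0.5))
```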

Overloading reviewers

There is an interesting paper on bioRxiv about the never-ending stream of review requests:

… overburdening of reviewers to be caused by (i) an increase in manuscript submissions; (ii) insufficient editorial triage; (iii) a lack of reviewing instructions; (iv) difficulties in recruiting reviewers; (v) inefficiencies in manuscript handling and (vi) a lack of institutionalisation of peer review.

What makes it even worse: with the limited capacity for pre-publication review, the capacity for post-publication review is dropping too…

Want to be a predatory reviewer? Just click “yes” at the Cureus website (but read up on the gossip at ResearchGate first).


The performance paradox and the “perverse learning effect”

The “performance paradox” is defined in different ways in the literature: on the one hand, as the phenomenon that organizations have control over something they do not really know. In a Leopoldina anthology, by contrast, it is the tendency of all performance criteria to lose their relevance over time. Thus Margit Osterloh (p. 104) on the “perverse learning effect” that has set in with the impact factor:

It occurs when the focus is placed on the performance indicator rather than on what it is supposed to measure: “When a measure becomes a target, it ceases to be a good measure.” People are particularly creative when it comes to scoring well on metrics without increasing the actually relevant performance.

Having already seen colleagues’ CVs that list more than 1,000 of their own articles (without any notable prizes), I think Osterloh is right when I compare this pseudo-performance with the phenomenal Arthur Kornberg, whose biography I am currently reading and who published nowhere near as much in quantitative terms.
In any case, Osterloh can also explain quite well why fraud is becoming ever more common:

Once intrinsic motivation has been crowded out by extrinsic motivation, attempts at manipulation also become more likely.

“Gaming the system”: I am slowly coming to believe that this is the biggest upheaval in science in the last 50 years. With the multiplication of mid- and lower-tier research plus the performance paradox, the net yield of scientific progress has hardly increased, even though many times the former funding volume now flows into research.
Osterloh describes research as a systemic “market failure”, since research is characterized by the fact

that it produces public goods, which are characterized by non-excludability and non-rivalry in consumption, which is why the market does not work here.

Because of the inherent uncertainty of primary research results, nobody knows whether what I am writing right now is true, so I could keep myself on the “market” with faked publications for quite some time. Any possible benefit materializes only late anyway, and with many co-authors, measuring individual performance is almost impossible.

Are there alternatives? Sure, from p. 109 on:

Boycott the indicators.

And back to the republic of scholars (which is no contradiction to education for all).

Once this “entrance ticket” to the “republic of scholars” has been earned through a rigorous examination, far-reaching autonomy can and should be granted. This should include adequate basic funding, leaving it up to researchers whether or not to take part in the race for third-party funding.

Payback for referees

There is a recent letter in Nature saying:

I have discovered a negative correlation between the number of papers that a scientist publishes per year and the number of times that that scientist is willing to accept manuscripts for review  … I therefore suggest that journals should ask senior authors to provide evidence of their contribution to peer review as a condition for considering their manuscripts.

While I agree with the overall observation … Continue reading Payback for referees

A Science career should not be like a Mastermind game

You do an experiment or a clinical study, and you are the codebreaker who does not know the peg positions and colors (set by a codemaker).

The codebreaker tries to guess the pattern, in both order and color, within twelve (or ten, or eight) turns. Each guess is made by placing a row of code pegs on the decoding board. Once placed, the codemaker provides feedback by placing from zero to four key pegs in the small holes of the row with the guess. Continue reading A Science career should not be like a Mastermind game

First major publisher releases article metrics

From a press release:

Today, the open-access publisher the Public Library of Science (PLoS; www.plos.org), announces the release of an expanded set of article-level metrics … Continue reading First major publisher releases article metrics

Stagnancy at terrific speed or the impact of impact

Besides another critical review of the impact game this month in LJ, I found an even more devastating paper at the University of Konstanz – click for the Babelfish translation. Continue reading Stagnancy at terrific speed or the impact of impact

Finally: A true alternative to Thomson ISI® impact factors

That was even worth a note in Nature News: finally, a free journal-ranking tool has entered the citation market. The attack came with an article in JCB (“Show me the data”); the response was weak. Sooooooo we now have a choice of which metric index is the worst way to rate a researcher (if you can’t otherwise understand what she/he is doing).
BTW, individual IF reporting was never intended by ISI but is now in common use in many countries. I don’t believe (as Declan Butler explains) that there is that much difference between popularity and prestige – but there is a big difference between popularity and quality.

Anything better than impact factors?

Here is a nice inside view from the BMC journals – you can watch how often your own papers are being downloaded.

[Image: bmc.png – screenshot of BMC article download statistics]

Hopefully these hits are not only generated by search engine spiders, yea, yea.