All posts by admin

The MDPI Deal (1 of 3)

How can this be?

More than 100 German universities are now cooperating with MDPI under a new national agreement? From a photo caption:

Franziska Fischer (right), Commercial Director at ZB MED, and Peter Roth (left), Publishing Director at MDPI, at the signing of the new national open-access agreement between MDPI and the ZB MED consortium. Photo: MDPI

More than half of the surveyed departments doubt the publisher's reputability, according to a study from the University of Kassel.

MDPI's reputation is at rock bottom, as Laborjournal also writes:

In recent weeks, the Laborjournal editorial office received similar letters from several readers: "The MDPI publisher urgently belongs in the public debate. Please protect … young researchers in particular … from publishing in its possibly questionable journals, … so that scientific thinking and practice … have a future again."

In any case, I would never have sent an article to MDPI.

And when a colleague nevertheless, without my involvement, submitted a rejected article there, the review was more than meagre. The flood of spam that set in afterwards, however, was enormous.

No wonder that questions are now piling up this January: why on earth is ZB MED voluntarily cooperating with MDPI? Especially when we already have a complete overproduction of articles anyway.

 

https://www.timeshighereducation.com/news/germany-faces-questions-over-publishing-agreement-mdpi
“This deal should provoke a discussion about publication priorities and the need to avoid supporting predatory journals,” said Abalkina, who questioned why Germany’s research funder had not been required to ratify the agreement, as it did with larger deals involving Springer Nature, Elsevier and Wiley.
The deal was announced after Finland’s Publication Forum downgraded 271 journals belonging to Frontiers and MDPI to its lowest level, claiming these publishers “make use of the APC operating model” which “aim[s] to increase the number of publications with the minimum time spent for editorial work and quality assessment”.

And then, just a few days ago, a study appeared in QSS/MIT Press that impressively documents how far MDPI falls behind on almost every parameter.

 


CC-BY-NC

PlagScan wants to leave the scene without refund of prepaid credit

PlagScan had a good name in the scientific community in the past. When I complained that my account was no longer working, I received an email this morning.


CC-BY-NC

We as editors were losing the academic freedom

Here is an unrolled thread by Brandy Schillace on Bluesky:

My very last issue (on ..) has now come out with BMJ Medical Humanities: I have stepped down from my role as Editor in Chief. There are a lot of factors—after 17 years editing two consecutive journals, it was time. But there’s more, and I feel we should talk about the climate of #academic publishing.

When I began in 2007, editing a medical anthropology #journal for Springer, I had complete control of process. Articles came to me directly, I sent them personally to reviewers, who replied to me, and when accepted I worked one on one with a dedicated copy editor. A lot to juggle, but great QC.

Ours was a top ranked publication with high impact factor; we processed a lot of material—and yes, that was lots of paper shuffling. But we had an associate build a database of reviewers, and we managed just fine. Imagine! You sent a paper to the editor! And she had it copy edited!

In 2010 or so, Springer forced all journals to use a new online system. On one hand, a bit easier on me… the system kept track of where papers were, who was reviewing. I didn’t love it. But I didn’t hate it. Yet. Unfortunately, though, the system made other things invisible. Who ran the thing?

It was kept up by online assistants who worked overseas—largely in India. It looked like we were just using plug and play software, but there were people keeping all that tech going; ghost workers in the #AI. Except I couldn’t just speak to them if something went wrong. And things did go wrong.

And then something worse happened. They took away my copy editor. I had to send things through the new system to be copy edited—and that too happened overseas. There were teams of people editing all genres, with no specificity, and without English as a first language. Citations were a nightmare.
Authors complained and I would reach out to my contact, who had to reach out to their contacts, and so on in three time zones every time something went awry. But that’s not all. I was losing control of the process, unable to see it clearly. Reviewers got mad at the system too. Authors hated it.

I left that journal to take over BMJ Medical Humanities in 2017. The publisher was smaller and I had far more contact—all good things. The online system was, however, a beast. No better. Possibly worse. And yet again all copy edits were handled overseas. I wanted CMA style; it crashed the system
(It had to do with the software in use but also the four levels of people, time zones, and language issues among us). Everything took ages, but ultimately I had the support of a good network of people—including an assistant through BMJ, and my associate editors and colleagues.

But let’s skip ahead— I had begun a big push to diversify our journal. We started Path to Publication, helping those without institutional support. It was a lot of work. For my meager stipend, about 25 hrs a week. But I had support. Then … About 2-3 years ago, things began to change. Was it Covid? Maybe. Hard to say.

There was a reorganization. We lost my immediate report and the assistant. Plan S was putting pressure on everyone to go OA but that would mean costs fell on authors—and it would certainly end all our diversity work. Who could afford to pay the fees, especially from the humanities? We pushed back.

But a new emphasis on profit, and on publishing more and more papers had taken hold. I was questioned about my QC; why wasn’t I accepting more papers, faster? Meanwhile, system problems persisted and authors and reviewers already overtaxed were giving up. We continued to publish edgy DEI work—

One of our most important pieces was about white supremacy culture in medicine. There was blowback; I got a lot of ugly emails. Thankfully, BMJ stood by my decision to publish it. I’m grateful. But the flurry around it should have tolled a warning bell. More changes were on the horizon.

(Preview of the linked article: "The physician burnout discourse emphasises organisational challenges and personal well-being as primary points of intervention. However, these foci have minimally impacted this worsening public health…")

Not long after, I began to get notices from behind-the-scenes people—those who received articles before I did, through the online system. They were ‘flagging’ articles they deemed ‘problematic.’ Now, they often had reason… perhaps they hadn’t completed the patient anonymization, etc. But.

But—*someone else* was determining things before I could read the work. It’s easy to see how this affects decision making—harder for me to read unbiased. And sometimes, the problems weren’t really a failure to complete a step. In another BMJ journal, a paper accepted by the EiC was pulled.

Somewhere behind the scenes, we as editors were losing the academic freedom to evaluate papers for ourselves. It’s not outright. And I’m sure it’s in the name of safety and efficiency. But I’ve watched as one by one, things that used to be the purview of editors have been lost. And it’s everywhere.

I am not here to attack my past publishers; they are part of a giant revolution that stretches from academia to funders private and public. It has been accelerated by Covid and by the political turn from DEI, from diversity and autonomy as good things—to ‘problematic’ things. We are not profitable.

There are many good things that have come from my time as an editor. I enjoyed working with BMJ and still prefer it to many other institutions (and they did fight for me, and for papers I wanted published and authors I wanted protected). But the academic publishing world is not what it was.

I wish all luck and strength to those stepping into editorial shoes. And I can hope things will get better. But with the encroachment of AI, I imagine things will get worse before they get better. And that makes it harder for everyone.

* BMJ = British Medical Journal
* DEI = diversity, equity and inclusion
* EIC = editor in chief


CC-BY-NC

The electronic patient record (ePA)

“The narrative of the secure electronic patient record can no longer be upheld,” according to the CCC 2024.

or, as heise.de reports:

After security researchers at the 38th Chaos Communication Congress found serious flaws in the electronic patient record (ePA) for statutorily insured patients, the president of the German Medical Association, Klaus Reinhardt, is calling for rapid fixes. As things stand, he said, he cannot recommend ePA 3.0. Nevertheless, this is not a call to opt out. The professional association of paediatricians and adolescent physicians (BVKJ), on the other hand, advises parents to file an objection on behalf of their children. This is reported by the Ärzteblatt and the Ärztezeitung.

and heise.de again:

Physicians are bound by medical confidentiality and belong to the professions entitled to professional secrecy [2]. That medical documents and records about patients cannot simply be seized is regulated in § 97 (prohibitions on seizure) of the Code of Criminal Procedure (StPO) [3]. The prerequisite is that the items to be seized are “in the custody of the person entitled to refuse testimony”. Since the electronic health card is not in the custody of the physician but in the custody of the patient …

the state, too, can presumably access it.


CC-BY-NC

White rabbit

and the DØVYDAS version


CC-BY-NC

AI lobotomizing knowledge

I tried out ChatGPT 4o to create R ggplot2 code for a professional color chart.

[Images: ChatGPT's attempts, v1 and v20]

ChatGPT had serious problems recognizing even the grid fields, and it was impossible to get the right colors or any meaningful order after more than a dozen attempts (I created the chart above myself in less than 15 minutes).
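For comparison, here is a minimal hand-written ggplot2 sketch of the kind of chart that was being asked for: a fixed grid of patches with known reference colors. The 6 x 4 layout and the hex values below are only rough approximations of a classic 24-patch calibration target, not an official specification.

library(ggplot2)

# Illustrative 24-patch chart (6 columns x 4 rows); the hex values are rough
# approximations of a classic calibration target, not official reference data.
patches <- data.frame(
  col = rep(1:6, times = 4),
  row = rep(4:1, each = 6),
  hex = c("#735244", "#C29682", "#627A9D", "#576C43", "#8580B1", "#67BDAA",
          "#D67E2C", "#505BA6", "#C15A63", "#5E3C6C", "#9DBC40", "#E0A32E",
          "#383D96", "#469449", "#AF363C", "#E7C71F", "#BB5695", "#0885A1",
          "#F3F3F2", "#C8C8C8", "#A0A0A0", "#7A7A79", "#555555", "#343434")
)

# Draw each patch as a tile at its fixed position, using the hex codes directly
ggplot(patches, aes(x = col, y = row, fill = hex)) +
  geom_tile(colour = "black") +
  scale_fill_identity() +
  coord_fixed() +
  theme_void()

Once the layout and the reference values are pinned down, the plot itself is a few lines; the hard part, which the model never grasped, is committing to a fixed, ordered set of reference colors in the first place.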

In the end, ChatGPT arrived at something like a bad copy of Gerhard Richter’s “4900 Colours”…

https://www.hatjecantz.de/products/16130-gerhard-richter

Why was this task so difficult?

Although labeled as generative, AI is not generative in the linguistic sense, which

… aims to explain the cognitive basis of language by formulating and testing explicit models of humans’ subconscious grammatical knowledge

I would rather call it imitating AI. ChatGPT never got the idea of a professional color chart (for optimizing the color workflow from camera to print).

It also lacked any sense of aesthetics. Although the Richter squares are arranged randomly, they form a luminous grid pattern with overwhelming kaleidoscopic color fields.

A less academic version: it is the biggest copyright infringement since Kim Dotcom.

TBC


CC-BY-NC

Paper of the year

I have occasionally selected a paper of the year, the one I most enjoyed reading. Usually this was not a breakthrough paper but a hidden pearl. The BMJ Christmas issue and the 2024 Ig Nobel prizes were funny, sure, but here is my selection from the literature, a correspondence:

Can a biologist fix a radio? — Or, what I learned while studying apoptosis. Cancer Cell, Volume 2, Issue 3, pp. 179–182, September 2002

A more successful approach will be to remove components one at a time or to use a variation of the method, in which a radio is shot at a close range with metal particles. In the latter case radios that malfunction (have a “phenotype”) are selected to identify the component whose damage causes the phenotype. Although removing some components will have only an attenuating effect, a lucky postdoc will accidentally find a wire whose deficiency will stop the music completely. The jubilant fellow will name the wire Serendipitously Recovered Component (Src) and then find that Src is required because it is the only link between a long extendable object and the rest of the radio. The object will be appropriately named the Most Important Component (Mic) of the radio. A series of studies will definitively establish that Mic should be made of metal and the longer the object is the better, which would provide an evolutionary explanation for the finding that the object is extendable.


CC-BY-NC

Scientific conclusions need not be accurate, justified, or believed by their authors

This is the subtitle of another blog on the scientific method (and scientific madness).

I don’t agree with the statement: conclusions should be as accurate and as logical as possible. And conclusions should be believed by their authors, who would otherwise be fraudsters.

The original paper behind this strange hypothesis is by Dang and Bright.

Dang and Bright argue that all this makes sense if we expect the norms governing the presentation of scientific conclusions to scientific peers to align with the reality that science works through division of cognitive labor and collective efforts at error correction.

which is basically not true – see Brandolini’s law.


CC-BY-NC

Someone who understands and someone who doesn’t.

Just for the record.

 

The famous soccer coach Jürgen Klopp in 2020 on COVID-19

 

The less famous Robert F. Kennedy in 2024 on polio vaccination

(and the response by 75 Nobel laureates):

https://www.wjst.de/blog/wp-content/uploads/2024/12/81adbc30-full.pdf

CC-BY-NC

Anarchy explained

"Sei ungehorsam!" "Nein!"
byu/McGrex inasozialesnetzwerk

 

“Lacking a comprehensive anarchist worldview and philosophy, and in any case wary of nomothetic ways of seeing, I am making a case for a sort of anarchist squint.” – James Scott in “Two Cheers for Anarchism”, who writes Pletz when he means Datzetal-Pleetz.


CC-BY-NC

Update on Mendelian Randomization

As written before, I have never published any study that included a Mendelian randomization. The reasons are well known.

A new paper from Bristol discusses the recent explosion of low-quality two-sample Mendelian randomization (2SMR) studies and offers a cure.

We advise editors to simply reject papers that only report 2SMR findings, with no additional supporting evidence. For reviewers receiving such papers, we provide a template for rejection.
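For readers who have never run one: in its simplest form, a two-sample MR estimate is just a ratio of published summary statistics, which is part of why such papers are so cheap to mass-produce. A minimal sketch with made-up numbers (not taken from any real study), using the textbook Wald-ratio and inverse-variance-weighted estimators:

# Two-sample MR in a nutshell, with entirely made-up summary statistics:
# beta_exp = SNP-exposure effects; beta_out / se_out = SNP-outcome effects and SEs.
beta_exp <- c(0.12, 0.08, 0.15, 0.10)
beta_out <- c(0.030, 0.018, 0.040, 0.022)
se_out   <- c(0.010, 0.012, 0.015, 0.011)

# Per-SNP Wald ratios and their standard errors
# (uncertainty in beta_exp is ignored in this first-order approximation)
wald    <- beta_out / beta_exp
se_wald <- se_out / abs(beta_exp)

# Fixed-effect inverse-variance-weighted (IVW) estimate: a weighted mean of the ratios
w      <- 1 / se_wald^2
ivw    <- sum(w * wald) / sum(w)
se_ivw <- sqrt(1 / sum(w))

round(c(estimate = ivw, se = se_ivw), 3)

The arithmetic is trivial; everything hangs on the validity of the genetic instruments and on supporting evidence outside the calculation, which is exactly what the Bristol authors say is missing from papers that report 2SMR results alone.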


CC-BY-NC

Another example where bad science led to a catastrophic event

https://bsky.app/profile/jeroenvanbaar.nl/post/3lcsyzzc24k2f

The full story is at this address, and the three reasons in a nutshell (a toy illustration of the second one follows after the excerpts):

Clever ecological modelers came up with a way of calculating a ‘maximum sustainable yield’ (MSY), set at 16% of the total population, which should theoretically leave enough fish to repopulate each year … But fishing floundered further and the Grand Banks cod population collapsed almost entirely in 1992 …

While the Canadian government attempted to sample the cod population in the 1980s, their ships caught so much less than professional fishermen … In doing so, the modelers ignored a selection bias: the pros used better tech and only fished in the highest-yielding spots, so these numbers cannot be extrapolated to the entire region…

In humans, the number of kids in a population depends heavily on the number of parents, because one pair of parents usually has just one kid at a time. In cod, on the other hand, a single fish can produce eight million eggs at a time. This means that the number of cod babies who make it to adulthood depends much less on the existing population size and much more on environmental factors like food and predation.

A third problem is that the fishing industry has far-reaching and often unforeseeable effects on the ecosystem as a whole.
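The second point, the selection bias in the sampling, is easy to make concrete. A toy simulation with entirely made-up numbers: if the catch data come only from the best ten percent of fishing spots, the average density extrapolated from them overstates the true stock several-fold, and a quota set as a fixed share of that estimate is far too generous.

# Toy illustration of the selection bias; all numbers are hypothetical.
set.seed(1)
n_spots <- 1000
density <- rlnorm(n_spots, meanlog = 3, sdlog = 1)     # fish density per fishing spot

true_mean  <- mean(density)                             # what an unbiased survey estimates
best_spots <- order(density, decreasing = TRUE)[1:100]  # pros fish only the best 10% of spots
catch_mean <- mean(density[best_spots])                 # what extrapolating the catch data gives

round(c(true = true_mean, extrapolated_from_catch = catch_mean), 1)
# The catch-based estimate is several times the true average density.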


CC-BY-NC

When the research bubble collapses

There is a super interesting new analysis of two research bubbles.

We introduce a diffusion index that quantifies whether research areas have been amplified within social and scientific bubbles, or have diffused and become evaluated more broadly. We illustrate the utility of our diffusion approach in tracking the trajectories of cardiac stem cell research (a bubble that collapsed) and cancer immunotherapy (which showed sustained growth).

Couldn’t we have identified this stem cell bubble earlier? The authors believe that limited diffusion of biomedical knowledge anticipates abrupt decreases in popularity. But that takes time …

What is again noticeable here is that, in the stem cell case, the initial claim was later called into question, leading to the retraction of more than 30 papers over claims of data fabrication.


CC-BY-NC

Brandolini’s Law Again

“The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it (source)”

It is timely to quote the 2016 Nature letter by Phil Williamson:

With the election of Donald Trump, his appointment of advisers who are on record as dismissing scientific evidence, and the emboldening of deniers on everything from climate change to vaccinations, the amount of nonsense written about science on the Internet (and elsewhere) seems set to rise. So what are we, as scientists, to do?

Most researchers who have tried to engage online with ill-informed journalists or pseudoscientists will be familiar with Brandolini’s law (also known as the Bullshit Asymmetry Principle): the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. Is it really worth taking the time and effort to challenge, correct and clarify articles that claim to be about science but in most cases seem to represent a political ideology?

I think it is.


CC-BY-NC