It is a troubling sign of the times, and the crisis that many fear higher education is in, that several of our analyses this week relate to the theme of university collapse.
An essay on the massification of UK higher education argues that “the current state of UK universities seems like a very bad deal for those involved, bad for society and ultimately unsustainable”, with high participation rates and declining income levels creating a system that author Lincoln Allison compares to that of the Soviet Union.
“The reasons to fear the collapse of the system, however, are not that it’s bad or unfair but that it’s unfundable,” writes the emeritus reader in politics at the University of Warwick.
“The university sector has been bloated to an unsustainable level and is now bound to decline; the questions are by how much and how will it happen.”
Every tax dollar the Government spends should improve American lives or advance American interests. This often does not happen. Federal grants have funded drag shows in Ecuador, trained doctoral candidates in critical race theory, and developed transgender-sexual-education programs. In 2024, one study claimed that more than one-quarter of new National Science Foundation (NSF) grants went to diversity, equity, and inclusion and other far-left initiatives. These NSF grants included those to educators that promoted Marxism, class warfare propaganda, and other anti-American ideologies in the classroom, masked as rigorous and thoughtful investigation.
While I would agree that funding should primarily support the advancement of core scientific methods and studies rather than numerous DEI initiatives, the White House’s view is a grotesque distortion of reality, especially when we consider the so-called “study” it is citing. Many DEI projects are, in fact, valuable educational efforts or have an environmental focus, often addressing critical research needs that receive little to no funding from other sources.
Here is a brief overview of how these numbers were produced, and the key problems I have with the methods. The statement comes from the October 9, 2024 Senate Republican staff report “Division. Extremism. Ideology: How the Biden-Harris NSF Politicized Science” from the U.S. Senate Committee on Commerce, Science & Transportation, then led by Sen. Ted Cruz (PDF; the original was no longer available as of Aug 12, 2025). The underlying dataset was released on February 11, 2025 (press release and database).
Staff analyzed 32,198 NSF prime awards with start dates between January 2021 and April 4, 2024. Using a keyword-based tagging process, they identified 3,483 awards they labeled as “DEI/neo-Marxist,” totaling more than $2.05 billion. The report says that for 2024 (measured only up to April 4), 27% of new grants fell into this category. Appendix A of the report explains the method. Staff pulled all NSF awards from USAspending.gov with start dates in the 2021–2024 window. They ran an n-gram/keyword search using glossaries from sources like NACo and the University of Washington, expanding the list to more than 800,000 variants. Awards with zero or only one keyword match were removed, and additional filtering plus manual checks produced the final set of 3,483. Grants were grouped into five thematic categories (Status, Social Justice, Gender, Race, Environment). The “27% in 2024” figure came from the share of awards in that subset with start dates in the first quarter of 2024.
Faults and shortcomings in the method
– The keyword approach equates the presence of certain words with being a DEI-focused grant, and the keyword list is very broad (including terms like “equity,” “privilege,” “climate change,” “systemic,” “historic*,” and “intersectional”), which can capture unrelated research.
– The 27% figure comes from only part of the year (January–April 2024), not a full year.
– There is ambiguity between counts and dollar amounts; the 27% refers to counts, not necessarily to total funding share.
– Removing all single-keyword matches and applying manual pruning introduces subjectivity and potential bias.
– Categories like “Social Justice” or “Race” are based purely on word presence, not actual research aims, conflating standard NSF education/broader impacts work with political advocacy.
– Reliance on abstracts and spending descriptions means the screen often catches standard boilerplate language that NSF requires by law.
– A House Science Committee Democratic staff review in April 2025 found numerous false positives in the Cruz dataset, such as biodiversity studies flagged for the word “diversity” or wildlife grants flagged for the word “female.” That review also notes that NSF is required by statute to consider “broader impacts” in all awards.
– The Senate report is a partisan staff product, not peer-reviewed, and uses normative framing (“neo-Marxist,” “extremist”) rather than neutral description.
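To see why keyword tagging produces false positives, consider a minimal sketch of the approach described in Appendix A. The toy keyword list, threshold, and function names below are my own illustration (the report’s actual glossary reportedly exceeded 800,000 variants), not the staff’s actual code:

```python
# Toy glossary of DEI-related keywords, loosely modeled on the terms
# quoted from the report; the real list was vastly larger.
KEYWORDS = ["equity", "privilege", "climate change", "systemic",
            "historic", "intersectional", "diversity", "female"]

def count_matches(abstract: str) -> int:
    """Count distinct keyword hits in a grant abstract (case-insensitive)."""
    text = abstract.lower()
    return sum(1 for kw in KEYWORDS if kw in text)

def flag_as_dei(abstract: str, threshold: int = 2) -> bool:
    """Mimic the report's filter: keep awards with two or more keyword hits."""
    return count_matches(abstract) >= threshold

# A wildlife abstract with no political content still gets flagged,
# because "diversity", "female", "historic", and "climate change" all match:
wildlife = ("We study genetic diversity in female songbirds across "
            "historic ranges affected by climate change.")
print(flag_as_dei(wildlife))  # True -- a false positive
```

Exactly the failure mode the House Democratic staff review documented: word presence, not research aim, drives the classification.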
Restoring the “gold standard” of science by non-scientists?
A US health secretary who wants to retract an Annals paper over a personal opinion?
I have summarized the history of post-publication peer review, from PubMed Commons to the leading website PubPeer. Most recently, however, at least three new kids have appeared on the block: Peer Community In, Paperstars and alphaXiv.
What is the difference?
Peer Community In was founded in 2017 by Denis Bourguet and colleagues, targeting researchers across disciplines with a focus on peer-reviewing and recommending preprints as an alternative to traditional journals.
Paperstars, launched in 2023 by currently undisclosed founders, is aimed at both the general public and researchers, focusing on making scientific papers more accessible through AI-generated summaries and visuals.
alphaXiv was created in 2024 by the Allen Institute for AI to serve researchers and academics by enhancing preprint discoverability through AI-powered search and summarization tools.
BTW Science Guardians is a scammer website.
This is not about the extraordinary cyclist Peter Sagan but about the astronomer Carl Sagan, who postulated in his 1979 book “Broca’s Brain” that “extraordinary claims require extraordinary evidence”.
A major part of the book is devoted to debunking “paradoxers” who either live at the edge of science or are outright charlatans.
Why does the term “Staatsräson” (reason of state) not appear explicitly in the Grundgesetz, if it supposedly describes the supreme interest or principle by which a state acts in order to preserve its existence, its order, and its security?
– The term was originally coined in the early modern period, by Niccolò Machiavelli and later Giovanni Botero or Richelieu.
– It served to legitimize state power politics, often detached from ethical or legal standards.
– In the modern era it is normatively constrained, i.e. in a democratic constitutional state, Staatsräson must be compatible with law, morality, and the constitution.
So Staatsräson is what a state considers absolutely necessary in order to protect and preserve itself. Shouldn’t it then be in the Grundgesetz after all?
The Grundgesetz is a rule-of-law constitution, not an instrument of power. The Basic Law of 1949 was deliberately created as a counter-model to the Nazi dictatorship. It is meant to:
– limit power, not justify it,
– protect the fundamental rights of the individual, and
– place law and morality above state interests.
A term like “Staatsräson”, which traditionally places the purposes of the state above law and morality, does not fit a democratic, rule-of-law constitution like the Grundgesetz.
We do not need to discuss all dystopic X posts about LLMs.
https://x.com/elonmusk/status/1936333964693885089
Whenever Nature, however, publishes nonsense like “A foundation model to predict and capture human cognition”, this may deserve a comment…
Fortunately, Science’s Cathleen O’Grady has already commented:
“I think there’s going to be a big portion of the scientific community that will view this paper very skeptically and be very harsh on it,” says Blake Richards, a computational neuroscientist at McGill University. … Jeffrey Bowers, a cognitive scientist at the University of Bristol, thinks the model is “absurd”. He and his colleagues tested Centaur … and found decidedly un-humanlike behavior.
The claim is absurd, as a training set of 160 psychology studies was way too small to cover even a minor aspect of human behavior.
And a large fraction of those 160 published study findings are probably wrong, as may be assumed from another replication study in the psychology field:
Ninety-seven percent of original studies had significant results … Thirty-six percent of replications had significant results.
All interpretations made by a scientist are hypotheses, and all hypotheses are tentative. They must forever be tested and they must be revised if found to be unsatisfactory. Hence, a change of mind in a scientist, and particularly in a great scientist, is not only not a sign of weakness but rather evidence for continuing attention to the respective problem and an ability to test the hypothesis again and again.
Blurred as I have no image rights. Source: https://www.faz.net/aktuell/wissen/medizin-nobelpreistraeger-thomas-suedhof-wie-boese-ist-wissenschaft-110567521.html
Peer review is far from a firewall. In most cases, it’s just a paper trail that may have even encouraged bad research. The system we’ve trusted to verify scientific truth is fundamentally unreliable — a lie detector that’s been lying to us all along.
Let’s be bold for a minute: If peer review worked, scientists would act like it mattered. They don’t. When a paper gets rejected, most researchers don’t tear it apart, revise it, rethink it. They just repackage and resubmit — often word-for-word — to another journal. Same lottery ticket in a different draw mindset. Peer review is science roulette.
Once the papers are in, the reviews disappear. Some journals publish them. Most shred them. No one knows what the reviewer said. No one cares. If peer review were actually a quality check, we’d treat those comments like gospel. That’s what I value about RealClimate [PubPeer, my addition]: it provides insights we don’t get to see in formal reviews. Their blog posts and discussions — none of which have been published behind paywalls in journals — often carry more weight than peer-reviewed science.
1. Organismal Superposition & the Brain‑Death Paradox
Piotr Grzegorz Nowak (2024) argues that defining death as the “termination of the organism” leads to an organismal superposition problem. He suggests that under certain physiological conditions—like brain death—the patient can be argued to be both alive and dead, much like Schrödinger’s cat, creating ethical confusion especially around organ harvesting. https://philpapers.org/rec/NOWOSP
2. Life After Organismal “Death”
Melissa Moschella (2017, revisiting Brain‑Death debates) highlights that even after “organismal death,” significant biological activity persists—cells, tissues, and networks (immune, stress responses) can remain active days postmortem. https://philpapers.org/rec/MOSCOD-2
3. Metaphysical & Ontological Critiques
The Humanum Review and similar critiques challenge the metaphysical basis of the paper’s unity‑based definition of death. They stress that considering a person’s “unity” as automatically tied to brain-function is metaphysically dubious. They also quote John Paul II, arguing death is fundamentally a metaphysical event that science can only confirm empirically. https://philpapers.org/rec/MOSCOD-2
4. Biological Categorization Limits
Additional criticism comes from theoretical biology circles, pointing out that living vs. dead is an inherently fuzzy, non-binary distinction. Any attempt to define death (like in the paper) confronts conceptual limits due to the complexity of life forms and continuous transitions. https://humanumreview.com/articles/revising-the-concept-of-death-again
5. Continuation of Scientific Research
Frontiers in Microbiology (2023) supports the broader approach but emphasizes that postmortem transcriptomic and microbiome dynamics should be explored more deeply, suggesting the paper’s overview was incomplete without enough data-driven follow-up. https://pmc.ncbi.nlm.nih.gov/articles/PMC6880069/
Scientists take it for granted that the consensus they refer to is not the result of opinion polls of their colleagues or a negotiated agreement reached at a research conclave. Rather, it is a phrase that describes a process in which evidence from independent lines of inquiry leads collectively toward the same conclusion. This process transcends the individual scientists who carry out the research.
Unfortunately, parallel lines only intersect at infinity.
“Nazi Census” documents the origins of the census in modern Germany, along with the parallel development of the machines that first helped collect data on Germans. Or read “IBM and the Holocaust”, which has more details on IBM’s conscious co-planning and co-organizing of the Holocaust for the Nazis.
Why should you read that? Not because of the Nazis but because authoritarians thrive on data. Here is today’s news
-1-
US plans to merge all government data. A large-scale effort, led by Elon Musk’s team, aims to link federal databases — raising serious concerns among privacy and security experts.
-3-
Here is an article by the Dean of the UC Berkeley Law School, Erwin Chemerinsky, and the emeritus Harvard constitutional law professor Laurence Tribe about the consequences: “We should all be very, very afraid”.
Peter Lipton’s major work “Inference to the Best Explanation” (IBE) was unfortunately never published in German. I therefore had Gemini summarize the text, revised it, and will continue to expand it over the coming weeks. Lipton is one of my favorite philosophers. He published the book in a first edition in 1991 and a second in 2004. It is a milestone in modern philosophy of science and offers a detailed analysis of a specific form of scientific, but also everyday, reasoning: how does inference to the best explanation work best? Continue reading Wann ist eine Erklärung eine gute Erklärung? →
Of course, scientists criticize improper political interference (most recently in Germany in the research-funding affair). And of course universities, too, hold different political views, as can currently be seen live, where one university shows resistance and another caves in.
But then he turns to the question of what it actually means that the political agenda increasingly influences scientific results.
Politics likes to steer research in a direction that supports political goals instead of remaining neutral and fact-based. Public and academic debate is restricted; critical voices or dissenting scientific views are suppressed or even delegitimized if they do not fit the prevailing political narrative.
But if funding and careers depend on political conformity, researchers become increasingly compelled to adapt their work to political expectations in order to secure grants and academic positions. The danger: this will not remain without long-term consequences. As soon as science is perceived as a political tool, it loses its credibility, and with it society’s trust and its ability to deliver objective findings.
Neopatrimonialism denotes a type of rule, said to be particularly common in Africa, that (following Max Weber’s typology of rule) can be regarded as a hybrid of classic patrimonial and legal-rational rule. As a regime type it sits between autocracy and democracy. Its characteristic components are clientelism and political patronage.
Particularly common in Africa? That Wikipedia article is overdue for revision; the Atlantic, in any case, has more suggestions
Here is an answer, and it is not classic authoritarianism, nor is it autocracy, oligarchy, or monarchy. Trump is installing what scholars call patrimonialism. Understanding patrimonialism is essential to defeating it. In particular, it has a fatal weakness that Democrats and Trump’s other opponents should make their primary and relentless line of attack. Two professors have published a book that deserves wide attention. In “Assault on the State: How the Global Attack on Modern Government Endangers Our Future”, Stephen E. Hanson, a government professor at the College of William & Mary, and Jeffrey S. Kopstein, a political scientist at UC Irvine, resurface a mostly forgotten term whose lineage dates back to Max Weber, the German sociologist best known for his seminal book “The Protestant Ethic and the Spirit of Capitalism.”