Wir verstehen uns als liberale, progressive, weltoffene, linke und feministische Stimmen, die für Pluralität und Toleranz einstehen. Alle Menschen müssen leben dürfen “nach dem Gesetz, nach dem sie angetreten”. Gerade deswegen aber sehen wir mit Sorge, wie fatal die Debatte um Sex und Gender derzeit läuft. Bestürzt müssen wir zur Kenntnis nehmen, dass falsche und zum Teil regelrecht aberwitzige Verdrehungen (“weiblicher Penis”, “die Biologie ist längst weiter”, “es gibt mehr als zwei Geschlechter”) gerade denen in die Hände spielen, die unsere demokratische Vielfalt mit dumpfen Parolen bedrohen…
Der funktionale Begriff “Geschlecht” ist in der naturwissenschaftlichen Community unstrittig: Biologisch gibt es bei allen Arten, die sich über das Verschmelzen ungleich großer Keimzellen vermehren (Anisogamie), nur zwei Arten von Keimzellen – und daraus abgeleitet zwei Geschlechter, die als männlich und weiblich bezeichnet werden…
Auf dieser biologischen Grundlage der Zweigeschlechtlichkeit gibt es kulturelle und soziale Erwartungen und Geschlechterrollen. Es ist ein Kennzeichen liberaler Gesellschaften und eine große Errungenschaft der Emanzipationsbewegung des neunzehnten und zwanzigsten Jahrhunderts, dass Geschlechterrollen keinen zwingenden Charakter mehr haben und dem Individuum alle gesellschaftlichen Rollen unabhängig vom Geschlecht offen stehen.
Every tax dollar the Government spends should improve American lives or advance American interests. This often does not happen. Federal grants have funded drag shows in Ecuador, trained doctoral candidates in critical race theory, and developed transgender-sexual-education programs. In 2024, one study claimed that more than one-quarter of new National Science Foundation (NSF) grants went to diversity, equity, and inclusion and other far-left initiatives. These NSF grants included those to educators that promoted Marxism, class warfare propaganda, and other anti-American ideologies in the classroom, masked as rigorous and thoughtful investigation.
While I once believed that funding should primarily support the advancement of core scientific methods and studies rather than numerous DEI initiatives, this view is a grotesque distortion of reality, especially when we consider the so-called “study” the White House is citing. Many DEI projects are, in fact, valuable educational efforts or have an environmental focus, often addressing critical research needs that receive little to no funding from other sources.
Here is a brief overview how these numbers were produced, and key problems that I have with the methods. The statement comes from the October 9, 2024 Senate Republican staff report Division. Extremism. Ideology: How the Biden-Harris NSF Politicized Science from the U.S. Senate Committee on Commerce, Science & Transportation, then led by Sen. Ted Cruz (PDF, the original is no more available on Aug 12, 2025). The underlying dataset was released on February 11, 2025 (press release and database).
Staff analyzed 32,198 NSF prime awards with start dates between January 2021 and April 4, 2024. Using a keyword-based tagging process, they identified 3,483 awards they labeled as “DEI/neo-Marxist,” totaling more than $2.05 billion. The report says that for 2024 (measured only up to April 4), 27% of new grants fell into this category. Appendix A of the report explains the method. Staff pulled all NSF awards from USAspending.gov with start dates in the 2021–2024 window. They ran an n-gram/keyword search using glossaries from sources like NACo and the University of Washington, expanding the list to more than 800,000 variants. Awards with zero or only one keyword match were removed, and additional filtering plus manual checks produced the final set of 3,483. Grants were grouped into five thematic categories (Status, Social Justice, Gender, Race, Environment). The “27% in 2024” figure came from the share of awards in that subset with start dates in the first quarter of 2024.
Faults and shortcomings in the method
The keyword approach equates the presence of certain words with being a DEI-focused grant, and the keyword list is very broad (including terms like “equity,” “privilege,” “climate change,” “systemic,” “historic*,” and “intersectional”), which can capture unrelated research.
The 27% figure comes from only part of the year (January–April 2024), not a full year.
There is ambiguity between counts and dollar amounts; the 27% refers to counts, not necessarily to total funding share.
Removing all single-keyword matches and applying manual pruning introduces subjectivity and potential bias.
Categories like “Social Justice” or “Race” are based purely on word presence, not actual research aims, conflating standard NSF education/broader impacts work with political advocacy.
Reliance on abstracts and spending descriptions means the screen often catches standard boilerplate language that NSF requires by law.
A House Science Committee Democratic staff review in April 2025 found numerous false positives in the Cruz dataset, such as biodiversity studies flagged for the word “diversity” or wildlife grants flagged for the word “female.” That review also notes that NSF is required by statute to consider “broader impacts” in all awards.
The Senate report is a partisan staff product, not peer-reviewed, and uses normative framing (“neo-Marxist,” “extremist”) rather than neutral description.
Restoring „gold standard“ of science by non-scientists?
An US health secretary who wants to retract an Annals paper for personal opinion?
Es ist ja nicht so, daß sich die Diakonie – der soziale Dienst der Evangelischen Kirche in Deutschland EKD – umgehend nach dem Hamas Überfall im Oktober 2023 mit Spendenaufrufen gemeldet hätte (ohne aber selbst vor Ort aktiv gewesen zu sein).
Oder daß die EKD nicht zu Gebeten für den Frieden aufgerufen hätte. Aber hat sie die Bundesregierung jemals für die Waffenlieferungen in die Kriegsregion kritisiert?
I have summarized the history of Post Publication Peer Review starting with Pubmed Commons to the leading website PubPeer. But most recently there are at least three new kids on the block: Peer Community In, Paperstars and alphaXiv.
What is the difference?
Peer Community In was founded in 2017 by Denis Bourguet and colleagues, targeting researchers across disciplines with a focus on peer-reviewing and recommending preprints as an alternative to traditional journals.
Paperstars, launched in 2023 by currently undisclosed founders, is aimed at both the general public and researchers, focusing on making scientific papers more accessible through AI-generated summaries and visuals.
alphaXiv was created in 2024 by the Allen Institute for AI to serve researchers and academics by enhancing preprint discoverability through AI-powered search and summarization tools.
BTW Science Guardians is a scammer website.
This is not about the extraordinary cyclist Peter Sagan but about the astronomer Carl Sagan who postulated in his 1979 book “Broca’s brain” that “extraordinary claims require extraordinary evidence”
A major part of the book is devoted to debunking “paradoxers” who either live at the edge of science or are outright charlatans.
Or the authorship discussion around the “Napalm Girl” Phan Thị Kim Phúc.
A third photograph – a snapshot from a London house two decades ago – has a similar price tag attached like Le Violon d’Ingres.
My most recent paper at https://arxiv.org/abs/2507.1223 examines this infamous photograph using the latest image analysis techniques.
This study offers a forensic assessment of a widely circulated photograph featuring Prince Andrew, Virginia Giuffre, and Ghislaine Maxwell – an image that has played a pivotal role in public discourse and legal narratives. Through analysis of multiple published versions, several inconsistencies are identified, including irregularities in lighting, posture, and physical interaction, which are more consistent with digital compositing than with an unaltered snapshot. While the absence of the original negative and a verifiable audit trail precludes definitive conclusions, the technical and contextual anomalies suggest that the image may have been deliberately constructed. Nevertheless, without additional evidence, the photograph remains an unresolved but symbolically charged fragment within a complex story of abuse, memory, and contested truth.
Even if there are now reasonable doubts on the image, the whole event may have happened — an horrible crime including many young women.
Maybe an artist is painting a scene from memory, this photograph could be showing a real scene although not in a physical sense.
An Pamela-Meyer-type analysis of the video above at least did not show that Virginia Giuffre is lying – her body language is more consistent with the reporting of a trauma survivor. So this
photograph remains an unresolved but symbolically charged fragment within a complex story of abuse, memory, and contested truth.
Warum steht der Begriff „Staatsräson“nicht ausdrücklich im Grundgesetz, wenn er doch angeblich das oberste Interesse oder Prinzip beschreibt, nach dem ein Staat handelt, um sein Bestehen, seine Ordnung und seine Sicherheit zu wahren?
– Ursprünglich wurde der Begriff in der Frühneuzeit geprägt, etwa durch Niccolò Machiavelli und später Giovanni Botero oder Richelieu.
– Er diente zur Legitimation staatlicher Machtpolitik, oft losgelöst von ethischen oder rechtlichen Maßstäben.
– In der Moderne ist er normativ begrenzt – d. h. im demokratischen Rechtsstaat muss Staatsräson mit Recht, Moral und Verfassung vereinbar sein.
Also ist Staatsräson das, was ein Staat für unbedingt notwendig hält, um sich selbst zu schützen und zu erhalten. Müsste in das nicht doch in das Grundgesetz?
Das Grundgesetz ist eine rechtsstaatliche Verfassung – kein Machtinstrument. Das Grundgesetz von 1949 wurde bewusst als Gegenentwurf zur NS-Diktatur geschaffen. Es soll:
– Macht begrenzen, nicht rechtfertigen,
– die Grundrechte des Einzelnen schützen, und
– Recht und Moral über staatliche Interessen stellen.
Ein Begriff wie „Staatsräson“, der traditionell die Zwecke des Staates über Recht und Moral stellt, passt nicht zu einer rechtsstaatlichen, demokratischen Verfassung wie dem Grundgesetz.
Before this paper, most sequence modeling (e.g., for language) used recurrent neural networks (RNNs) or convolutional neural networks (CNNs). These had significant limitations, such as a difficulty with long-range dependencies and slow training due to sequential processing. The Transformer replaced recurrence with self-attention, enabling parallelization and faster training, while better capturing dependencies in data. So the transformer architecture became the foundation for nearly all state-of-the-art NLP models. This enabled training models with billions of parameters, which is key to achieving high performance in AI tasks.
It’s making the rounds now – Andrew Gelman has already a long post – how authors of scientific papers are trying prompt injections for lazy reviewers that let a LLM write their review.
We do not need to discuss all dystopic X posts about LLMs.
https://x.com/elonmusk/status/1936333964693885089
Whenever Nature Mag, however, publishes nonsense like “A foundation model to predict and capture human cognition” this may deserve a comment…
Fortunately Science’s Cathleen O’Grady already commented
“I think there’s going to be a big portion of the scientific community that will view this paper very skeptically and be very harsh on it” says Blake Richards, a computational neuroscientist at McGill University … Jeffrey Bowers, a cognitive scientist at the University of Bristol, thinks the model is “absurd”. He and his colleagues tested Centaur … and found decidedly un-humanlike behavior.”
The claim is absurd as training set of 160 psych studies was way to small to cover even a minor aspect of human behavior.
And well, a large fraction of the 160 published study findings are probably wrong as may be assumed from another replications study in psych field
Ninety-seven percent of original studies had significant results … Thirty-six percent of replications had significant results.
All interpretations made by a scientist are hypotheses, and all hypotheses are tentative. They must forever be tested and they must be revised if found to be unsatisfactory. Hence, a change of mind in a scientist, and particularly in a great scientist, is not only not a sign of weakness but rather evidence for continuing attention to the respective problem and an ability to test the hypothesis again and again.
vocabulary changes in more than 15 million biomedical abstracts from 2010 to 2024 indexed by PubMed and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. This excess word analysis suggests that at least 13.5% of 2024 abstracts were processed with LLMs.
Although they say that the analysis was performed on the corpus level and cannot identify individual texts that may have been processed by a LLM, we can of course check the proportion of LLM words in a text.
Unfortunately their online list contains stop words that I am eliminating here. But then we can run the following script!
# based on https://github.com/berenslab/llm-excess-vocab/tree/main
import csv
import re
import os
from collections import Counter
from striprtf.striprtf import rtf_to_text
from nltk.corpus import stopwords
import nltk
import chardet
# Ensure stopwords are available
nltk.download('stopwords')
# Paths
rtfd_folder_path = '/Users/x/Desktop/mss_image.rtfd' # RTFD is a directory
rtf_file_path = os.path.join(rtfd_folder_path, 'TXT.rtf') # or 'index.rtf'
csv_file_path = '/Users/x/Desktop/excess_words.csv'
# Read and decode the RTF file
with open(rtf_file_path, 'rb') as f:
raw_data = f.read()
# Try decoding automatically
encoding = chardet.detect(raw_data)['encoding']
rtf_content = raw_data.decode(encoding)
plain_text = rtf_to_text(rtf_content)
# Normalize and tokenize text
words_in_text = re.findall(r'\b\w+\b', plain_text.lower())
# Remove stopwords
stop_words = set(stopwords.words('english'))
filtered_words = [word for word in words_in_text if word not in stop_words]
# Load excess words from CSV
with open(csv_file_path, 'r', encoding='utf-8') as csv_file:
reader = csv.reader(csv_file)
excess_words = {row[0].strip().lower() for row in reader if row}
# Count excess words in filtered text
excess_word_counts = Counter(word for word in filtered_words if word in excess_words)
# Calculate proportion
total_words = len(filtered_words)
total_excess = sum(excess_word_counts.values())
proportion = total_excess / total_words if total_words > 0 else 0
# Output
print("\nExcess Words Found (Sorted by Frequency):")
for word, count in excess_word_counts.most_common():
print(f"{word}: {count}")
print(f"\nTotal words (without stopwords): {total_words}")
print(f"Total excess words: {total_excess}")
print(f"Proportion of excess words: {proportion:.4f}")
7 Aug 2025
The long ’em dash’ — U+2014 instead of the standard minus – seems to be a characteristic sign of chatGPT 4 even when asked not use it.