I have always avoided forward-looking statements. The future of science communication is complicated enough without adding prophecy to the mix. But the accelerating integration of AI into every corner of the research enterprise makes at least one scenario hard to dismiss: automated agents negotiating, submitting, and “publishing” scientific claims with no human hand on the wheel between preprint and record.
In finance, high-frequency trading (HFT) is the Formula 1 of capital markets – algorithms executing thousands of transactions per second, reacting to signals no human could parse in time, optimizing for outcomes defined entirely upstream by whoever wrote the strategy. The races are real; the drivers are not.
The parallel to science is uncomfortable but structurally exact. An author AI monitors the literature on arXiv, identifies a gap, synthesizes a manuscript from existing results, checks it against a house style, and submits. A Publisher AI receives it, runs peer-review surrogates, scores novelty and methodological plausibility, and issues a DOI.
Both sides are optimizing for metrics – citation potential, impact proxies, throughput – that were defined by humans long ago and are now running unattended.

The analogy breaks down in one important place, and that is where it gets interesting. HFT operates in a closed, well-defined reward landscape: price, volume, spread. Science nominally operates in an open one: truth and trust. But truth is not what most of the current incentive architecture actually rewards. It rewards publication counts, journal prestige, and grant renewal. If those proxies can be satisfied algorithmically, there is no obvious mechanical barrier preventing it. The barrier, if it exists, is epistemic – and epistemic barriers have historically done little to slow an industry once it finds a workaround.
What would High-Frequency Science (HFS) look like in practice? Probably not dramatic. Probably incremental aggregation papers – meta-analyses of meta-analyses, restatements of known findings dressed in new domain vocabulary, combinatorial hypothesis generation from structured databases. Nothing a careful reader could immediately falsify. Volume would rise; the signal-to-noise ratio would fall. The journal impact factor, already a dubious instrument, would measure something even further removed from scientific value.
The question worth asking is not whether this will happen – parts of it already are – but who benefits from the arrangement. HFS would have its own liquidity providers: the AI firms running the algorithms, the preprint servers collecting traffic, the publishers collecting processing fees, and the institutions whose productivity dashboards glow green. Whether it benefits the cumulative knowledge record is a different question entirely, and one unlikely to appear in any AI’s objective function unless someone puts it there deliberately.