A Science career should not be like a Mastermind game

You do an experiment or a clinical study, and you are the codebreaker who does not know the peg positions and colors (set by the codemaker).

The codebreaker tries to guess the pattern, in both order and color, within twelve (or ten, or eight) turns. Each guess is made by placing a row of code pegs on the decoding board. Once placed, the codemaker provides feedback by placing from zero to four key pegs in the small holes of the row with the guess. A red or black key peg is placed for each code peg from the guess which is correct in both color and position. A white key peg indicates the existence of a correct color code peg placed in the wrong position (Wikipedia).
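If you want to see how those pegs would actually be counted, here is a minimal sketch of the feedback rule quoted above, written in Python. The function name, the colors, and the example values are my own illustration, not anything from the game's official specification or from this blog.

from collections import Counter

def score_guess(secret, guess):
    # Red/black pegs: right color in the right position.
    black = sum(s == g for s, g in zip(secret, guess))
    # Color overlaps regardless of position, minus the exact hits,
    # give the white pegs: right color, wrong position.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    white = overlap - black
    return black, white

# Hypothetical secret and guess:
print(score_guess(["red", "blue", "green", "red"],
                  ["blue", "blue", "red", "red"]))   # -> (2, 1)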

Each experiment costs considerable time and usually quite a lot of money. Most of the time an experiment provides a result, and if you publish that result, you will get between zero and four pegs.
Zero pegs (“negative findings”) is definitely a great result, but you will never make a science career out of it, for the simple reason that research administrations can only count pegs.

Yes, they are not involved in the game (not a problem), and they are not even interested in the result (not a problem). But they have plenty of time (big problem), so they sum up peg counts, they compute statistics about averages per guess, they count ratios of white and red pegs, and they even invent new, complicated statistics, like the number of guesses whose peg count is at least as large as that guess's rank in the list (or something like that, also called the Hirsch factor).
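To make that last statistic a bit more concrete, here is a small sketch of such a Hirsch-style count applied to the peg analogy: the largest number h such that at least h of your guesses earned h or more pegs. The function name and the peg counts are invented for the example.

def hirsch_pegs(peg_counts):
    # Sort guesses by peg count, best first, and find the largest rank h
    # at which the guess still has at least h pegs.
    ranked = sorted(peg_counts, reverse=True)
    h = 0
    for rank, pegs in enumerate(ranked, start=1):
        if pegs >= rank:
            h = rank
        else:
            break
    return h

# Six published guesses with hypothetical peg counts:
print(hirsch_pegs([4, 3, 3, 1, 0, 2]))   # -> 3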

So the whole award system of science is being optimized towards peg counts, while the public still believes it is about problem solving or something like that.

It could be so simple if they just counted the number of guesses needed until you get four red pegs: the only relevant figure for a great mind, yea, yea (with credits to SD).