An absolutely wild story

https://twitter.com/sTeamTraen/status/1567521153094230016

 

Regarding computer assistance, there is an interesting video.

 

 

I recolored the Excel sheet of recent tournaments (docs.google.com/spreadsheets/d) but cannot detect any pattern. Maybe it does not make sense to analyze completed games, but only “super moves”?

 

The distribution of scores (in particular scores >72) is, however, suspicious, as there are way too many scores of 100.

# Packages assumed: openxlsx (read.xlsx), dplyr, reshape2 (melt), ggplot2
library(openxlsx); library(dplyr); library(reshape2); library(ggplot2)

# fn holds the path to the downloaded spreadsheet
read.xlsx(fn, sheet = "Sheet1", startRow = 1, colNames = TRUE, rowNames = FALSE, check.names = FALSE) %>%
	select(1, 5:19) %>%               # Tournament column plus the score columns
	slice(1:51) %>%                   # first 51 tournaments
	melt(id.vars = "Tournament") %>%  # long format: one score per row
	ggplot(aes(x = value)) +
	geom_histogram(binwidth = 1, aes(y = ..density..), fill = "blue") +
	geom_density(alpha = .2, fill = "red")

which produces a histogram of the scores with an overlaid density estimate.

Oct 6, 2022

Chess.com published the Niemann report at https://www.chess.com/blog/CHESScom/hans-niemann-report, saying:

in our view there is a lack of concrete statistical evidence that he cheated in his game with Magnus or in any other over-the-board (“OTB”)—i.e., in-person—games. … While his performance in some of these [online] matches may seem to be within the realm of some statistical possibility, the probability of any single player performing this well across this many games is incredibly low … The basic concept of cheat detection, particularly at the top level of chess, is both statistical and manual, involving:
• Comparing the moves made to engine recommended moves
• Removing some moves (opening, some endgame)
• Focusing on key/critical moves
• Discussing with a panel of trained analysts and strong players
• Comparing player past performance and known strength profile
• Comparing a player’s performance to performances of comparable peers
• Looking at the statistical significance of the results (ex. “1 in a million chance of happening naturally”)
• Looking at if there are behavioral factors at play (ex. “browser behavior”)
• Reviewing time usage when compared to difficulty of the moves on the board
Chess.com employs highly-rated and Grandmaster (GM) Fair Play Analysts precisely because there are many situations where humans are required to understand how “human” vs. “computer” a move actually is. Human chess and computer chess are different, even at the highest levels. The best humans play at an Elo rating of 2800. “Stockfish,” the most powerful chess engine, has an estimated rating of more than 3500…
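The “statistical significance” step in the list above can be sketched with a simple binomial test on engine-move agreement. This is only an illustration, not chess.com’s actual method: the numbers (40 critical moves analyzed, 38 matching the engine’s top choice, an assumed 55% match rate for comparable peers) are invented for the example.

# Hedged sketch: how often does a player's move match the engine's
# top recommendation, compared to an assumed peer baseline?
n_moves <- 40    # critical moves analyzed (openings/trivial endgames removed)
matches <- 38    # moves agreeing with the engine's first choice
p_peer  <- 0.55  # assumed match rate of comparable peers (illustrative)

# One-sided test: is this agreement rate implausibly high under the
# peer baseline? A tiny p-value is the "1 in a million" style of evidence.
binom.test(matches, n_moves, p = p_peer, alternative = "greater")

In practice such a test would be only one input among many: as the report stresses, the statistical signal is combined with manual review by strong players, time-usage analysis, and behavioral factors before any conclusion is drawn.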