Here are some simple tools for examining papers.
There is no single-button solution. To extract the pictures I use unar (“brew install unar”) or “pdf2txt.py”, which usually preserves more detail than a screenshot. As a web-based solution, Forensically is useful (alternatives are Fotoforensics, Reveal and the beta version of ImageTwin). I have written my own OpenCV script, though I think Sherloq is the most useful toolbox. There may also be situations where individual color adjustments are needed: I have a local copy of Affinity here, but there is also a nice web-based tool called Minipaint.
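To illustrate the kind of thing a home-grown script can do: one of the simplest checks is flagging pixel blocks that occur more than once in an image, a crude copy-move indicator. This is a hypothetical sketch using only NumPy (the function name and block size are my own choices, not the script mentioned above):

```python
import numpy as np

def find_duplicate_blocks(img, block=8):
    """Return pairs of (y, x) positions whose pixel blocks are byte-identical.
    Crude copy-move indicator: exact duplicates only, no rotation/rescaling."""
    h, w = img.shape[:2]
    seen = {}
    dupes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                dupes.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return dupes

# demo: copy one region of a random test image onto another
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
img[8:16, 8:16] = img[0:8, 0:8]
print(find_duplicate_blocks(img))  # flags the duplicated block pair
```

Real forensic tools work on overlapping blocks with robust features, so this only catches verbatim clones; it is meant to show the principle, not replace Sherloq or Forensically.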
For a text plagiarism check you will probably need a paid service: Docoloc, PlagScan or Turnitin (services compared here), unless you want to copy/paste text blocks into Google Search. For translation I recommend DeepL. For text comparison I used BBEdit, while the Atom split-diff plugin is also suitable (as an online solution, simtext). For PDF comparison I suggest Copyleaks.
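For quick, local text comparison, Python's standard library is often enough. The sketch below computes a word-level similarity ratio in the spirit of tools like simtext; the example sentences are invented, and this is not the algorithm any of the services above actually use:

```python
import difflib

def similarity(text_a, text_b):
    """Word-level similarity ratio between two passages (0.0 to 1.0),
    using difflib's longest-matching-blocks heuristic."""
    return difflib.SequenceMatcher(None, text_a.split(), text_b.split()).ratio()

original  = "The quick brown fox jumps over the lazy dog"
suspected = "The quick brown fox leaped over a lazy dog"
print(round(similarity(original, suspected), 2))  # 0.78
```

For side-by-side viewing of the differences, `difflib.HtmlDiff` can render an HTML table much like a split-diff editor view.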
Current software developments for scanning manuscripts for statistical issues are Barzooka (Nico Riedel), SciScore (Michèle Nuijten), the GRIM test (Nicholas Brown, also online) and GRIMMER (Jordan Anaya, also online). statcheck seems to be out of order, while there is a nice R implementation of the Carlisle method by Nick Brown.
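The idea behind the GRIM test is simple arithmetic: a mean of n integer-valued responses can only be an integer divided by n, so many reported means are impossible for the stated sample size. A minimal sketch of that idea (my own code, not the authors' implementation):

```python
def grim_consistent(mean, n, decimals=2):
    """GRIM check: can a mean reported to `decimals` places arise from
    n integer-valued observations?  Only sums near mean*n need testing."""
    target = round(mean, decimals)
    k = round(mean * n)  # nearest candidate integer sum
    return any(round(c / n, decimals) == target for c in (k - 1, k, k + 1))

print(grim_consistent(3.48, 25))  # True:  87 / 25 = 3.48 is achievable
print(grim_consistent(3.49, 25))  # False: no integer sum over 25 gives 3.49
```

A single inconsistent mean can be a typo; the tools above become interesting when many values in one paper fail the check.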
A short history of detecting image forgery can be found on the Zwelling blog, and an introduction to the theory in a paper by Hany Farid or his book “Fake Photos”. A recent interview with Elisabeth Bik gives further details; an introduction to picture forensics, including details of the nomenclature, is at scienceintegritydigest.
PubPeer is the place where all results can be deposited.
“There is currently worldwide concern over corruption…. This concern touches all Member States and all levels of education,” according to the European Council.