Category Archives: Software

Happy Birthday

A MUST READ by Tim Berners-Lee at Medium

Three and a half decades ago, when I invented the web, its trajectory was impossible to imagine … It was to be a tool to empower humanity. The first decade of the web fulfilled that promise — the web was decentralised with a long-tail of content and options, it created small, more localised communities, provided individual empowerment and fostered huge value. Yet in the past decade, instead of embodying these values, the web has instead played a part in eroding them…

https://solidproject.org/

How does AI recognize AI text?

The Semrush blog has a nice summary

By analyzing two main characteristics of the text: perplexity and burstiness. In other words, how predictable or unpredictable it sounds to the reader, as well as how varied or uniform the sentences are.

Perplexity is

a statistical measure of how confidently a language model predicts a text sample. In other words, it quantifies how “surprised” the model is when it sees new data. The lower the perplexity, the better the model predicts the text.
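
As a toy illustration (my own sketch with made-up numbers, not from the Semrush blog): given the probabilities a language model assigned to each token, perplexity is just the exponential of the average negative log-likelihood.

import math

# Minimal sketch: perplexity from (hypothetical) token probabilities
# that a language model assigned to a text sample.
token_probs = [0.4, 0.1, 0.25, 0.05, 0.3]   # made-up values

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)
print(perplexity)   # lower = the model is less "surprised" by the text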

Burstiness is

the intermittent increases and decreases in activity or frequency of an event. One measure of burstiness is the Fano factor — a ratio between the variance and mean of counts. In natural language processing, burstiness has a slightly more specific definition… A word is more likely to occur again in a document if it has already appeared in the document. Importantly, the burstiness of a word and its semantic content are positively correlated; words that are more informative are also more bursty.
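
A minimal sketch of the Fano factor, assuming we simply count a word across equal-sized chunks of a document (toy data, my own illustration):

import statistics

# Minimal sketch: burstiness of the word "model" as the Fano factor
# (variance/mean) of its counts across document chunks; text is made up.
chunks = [
    "the model predicts the text",
    "a model of a model of a model",
    "nothing to see here at all",
    "the model is surprised",
]
counts = [chunk.split().count("model") for chunk in chunks]   # [1, 3, 0, 1]
fano = statistics.variance(counts) / statistics.mean(counts)  # variance / mean
print(fano)   # > 1 indicates a bursty word, ~1 a Poisson-like one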

Or let’s call it entropy? So we now have some criteria:

    • AI texts are more uniform, more predictable and often repetitive
    • they lack depth and personality
    • sometimes a plagiarism checker may recognize “learned” AI phrases, and sometimes reference checkers will find “hallucinated” references
    • spotting incorrect content and outdated information, in contrast, needs human experts
    • an obvious, yet underappreciated downside: “AI texts have nothing to say” – “clichéd nothingness”

Well, AI prologue sentences now also appear in the scientific literature, for example “Certainly! Here is…”

Pro Tip: Next level OCR of academic documents

Transcribing math documents into LaTeX involves a lot of typing, while there is now some support by FB (Github):

pip install nougat-ocr                        # install the nougat CLI
nougat path/to/file.pdf -o output_directory   # write the recognized markup to output_directory

Parallelized computer code and DNA transcription

At stackexchange there is a super interesting discussion on parallelized computer code and DNA transcription (which is different from the DNA-based molecular programming literature…)

IF : Transcriptional activator; when present, a gene will be transcribed. In general there is no termination of events unless the signal is gone; the program ends only with the death of the cell. So the IF statement is always part of a loop.

WHILE : Transcriptional repressor; the gene will be transcribed as long as the repressor is not present.

FUNCTION: There are no equivalents of function calls. All events happen in the same space and there is always a likelihood of interference. One can argue that organelles act as compartments that may have function-like properties, but they are highly complex and not just some kind of input-output devices.

GOTO is always dependent on a condition. This can happen in case of certain network connections such as feedforward loops and branched pathways. For example, if there is a signalling pathway A → B → C and there is another connection D → C, then if somehow D is activated it will directly affect C, making A and B dispensable.
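
To make the analogy concrete, here is a toy sketch (my own illustration, not from the stackexchange thread; all names are made up):

import random

random.seed(1)

def transcribe(gene):
    print("transcribing", gene)

def signal_present(p=0.5):          # stand-in for any molecular signal
    return random.random() < p

# The "program" only terminates with the death of the cell, so every
# statement effectively sits inside one big loop:
for _ in range(5):                  # stand-in for the cell's lifetime
    if signal_present():            # IF: activator present -> gene is transcribed
        transcribe("gene_A")
    while not signal_present(0.7):  # WHILE: transcribe as long as the repressor is absent
        transcribe("gene_B")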

Of course these are completely different concepts. I fully agree with the further stackexchange discussion that

it is the underlying logic that is important and not the statement construct itself and these examples should not be taken as absolute analogies. It is also to be noted that DNA is just a set of instructions and not really a fully functional entity … However, even being just a code it is comparable to a HLL [high level language] code that has to be compiled to execute its functions. See this post too.

Please forget everything you read from Francis Collins about this.

When AI results cannot be generalized

There is a new Science paper that shows

A central promise of artificial intelligence (AI) in healthcare is that large datasets can be mined to predict and identify the best course of care for future patients.  … Chekroud et al. showed that machine learning models routinely achieve perfect performance in one dataset even when that dataset is a large international multisite clinical trial … However, when that exact model was tested in truly independent clinical trials, performance fell to chance levels.

This study predicted antipsychotic medication effects for schizophrenia – admittedly not a trivial task due to high individual variability (as there are no extensive pharmacogenetic studies behind it). But why did it fail completely? The authors highlight two major points in the introduction and detail three more in the discussion:

  • models may overfit the data by fitting the random noise of one particular dataset rather than a true signal (see the toy sketch after this list)
  • poor model transportability is expected due to patient, provider, or implementation characteristics that vary across trials
  • in particular, patient groups may be too different across trials while this heterogeneity is not captured by the model
  • outcomes and covariates like psychosocial information and social determinants of health were not recorded in all studies
  • patient outcomes may be too context-dependent, as trials may have subtly important differences in recruiting procedures, inclusion criteria and/or treatment protocols
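
As a toy illustration of the first two points (my own sketch with made-up data, not the Chekroud et al. analysis): a classifier that latches onto a site-specific artifact looks excellent in within-trial cross-validation but falls to chance in an independent trial.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trial(n, artifact_strength):
    y = rng.integers(0, 2, n)                             # treatment response
    signal = rng.normal(y * 0.2, 1.0, n)                  # weak true signal
    artifact = rng.normal(y * artifact_strength, 1.0, n)  # site-specific noise
    return np.c_[signal, artifact], y

X_a, y_a = make_trial(500, artifact_strength=2.0)   # trial A: artifact tracks outcome
X_b, y_b = make_trial(500, artifact_strength=0.0)   # trial B: artifact is pure noise

model = LogisticRegression()
print("CV accuracy within trial A:", cross_val_score(model, X_a, y_a).mean())
print("accuracy on independent trial B:", model.fit(X_a, y_a).score(X_b, y_b))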

So are we left now without any clue?

I remember another example by Gigerenzer in “Click” showing misclassification of chest X-rays due to different devices (mobile or stationary), which associate with more or less serious cases (page 128 refers to Zech et al.). So we need to know the relevant co-factors first.

There is even a first understanding of the black-box data shuffling in the neural net. Using LRP (Layer-wise Relevance Propagation), the weighting of input characteristics that drives the recognition can already be visualized as a heatmap.
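
For a tiny fully connected network, the basic relevance redistribution can be sketched in a few lines (my own toy version of the epsilon rule, with random stand-in weights, not a trained model):

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

x = rng.normal(size=4)      # one input sample with 4 features ("pixels")
a1 = np.maximum(0, x @ W1)  # hidden ReLU activations
out = a1 @ W2               # network output = total relevance to distribute

eps = 1e-6
z2 = a1[:, None] * W2                       # contribution of each hidden unit to out
R1 = (z2 / (z2.sum(0) + eps) * out).sum(1)  # relevance of the hidden units
z1 = x[:, None] * W1                        # contribution of each input to each hidden unit
R0 = (z1 / (z1.sum(0) + eps) * R1).sum(1)   # relevance of the input features

# R0 is what gets rendered as a heatmap; relevance is (approximately)
# conserved across layers: R0.sum() ~ R1.sum() ~ out
print(R0)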

Scientific integrity is not a weapon

Bill Ackman is threatening Harvard faculty on TwiX

In the near future AI will target every paper, not only a suspicious table or an image found by chance. Nevertheless, using this now as a weapon seems immoral and carries a high risk of false accusations that may be prosecuted as criminal defamation. Let’s see what happens to the big-mouthed announcements…


The missing sleepwatcher manpage


sleepwatcher(8) System Manager's Manual sleepwatcher(8)

NAME
sleepwatcher – daemon to monitor sleep, wakeup and idleness of a Mac

SYNOPSIS
sleepwatcher [-n] [-v] [-V] [-d] [-g] [-f configfile] [-p pidfile]
[-a[allowsleepcommand]] [-c cantsleepcommand]
[-s sleepcommand] [-w wakeupcommand] [-D displaydimcommand]
[-E displayundimcommand] [-S displaysleepcommand]
[-W displaywakeupcommand] [-t timeout -i idlecommand
[-R idleresumecommand]] [-b break -r resumecommand]
[-P plugcommand] [-U unplugcommand]

DESCRIPTION
sleepwatcher is a program that monitors sleep, wakeup and idleness of a Mac. It can be used to execute a Unix command when the Mac or the display of the Mac goes to sleep mode or wakes up, after a given time without user interaction or when the user resumes activity after a break or when the power supply of a Mac notebook is attached or detached. It also can send the Mac to sleep mode or retrieve the time since last user activity.

FAQ: From HDMI to NDI

1. What is this all about?

Starting with Corona, I have been streaming lectures and church concerts. Old iPhones and Nikon DSLR cameras were connected by HDMI cables to a Blackmagic ATEM Mini Pro, with a Macbook and a Linux laptop for streaming. This worked well in the beginning, although HDMI has many shortcomings, as it is basically a protocol to connect screens to a computer and not devices in a large nave.

  • cables are expensive and there are several connector types (A=Standard, B=Dual, C=Mini, D=Micro, E=Automotive) where the right length and type is always missing
  • it never worked for me over more than 15-20 m, even with an amplifier inserted
  • the signal was never 100% stable; it was even lost in the middle of a performance
  • HDMI is only unidirectional; there is no tally light signal back to the camera
  • there is no PTZ control for camera movement


2. What are the options?

WIFI transmission would be nice but is probably not the first choice for video transmission in a crowded space, and it comes with considerable latency in the range of 200 ms. SDI is the broadcast industry standard for video, but it would require dedicated and expensive cabling for each video source, including expensive transceivers. The NDI protocol (Network Device Interface) can use existing ethernet networks and WIFI to transmit video and audio signals. NDI-enabled devices started slowly due to copyright and license issues, but the protocol is expected to become a future market leader due to its high performance and low latency.


3. Is there any low-cost but high-quality NDI solution?

NDI-producing video cameras with PTZ (pan/tilt/zoom) movements are expensive, in the range of 1,000-20,000€. But there are NDI encoders for existing hardware like DSLRs or mobile phones. These encoders are sometimes difficult to find; I am listing below what I have used so far. Whenever a signal is live, it can be easily displayed and organized by Open Broadcaster Software (OBS) running on macOS, Linux or Windows. There are even apps for iOS and Android that can send and receive NDI data, replacing whole broadcasting vehicles ;-)


4. What do I need to buy?

Assuming that you already have some camera equipment that can produce clean HDMI out (Android needs a USB to HDMI and iPhones a Lightning to HDMI cable), you will need

  • a few cheap CAT6 cables at different lengths (superfluous if you just want a WIFI solution)
  • an industrial router with SIM card slots (satellite transmission is still out of reach for semi-professional use ;-( )
  • one or more HDMI to NDI transceivers
  • an additional PTZ camera is not required at the beginning


5. Which devices did you test and which do you currently use?

  • router: I tested an AVM FritzBox, TP-Link and various Netgear devices, all without RJ45 network ports. I am now using a Teltonika RUT950 with three RJ45 ports, as it has great connectivity and a super detailed configuration menu.
  • NDI transceiver: I tried a DIY solution with FFMPEG/Ubuntu, then a Kiloview P2 and a LinkPi ENC2. I am now using a Zowietek 4K HDMI encoder, which gives a stable signal, is fully configurable and silent, and can be powered by USB or PoE.
  • PTZ: so far I have used a Logitech Pro2 but will replace it with an OBSBOT Tail Air

The Teltonika router and the two Zowietek converters cost less than 500€; the OBSBOT also comes in at less than 500€, while this setup allows for semi-professional grade live streams.


6. Tell me a bit more about your DSLR cameras and the iPhones please.

There is nothing particular: some old Nikon D4s and a Z6, all with good glass, and an outdated iPhone SE without a SIM card.


7. Have you ever tried live streaming directly from a camera like the Insta 360 X3?

No.


8. What computer hardware and software do you use for streaming, and does this integrate PTZ control?

I used a 2017 and a 2020 Macbook (which showed an advantage over a Linux notebook when directly connecting the DSLR). Local video has to be delayed due to the latency of remote NDI sources; I usually sync sound and the different video sources at +400 ms in OBS.
Right now I am testing an iPad app for wireless management called TopDirector. The app looks promising, but I haven’t tested it in the wild so far.
PTZ control can be managed by an OBS plugin, while TopDirector has it already built in.


9. How much setup time do you need?

Setting up 2 DSLR cameras, 1 phone and 1 audio recorder on tripods and laying the cables takes 30-40 minutes. OBS configuration and Youtube setup take another 10 minutes if everything goes well.


Macbook touchbar flickering

The touchbar is a nice feature of the Macbook when used as a multimedia machine because it can be individually programmed. Unfortunately it has been abandoned, maybe for reasons unknown to us. At least it now starts flickering at random intervals in my 2019 Macbook Pro. And nothing helps in the long term:

  • resetting the SMC
  • resetting NVRAM/PRAM
  • killing the touchbar process from the terminal (pkill TouchBarServer)
  • killing the control strip from the terminal (killall ControlStrip)
  • setting the pmset hibernate mode
  • upgrading to Sonoma

while only killing the touchbar disables it immediately. Could it be a combined hardware/software issue?

  • a slowly expanding battery moving the keyboard lid?
  • some defect of the light sensor?
  • a defect when starting the OLED display?

While I can’t fully exclude a minimal battery expansion after 200 cycles, the battery is still marked as OK in the system report. The flickering can be stopped by a bright light at the camera hole on top of the display, so the second option is also unlikely.

With the Medium hack it is gone during daytime but still occurs sometimes during sleep, which is annoying…

Completely disabling the touchbar is not possible (due to the ESC key), so the touchbar may need replacement, as recommended by Apple. I am still exploring some other options, e.g. improving the Medium hack, with no results so far.

Can ChatGPT generate an RCT dataset that isn’t recognized by forensic experts?

“Free synthetic data”? There are numerous Google ads selling synthetic aka fake data. How “good” are these datasets? Will they ever be used for scientific publications outside the AI field, e.g. Surgisphere-like?

There is a nice paper by Taloni, Scorcia and Giannaccare that tackles the first question. Unfortunately, a Nature news commentary by Miryam Naddaf is largely misleading when writing …