part I
(sound is poor)
part II
part III
part IV
TBC
At stackexchange there is a super interesting discussion on parallelized computer code and DNA transcription (which is different from the DNA-based molecular programming literature…)
IF: Transcriptional activator; when it is present, a gene will be transcribed. In general there is no termination of events unless the signal is gone; the program ends only with the death of the cell. So the IF statement is always part of a loop.
WHILE: Transcriptional repressor; the gene will be transcribed as long as the repressor is not present.
FUNCTION: There are no equivalents of function calls. All events happen in the same space and there is always a likelihood of interference. One can argue that organelles act as compartments that may have function-like properties, but they are highly complex and not just some kind of input-output devices.
GOTO: always dependent on a condition. This can happen in the case of certain network connections such as feed-forward loops and branched pathways. For example, if there is a signalling pathway A → B → C and another connection D → C, then if D is somehow activated it will directly affect C, making A and B dispensable (see the sketch after this list).
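A loose Python sketch of these analogies is given below. It is purely illustrative: the function and gene names are hypothetical, and real transcription is a massively parallel chemical process, not sequential code.

def transcribe(gene):
    print(f"transcribing {gene}")

# IF / activator: as long as the cell lives, the gene is transcribed
# whenever the activating signal is present -- the IF sits inside a loop
# that only ends with the death of the cell.
def activator_logic(cell_alive, signal_present, gene="geneX"):
    while cell_alive():
        if signal_present():
            transcribe(gene)

# WHILE / repressor: transcription proceeds as long as the repressor is absent.
def repressor_logic(repressor_present, gene="geneY"):
    while not repressor_present():
        transcribe(gene)

# GOTO-like shortcut in a branched pathway: A -> B -> C plus a direct link D -> C.
# If D is active, C is reached directly and A and B become dispensable.
def pathway(A_active, D_active):
    if D_active:
        return "C activated via D"
    if A_active:
        return "C activated via A -> B"
    return "C inactive"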
Of course these are completely different concepts. I fully agree with the further stackexchange discussion that
it is the underlying logic that is important and not the statement construct itself and these examples should not be taken as absolute analogies. It is also to be noted that DNA is just a set of instructions and not really a fully functional entity … However, even being just a code it is comparable to a HLL [high level language] code that has to be compiled to execute its functions. See this post too.
Please forget everything you read from Francis Collins about this.
There is a new Science paper that shows
A central promise of artificial intelligence (AI) in healthcare is that large datasets can be mined to predict and identify the best course of care for future patients. … Chekroud et al. showed that machine learning models routinely achieve perfect performance in one dataset even when that dataset is a large international multisite clinical trial … However, when that exact model was tested in truly independent clinical trials, performance fell to chance levels.
This study predicted antipsychotic medication effects for schizophrenia – admittedly not a trivial task, given the high individual variability (there are no extensive pharmacogenetic studies behind it). But why did it fail so completely? The authors highlight two major points in the introduction and detail three in the discussion.
So are we left now without any clue?
I remember another example from Gigerenzer in “Click”, showing misclassification of chest X-rays due to different devices (mobile or stationary), which is associated with more or less serious cases (page 128 refers to Zech et al.). So we need to know the relevant confounding factors first.
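To make this failure mode concrete, here is a minimal simulated sketch (Python/scikit-learn; the data and the “device” artefact are entirely made up for illustration). A classifier that latches onto a device- or site-specific artefact looks near-perfect on held-out data from the same trial and drops to chance level in an independent trial where the artefact no longer tracks the outcome.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_trial(n, artefact_outcome_corr):
    # simulate one trial: 10 uninformative "clinical" features plus one
    # artefact feature (e.g. device type) that may be confounded with outcome
    y = rng.integers(0, 2, n)
    X_clin = rng.normal(size=(n, 10))                    # carries no signal
    artefact = np.where(rng.random(n) < artefact_outcome_corr, y,
                        rng.integers(0, 2, n))           # confounded marker
    return np.column_stack([X_clin, artefact]), y

X_a, y_a = make_trial(2000, artefact_outcome_corr=0.95)  # training trial
X_b, y_b = make_trial(2000, artefact_outcome_corr=0.0)   # independent trial

model = LogisticRegression(max_iter=1000).fit(X_a[:1500], y_a[:1500])

print("held-out AUC, same trial:",
      round(roc_auc_score(y_a[1500:], model.predict_proba(X_a[1500:])[:, 1]), 2))
print("AUC, independent trial:  ",
      round(roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]), 2))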
There is even a first glimpse into the black box of the neural network. Using LRP (layer-wise relevance propagation), the contribution of the individual input features to the recognition can already be visualized as a heatmap.
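For readers who wonder what LRP actually computes: below is a minimal numpy sketch of the epsilon rule on a toy two-layer ReLU network. The weights are random and the example is purely illustrative; in practice the rule is applied layer by layer to a trained network, and the input relevances are rendered as a heatmap over the image.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def lrp_epsilon(W, b, a, R, eps=1e-6):
    # epsilon-rule LRP step: redistribute the relevance R assigned to a
    # layer's outputs back onto its input activations a
    z = W @ a + b                      # forward pre-activations
    s = R / (z + eps * np.sign(z))     # stabilised ratio relevance / activation
    c = W.T @ s                        # backward pass through the weights
    return a * c                       # relevance of the layer inputs

# toy two-layer ReLU network (biases set to zero so relevance is conserved)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

x   = rng.normal(size=4)               # "input features"
a1  = relu(W1 @ x + b1)
out = W2 @ a1 + b2                     # network output = total relevance

R1 = lrp_epsilon(W2, b2, a1, out)      # relevance of the hidden units
R0 = lrp_epsilon(W1, b1, x, R1)        # relevance of the input features

print("input relevances (heatmap values):", np.round(R0, 3))
print("conservation check:", round(float(out.sum()), 3), "~", round(float(R0.sum()), 3))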
Bill Ackman is threatening Harvard faculty on X
I expect that in the not-too-distant future AI will target every paper, not only a suspicious table or an image found by chance. Nevertheless, using this now as a weapon is immoral and carries a high risk of false accusations. And it may even be prosecuted as criminal defamation.
Feb 11, 2025
Unfortunately, scientific integrity is being used again as a personal weapon. Stefan Weber is making a business out of verifying doctoral theses for right-wing clients. Without doubt, he has excellent technical skills (or at least a Turnitin account), but he has also completely lost all sense of proportion and direction. See
Back to yesterday's SPIEGEL, translated:
In some cases, his accusations turned out to be unfounded or less serious than he portrayed them. That’s why he is viewed more critically in Austria. … Until the Föderl-Schmid case, none of this had harmed him much. But for those he accused, it was a different story. Even if the allegations came to nothing, their reputation was tarnished
Gravestein discusses the 70-page arXiv paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” and its 14 indicator properties from 6 areas.
TBC
sleepwatcher(8) System Manager's Manual sleepwatcher(8)
NAME
sleepwatcher – daemon to monitor sleep, wakeup and idleness of a Mac
SYNOPSIS
sleepwatcher [-n] [-v] [-V] [-d] [-g] [-f configfile] [-p pidfile]
[-a[allowsleepcommand]] [-c cantsleepcommand]
[-s sleepcommand] [-w wakeupcommand] [-D displaydimcommand]
[-E displayundimcommand] [-S displaysleepcommand]
[-W displaywakeupcommand] [-t timeout -i idlecommand
[-R idleresumecommand]] [-b break -r resumecommand]
[-P plugcommand] [-U unplugcommand]
DESCRIPTION
sleepwatcher is a program that monitors sleep, wakeup and idleness of a Mac. It can be used to execute a Unix command when the Mac or the display of the Mac goes to sleep mode or wakes up, after a given time without user interaction or when the user resumes activity after a break or when the power supply of a Mac notebook is attached or detached. It also can send the Mac to sleep mode or retrieve the time since last user activity.
1. What is this all about?
Starting with Corona, I have been streaming lectures and church concerts. A MacBook, an old Chromebook, some old iPhones and Nikon DSLR cameras were connected by HDMI cables to a Blackmagic ATEM Mini Pro. This worked well, although HDMI has many shortcomings, as it is basically a protocol for connecting screens to a computer, not devices in a large nave.
2. What are the options?
WiFi transmission would be nice but is probably not the first choice for video transmission in a crowded space, and it comes with considerable latency in the range of 200 ms. SDI is a broadcast industry standard for video, but it would require dedicated and expensive cabling for each video source, including expensive transceivers. The NDI protocol (Network Device Interface) can use existing Ethernet networks and WiFi to transmit video and audio signals. NDI-enabled devices started slowly due to copyright and licensing issues, but the protocol is expected to become a future market leader due to its high performance and low latency.
3. Is there any low-cost but high-quality NDI solution?
NDI-producing video cameras with PTZ (pan/tilt/zoom) movement are expensive, in the range of 1,000-20,000€. But there are NDI encoders for existing hardware like DSLRs or mobile phones. These encoders are sometimes difficult to find; I am listing below what I have used so far. Whenever a signal is live, it can easily be displayed and organized by Open Broadcaster Software running on macOS, Linux or Windows. There are even apps for iOS and Android that can send and receive NDI data, nowadays replacing whole broadcast vans ;-)
4. What do I need to buy?
Assuming that you already have some camera equipment that can produce clean HDMI out (Android needs a USB-to-HDMI cable and iPhones a Lightning-to-HDMI cable), you will need
5. Which devices did you test and which do you currently use?
The Teltonika router and the two Zowietek converters cost less than 500€; the Obsbot also comes in at less than 500€, while this setup allows for semi-professional-grade live streams.
6. Tell me a bit more about your DSLR cameras and the iPhones please.
There is nothing particular: some old Nikon D4s, a Z6 and a Z8, all with good glass, and an outdated iPhone 12 mini without a SIM card.
7. Have you ever tried live streaming directly from a camera like the Insta 360 X3?
No.
8. What computer hardware and software do you use for streaming and does this integrate PTZ control?
I now use a 2017 MacBook (which showed some advantages over a Linux notebook). NDI video has to be delayed to compensate for the latency of other remote NDI sources. I usually sync direct sound and NDI video by delaying the sound by +450 ms in OBS.
Right now I am also testing an iPad app called TopDirector. The app looks promising, but I haven't tested it much in the wild yet.
PTZ control can be managed by an OBS plugin, while TopDirector has PTZ controls already built in.
9. How much setup time do you need?
Setting up 2 DSLR cameras and 1 PTZ camera on tripods with cables takes 30 to 90 minutes. OBS configuration and YouTube setup take another 15 minutes if everything goes well.
The Touch Bar is a nice feature of the MacBook when used as a multimedia machine because it can be individually programmed. Unfortunately it has been abandoned, maybe for reasons unknown to us. At least it now starts flickering at random intervals on my 2019 MacBook Pro. And nothing helps in the long term, while only killing the Touch Bar process disables it immediately. Could it be a combined hardware/software issue?
While I can't fully exclude a minimal battery expansion after 200 cycles, the battery is still marked as OK in the system report. The flickering can also be stopped by shining a bright light at the camera hole at the top of the display, so the second option is also unlikely.
With the Medium hack it is gone during daytime but still occurs sometimes during sleep, which is annoying. It seems that even the dedicated Hide My Bar app is only a user-level application that doesn't have the permissions needed to disable the Touch Bar on the lock screen.
Completely disabling the Touch Bar is not possible as there is no other ESC key. The Touch Bar may need replacement, as recommended by Apple, or just tape on top of it.
“Free synthetic data”? There are numerous Google ads selling synthetic, aka fake, data. How “good” are these datasets? Will they ever be used for scientific publications outside the AI field, e.g. Surgisphere-like?
There is a nice paper by Taloni, Scorcia and Giannaccare that tackles the first question. Unfortunately, a Nature news commentary by Miryam Naddaf is largely misleading when writing …
Just one problem: the video isn’t real. “We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges. Then we prompted Gemini using still image frames from the footage, and prompting via text.” (Parmy Olsen at Bloomberg was the first to report the discrepancy.)
It doesn't even inspire more confidence that Oriol Vinyals now responds:
All the user prompts and outputs in the video are real, shortened for brevity. The video illustrates what the multimodal user experiences built with Gemini could look like. We made it to inspire developers.
May I also emphasize that AI is a research method suffering from severe flaws, as Nature reported again yesterday: “Scientists worry that ill-informed use of artificial intelligence is driving a deluge of unreliable or useless research”
A team in India reported that artificial intelligence (AI) could do it, using machine learning to analyse a set of X-ray images. … But the following September, computer scientists Sanchari Dhar and Lior Shamir at Kansas State University in Manhattan took a closer look. They trained a machine-learning algorithm on the same images, but used only blank background sections that showed no body parts at all. Yet their AI could still pick out COVID-19 cases at well above chance level.
The problem seemed to be that there were consistent differences in the backgrounds of the medical images in the data set. An AI system could pick up on those artefacts to succeed in the diagnostic task, without learning any clinically relevant features — making it medically useless.
not even mentioning here again data leakage
There has been no systematic estimate of the extent of the problem, but researchers say that, anecdotally, error-strewn AI papers are everywhere. “This is a widespread issue impacting many communities beginning to adopt machine-learning methods,” Kapoor says.
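A minimal sketch of the kind of background-only sanity check Dhar and Shamir ran is shown below (Python/scikit-learn on simulated images; the corner-patch location, image size and classifier are placeholders). If a model trained only on patches that contain no anatomy still predicts the label well above chance, the dataset rather than the pathology is carrying the signal.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def background_patch(img, size=32):
    # corner patch assumed to contain no anatomy (placeholder location)
    return img[:size, :size].ravel()

def fake_xray(label):
    # simulate a dataset artefact: the two classes come from different
    # sources whose image backgrounds differ slightly in intensity
    return rng.normal(loc=0.5 * label, scale=1.0, size=(128, 128))

labels = rng.integers(0, 2, 300)
images = [fake_xray(y) for y in labels]

X_bg = np.array([background_patch(im) for im in images])
clf = LogisticRegression(max_iter=2000)
auc = cross_val_score(clf, X_bg, labels, cv=5, scoring="roc_auc").mean()
print(f"background-only AUC: {auc:.2f}  (about 0.5 expected without artefacts)")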