Those 4Rs are now suggested as standard review criteria – definitely a great proposal, as even if an experiment can be reproduced, replication is often an issue.
As a scientist you are spammed by lab vendors, congress chairmen and journal editors. Here is a selection of the spam that I received during the last 24 hours – all “journal” titles that I had never heard of before.
Journal of Clinical and Experimental Otolaryngology (Seoul, Korea)
Much can be said about the crisis of science. Yet this is rarely done, since, oddly enough, science is hardly self-reflexive. Why, with ever more knowledge and ever more education, has actual science – formulating hypotheses, checking facts and logical conclusions – retreated further and further in recent years? Continue reading Von vertikaler und horizontaler Wissenschaft
I can no longer read everything I would like to read — not unexpected, given that the number of papers has doubled over the last 10 years. The current reward system is so focused on impact and publication counts – an over-optimization whose negative side effects are now beginning to dominate. Continue reading Science pump in, pump out
Maybe so, but in its structure this society does not differ from the feudalism of the Middle Ages. We live in serfdom. The digital feudal lords like Facebook give us land and say: till it, you may have it for free. And we till it like mad, this land. In the end the liege lords come and take in the harvest.
Everyone has a right to privacy and security (2:30); this comes with a great responsibility (4:58).
For 20 years I have been relying on Analog for reading my webserver logs. But maybe … this is a bit outdated by now, while newer systems such as AWStats, Google Analytics or Piwik are too overloaded for me. My more recent approach with plugins like StatPress or SlimStat degraded my system performance considerably. So why not set up my own database, as this can be done in 30 minutes?
First, get sqliteadmin.php, as we need to build a small database “log.sqlite3”. Create a new table by pasting
CREATE TABLE 'visit' ('id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'page' TEXT, 'dt' DATETIME DEFAULT CURRENT_TIMESTAMP, 'ip' TEXT, 'name' TEXT);
into the SQL window of sqliteadmin.php and include the following code in the header of all your pages:
<?php
// minimal page-hit logger; include this in the header of every page
class MyDB extends SQLite3 { function __construct() { $this->open("log.sqlite3"); } }
$db = new MyDB();
$logpage = "http://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
// anonymize the last IPv4 octet
$logip = $_SERVER['REMOTE_ADDR'];
$logip = substr($logip, 0, strrpos($logip, ".")) . ".xxx";
$logname = isset($_SERVER['PHP_AUTH_USER']) ? $_SERVER['PHP_AUTH_USER']
         : (isset($_SERVER['REMOTE_USER']) ? $_SERVER['REMOTE_USER'] : '');
// use a prepared statement, as REQUEST_URI is user-controlled
$stmt = $db->prepare('INSERT INTO visit (page, ip, name) VALUES (:page, :ip, :name)');
$stmt->bindValue(':page', $logpage, SQLITE3_TEXT);
$stmt->bindValue(':ip', $logip, SQLITE3_TEXT);
$stmt->bindValue(':name', $logname, SQLITE3_TEXT);
$stmt->execute();
$db->close();
?>
That’s all we need to record. Maybe you also want a statistics page? Here it is, showing all recorded traffic (and in brackets what is going on today).
<pre>Logfile Analysis
<?php
class MyDB extends SQLite3 { function __construct() { $this->open("log.sqlite3"); } }
$db = new MyDB();

// all-time hit counts per page (the date condition is always true,
// it just keeps both queries symmetrical)
$sql = "SELECT page, count(page) AS c FROM visit
        WHERE round(julianday('now') - julianday(dt)) >= 0
        GROUP BY page ORDER BY c DESC;";
$ret = $db->query($sql);
$pagearray = array();
while ($row = $ret->fetchArray(SQLITE3_ASSOC)) {
    $pagearray[$row["page"]] = $row["c"];
}

// today's hit counts per page
$sql = "SELECT page, count(page) AS c FROM visit
        WHERE round(julianday('now') - julianday(dt)) = 0
        GROUP BY page ORDER BY c DESC;";
$ret = $db->query($sql);
$pagetoday = array();
while ($row = $ret->fetchArray(SQLITE3_ASSOC)) {
    $pagetoday[$row["page"]] = $row["c"];
}

// list every page with more than 5 hits: total (today's hits in brackets)
foreach ($pagearray as $key => $page) {
    if ($page > 5) {
        $today = isset($pagetoday[$key]) ? $pagetoday[$key] : 0;
        echo "<div class=\"big\">" . $page . " (" . $today . ")</div>" . $key . "<br>";
    }
}
$db->close();
?>
</pre>
Looks like we needed only about 50 lines of code, yea, yea.
Everything about citing can be found in a new article at academics.de, including reasons for citing, its function (methodical, evolutionary, contrarian…) and citability.
I couldn’t find any example online of how to revise my R code for reading the EXIF data from pictures
fn <- c("/usr/local/bin/exiftool /Users/wjst/Desktop/white.tif")
info <- system(fn, intern = TRUE, wait = TRUE)
when moving to knitr. So here is what worked for me as a replacement, including the parsing of the exiftool output.
```{r, engine='bash', echo=FALSE}
/usr/local/bin/exiftool /Users/wjst/Desktop/white.tif >/Users/wjst/Desktop/white.txt
```
```{r Exif, echo=FALSE}
library(knitr)  # provides kable() for the inline table below
fn <- '/Users/wjst/Desktop/white.txt'
info <- readLines(fn)
# exiftool prints "Tag name : value" lines; split at the ": " separator
info <- strsplit(info, ": ", fixed = TRUE)
info <- matrix(data = unlist(info), ncol = 2, byrow = TRUE)
# trim leading/trailing whitespace around tags and values
info <- gsub("(^[[:space:]]+|[[:space:]]+$)", "", info)
```
*Exif*
`r kable(info)`
Chip.de reports how the NSA classifies encryption methods:
Tracking a document on the internet is rated as “trivial”. The effort to read Facebook chats is labelled “minor”, while the NSA has to try “moderately” hard to intercept emails from a Russian mail provider.
…
The anonymity network Tor, the hard-disk encryption TrueCrypt up to version 7.1a, the chat encryption Off-the-Record (OTR), the email encryption PGP (for example in the Gpg4win implementation), as well as the ZRTP protocol for encrypting chats and phone calls (used, for instance, in RedPhone and Signal) apparently cannot be cracked by the NSA.
…
I needed to patch an unused Fritz!Box to serve as DHCP and name server for a new project, as running these services from a network server wasn’t really stable.
The Freetz website has all the details for the patch; I can confirm that Freetz version 2.0 can power a ten-year-old 1und1-branded Fritz!Box SL WLAN (also known as the 3020 type) running firmware 09.04.34.
I compiled everything under Ubuntu/Luna using the terminal:
#!/bin/bash
cat /etc/issue
uname -m
# update system
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get -y dist-upgrade
sudo apt-get -y install imagemagick subversion gcc g++ bzip2 binutils automake \
  patch autoconf libtool pkg-config make libncurses5-dev zlib1g-dev flex bison \
  texinfo tofrodos gettext ecj fastjar realpath perl libstring-crc32-perl \
  ruby ruby1.8 gawk python libusb-dev unzip intltool libglib2.0-dev xz-utils \
  git-core libacl1-dev libattr1-dev libcap-dev
# do NOT run the following as root
cd /home/wjst/Desktop
umask 022
svn checkout http://svn.freetz.org/branches/freetz-stable-2.0
cd freetz-stable-2.0
make menuconfig
make
The new image can then be uploaded using the standard AVM web interface at http://fritz.box. The AVM DHCP server needs to be switched off.
Freetz can be reached at http://fritz.box:81, where the dnsmasq hosts entries can also be configured so that my server is reachable under a permanent IP address and name.
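For example — host name, MAC address and IP are made up here, and I am assuming the Freetz UI accepts standard dnsmasq directives — a fixed lease plus a name entry could look like this:
# always hand the server's MAC the same address (values are examples)
dhcp-host=00:11:22:33:44:55,192.168.178.20
# resolve a fixed name to that address
address=/myserver.box/192.168.178.20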
Slashdot has a feature on that:
Statisticians have long thought it impossible to tell cause and effect apart using observational data. The problem is to take two sets of measurements that are correlated, say X and Y, and to find out if X caused Y or Y caused X. That’s straightforward with a controlled experiment… But in the last couple of years, statisticians have developed a technique that can tease apart cause and effect from the observational data alone. It is based on the idea that any set of measurements always contains noise. However, the noise in the cause variable can influence the effect but not the other way round. So the noise in the effect dataset is always more complex than the noise in the cause dataset. … The results suggest that the additive noise model can tease apart cause and effect correctly in up to 80 per cent of the cases (provided there are no confounding factors or selection effects).
and JMLR gives a more theoretical account:
Based on these deliberations we propose an efficient new algorithm that is able to distinguish between cause and effect for a finite sample of discrete variables.
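To make the additive-noise idea concrete, here is a minimal R sketch — a toy illustration of my own, not the algorithm from the paper; a real test would use a kernel independence measure such as HSIC rather than the crude correlation proxy below.
# toy data: x causes y through a nonlinear function plus independent noise
set.seed(1)
x <- rnorm(500)
y <- x^3 + x + rnorm(500)

# regress b on a and measure how much the residual spread still depends on a;
# in the causal direction the residuals are (roughly) independent of the input
anm.score <- function(a, b) {
  fit <- loess(b ~ a)
  res <- residuals(fit)
  abs(cor(abs(a - mean(a)), abs(res), method = "spearman"))
}

anm.score(x, y)  # small score: x -> y fits the additive noise model
anm.score(y, x)  # larger score: the reverse model leaves dependent residuals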
tbc
It is a ritual that I have seen for many years: all lights are switched on after a talk, and the race for the microphones starts.
The questions asked after a talk are not always the best ones. And as there is a time limit, not all questions can be asked. And why are questions oral only, while the lecture itself was multimedia?
Apart from the fact that asking a question is a mini-presentation by people who do not suffer from low self-esteem, I think we may indeed develop new tools of communication.
Here is my current experimental setup. Before the lecture I open a local hotspot to which up to 250 mobile devices (phones, tablets, laptops) can connect. Each of these clients in the audience gets a comment/upload screen after following the instructions given on the first slide.
All audience responses are written to a database, while attached diagrams or screenshots are saved to a cache directory.
The talk itself is displayed not from PowerPoint but from Chrome in presentation mode. Basically it is a single presentation page built dynamically, with one div per slide. Moving to the next slide just means scrolling to the next div, which can be controlled from any tablet or phone.
Periodically, further divs containing new question slides from the audience are appended to the presentation page.
After the end of the talk, we can go to the admin page, where we may jump to single questions and display them just like the slides from my own talk.
But what is so different here compared to current practice?
If you are interested in testing it, I would be happy to share further details. All you need is a local WLAN router and command-line access to start a local web server.
I need a local wireless LAN for about 100 mobile phones working as message terminals for an info screen projected onto a white wall. As I had an old, unused Fritz!Box, I created a WLAN there but disabled DHCP, as local domains are not supported by the Fritz!OS due to security constraints.
Then I installed dnsmasq on the Macbook and a packet sniffer to see what is going on.
brew update
brew install dnsmasq
brew install wireshark --with-qt
while for DHCP I could use the bootpd server already built into Mavericks. But then things became difficult. Here is what finally worked for me; I am not sure if this is really the best way to handle it…
# needs to be run as root
sudo -s
# WiFi off
networksetup -setairportpower en0 off
# set IP address, net mask, and router
networksetup -setmanual Wi-Fi 192.168.1.2 255.255.255.0 192.168.1.1
# set nameserver IP address
networksetup -setdnsservers Wi-Fi 192.168.1.2
# start nameserver
launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
# start DHCP server
launchctl load -w /System/Library/LaunchDaemons/bootps.plist
# start web server
apachectl start
# WiFi on
networksetup -setairportpower en0 on
# test local domain
dig test.dev
And just for the record, here are the two configuration files for the DHCP and name server, including their paths.
sudo nano /etc/bootpd.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>bootp_enabled</key>
<false/>
<key>detect_other_dhcp_server</key>
<true/>
<key>dhcp_enabled</key>
<array>
<string>en0</string>
</array>
<key>reply_threshold_seconds</key>
<integer>0</integer>
<key>Subnets</key>
<array>
<dict>
<key>allocate</key>
<true/>
<key>dhcp_domain_name_server</key>
<array>
<string>192.168.1.2</string>
</array>
<key>dhcp_router</key>
<string>192.168.1.1</string>
<key>lease_max</key>
<integer>86400</integer>
<key>lease_min</key>
<integer>86400</integer>
<key>name</key>
<string>192.168.1</string>
<key>net_address</key>
<string>192.168.1.0</string>
<key>net_mask</key>
<string>255.255.255.0</string>
<key>net_range</key>
<array>
<string>192.168.1.3</string>
<string>192.168.1.254</string>
</array>
</dict>
</array>
</dict>
</plist>
sudo nano /usr/local/etc/dnsmasq.conf
address=/.dev/192.168.1.2
This single line makes dnsmasq answer every query for a name under .dev with 192.168.1.2, the Macbook’s address.