Category Archives: Philosophy

James Joyce and fair use

The New Yorker has the background details

Stephen is Joyce’s only living descendant, and since the mid-nineteen-eighties he has effectively controlled the Joyce estate. Scholars must ask his permission to quote sizable passages or to reproduce manuscript pages from those works of Joyce’s that remain under copyright—including “Ulysses” and “Finnegans Wake”—as well as from more than three thousand letters and several dozen unpublished manuscript fragments…
Over the years, the relationship between Stephen Joyce and the Joyceans has gone from awkwardly symbiotic to plainly dysfunctional…

and the Lessig blog the results of the current controversy

As reported at the Stanford Center for Internet and Society, Shloss v. Estate of James Joyce has settled. As you can read in the settlement agreement, we got everything we were asking for, and more (the rights to republish the book). This is an important victory for a very strong soul, Carol Shloss, and for others in her field.

Addendum

Public Rambling on copyright problems in science blogs

The epigenetic landscape

What I always feared, but couldn’t believe, is now confirmed by renowned experts in a new Cell editorial

Historically, the word “epigenetics” was used to describe events that could not be explained by genetic principles.

The term goes back to Conrad Waddington and now covers such bizarre and inexplicable features as paramutation in maize, position effect variegation in Drosophila and methylation in humans. There is a nice analogy to Waddington’s classical 1957 epigenetic landscape figure, in which the course of a ball is influenced by the hills and valleys through which it finally arrives – the pinball arcade game

known factors that may regulate epigenetic phenomena are shown directing the complex movements of pinballs (cells) across the elegant landscape … no specific order of molecular events is implied; as such a sequence remains unknown. Effector proteins recognize specific histone modifications…


About replication validity of genetic association studies and illogical journal policies

Also outside the genetics community, many people wonder why Popper’s account of falsifiability has so readily been abandoned. Karl Popper used falsification in “The Logic of Scientific Discovery” as a criterion of demarcation between what is and what is not genuinely scientific.

Paul K. Feyerabend – one of Popper’s many famous students at the London School of Economics – defended in “Against Method” (Feyerabend 1993) the view that there are no methodological rules that can always be used by scientists. He objected to any single prescriptive scientific method (like falsification), as any such method would limit the activities of scientists and restrict scientific progress. Progress instead occurs where new theories are not consistent with older theories; a new theory also can never be consistent with all relevant facts: this makes falsification attempts useless. Feyerabend advocated the rather anarchistic view that scientific pluralism improves the critical power of science – not schematic rules like “profile population x with SNP panel y and describe all p less than z to finally develop new treatment t”.

Many reasons why genetic association studies failed have already been identified (see Buchanan et al. 2006). Usually high-impact journals get spectacular claims first; halfway between Popper and Feyerabend, the editorial board then looks for falsifiability by demanding additional populations.

As expected, effect sizes will not be exactly the same in different populations; often only a neighbouring SNP “rescues” the initial claim. It has never been decided by any formal process what it means if a third or fourth population does not show the same result. Nor has it been clarified whether falsifiability means that exactly the same SNP needs to be associated in all population studies, or just a haplotype (or just a microsatellite allele) somewhere in that genomic region.
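That effect sizes drift between populations is exactly what sampling variation predicts. A minimal sketch, with entirely made-up counts for the same SNP in an initial and a replication sample, shows how a “failed replication” can still have a confidence interval overlapping the first study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% Wald confidence interval from a 2x2 table
    (a = exposed cases, b = exposed controls,
     c = unexposed cases, d = unexposed controls)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: the "spectacular" first sample, then a replication
pop1 = odds_ratio_ci(60, 40, 40, 60)
pop2 = odds_ratio_ci(110, 90, 95, 105)
for name, (or_, lo, hi) in [("population 1", pop1), ("population 2", pop2)]:
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these numbers the second interval crosses 1 yet overlaps the first – whether that “falsifies” the initial claim is precisely the undecided question.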

Nevertheless, replication validity – in the context of generalization – is permanently used to prove “true” or “meaningful” associations that ultimately deserve a high impact factor. Humans look different and they are different in genetic terms: the high individual variability in expressing a disease trait may reflect not only a highly variable environment but also highly individual genetic pathways. We are willing to accept a causal mutation found in just one family with a monogenic trait, yet there seems to be no way to convince an editorial board that a strong association found in just one study sample is an important discovery that may severely impact exactly this population (given additional functional proof of an otherwise static gene variant).

The absence of large linkage signals and the absence of reproducible genetic associations with nearly all complex diseases may indicate only individual risk gene combinations. It seems that we need to listen to another scholar of Popper – Thomas Kuhn – to change the current paradigm.

Addendum

14-6-07 Finally, Nature published some guidelines for the interpretation of association studies

Sir Francis Bacon: Knowledge is power

which is even true for negative knowledge, e.g. the knowledge that there is no association between factor x and factor y under condition z. As we all know, this is difficult to publish – Technology Review offers some relief:

“Journal of Negative Results – Ecology and Evolutionary Biology” (JNR-EEB)
“Journal of Negative Observations in Genetic Oncology”
“Journal of Interesting Negative Results in Natural Language Processing and Machine Learning”
“Journal of Articles in Support of the Null Hypothesis”
“Journal of Negative Results in Biomedicine”
“Forum for Negative Results” (FNR) inside the “Journal of Universal Computer Science”
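A caveat about negative knowledge: a non-significant result only counts as “knowledge of no association” when the study had power to detect a relevant effect. A minimal sketch under a normal approximation, with made-up numbers:

```python
import math

def power_two_sample(delta, sigma, n, alpha_z=1.959963984540054):
    """Approximate power of a two-sided two-sample test (normal
    approximation) with n subjects per group, true effect delta,
    common standard deviation sigma, alpha = 0.05."""
    z = abs(delta) / (sigma * math.sqrt(2.0 / n)) - alpha_z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z)

# a "negative" study: 20 subjects per group, modest true effect of 0.5 SD
print(power_two_sample(delta=0.5, sigma=1.0, n=20))
```

With power around one third, such a null result is weak evidence for “no association” – which is perhaps another reason why these journals have a hard time.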

Science crowd-sourced

I have recently read about a round-table discussion on “so-called experts” – a frequent topic in environmental circles. I have to say that I do not fear half-baked knowledge so much – even renowned experts occasionally slip into a closely related field where they are no experts at all. Or do you believe that a Nobel prize winner in physics has any primacy in ethics?

In the same vein, there is a comment in Nature Medicine about Wikipedia – complaining that a 4th-year medical student (“who is barely old enough to buy beer”) has such a large influence on medical writing at Wikipedia. As no details of his major errors or misunderstandings follow, I conclude that this comment is more about the beer-drinking habits of the author Brandon Keim.

Anyway, there are quite interesting new sites by medical doctors, like Gantyd (get a note from your doctor; 3000 topic pages, 200 editors from 6 countries) or Ask Dr. Wiki (4 editors; clinical notes, pearls, ECGs, X-ray images and coronary angiograms) – all worth a look.

It’s a small world

Sometimes erroneously described as a global village phenomenon, the notion of a small world goes back to an experiment by Stanley Milgram (who became famous for the “obedience to authority” experiment – I did not know until last week that the punishment experiments had been repeated here in Munich, where 85 percent of the subjects continued to the end!).

The small world theory says that everyone in the world can be reached through a short chain of social acquaintances. The concept gave rise to the famous phrase “six degrees of separation” – I believe that a scientist may even reach another scientist in 4-5 steps.
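Finding such a chain is just a shortest-path search over the acquaintance (or co-author) graph. A minimal sketch with a toy network of made-up links:

```python
from collections import deque

def degrees_of_separation(graph, start, target):
    """Length of the shortest acquaintance chain between two people,
    found by breadth-first search; None if they are not connected."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None

# toy co-author network (the intermediate names are invented)
net = {
    "me": ["A", "B"], "A": ["me", "C"], "B": ["me", "C"],
    "C": ["A", "B", "Sanger"], "Sanger": ["C"],
}
print(degrees_of_separation(net, "me", "Sanger"))  # → 3
```

PubNet is essentially doing this search over joint publications.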

My first PubNet example here is to reach F. Sanger by joint co-authors. This doesn’t work – my estimate would be 3 intermediary steps.

smallw01.png

My second PubNet example is to reach N. Morton (the foreword of his anniversary book says that the qualification of a genetic epidemiologist can be counted in “Newton” points – the number of joint publications with Professor Morton).

smallw2.png

Addendum 8/7/08

Arxiv.org has the largest study so far: 6.6 steps in 30 billion messenger conversations among 240 million people.

Search engines are about algorithms w/o structure, while databases are about structure w/o algorithms

The NYT today has an interesting article about Freebase (no, nothing about cocaine here), a forthcoming semantic web approach.

On the Web, there are few rules governing how information should be organized. But in the Metaweb database, to be named Freebase, information will be structured to make it possible for software programs to discern relationships and even meaning.
For example, an entry for California’s governor, Arnold Schwarzenegger, would be entered as a topic that would include a variety of attributes or “views” describing him as an actor, athlete and politician — listing them in a highly structured way in the database.
That would make it possible for programmers and Web developers to write programs allowing Internet users to pose queries that might produce a simple, useful answer rather than a long list of documents.
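The Schwarzenegger example above can be sketched as structure plus a trivial algorithm on top – a nested mapping of typed “views”, queried for a single answer rather than a document list (the attribute names here are my own invention, not Freebase’s schema):

```python
# a topic with multiple typed "views", as in the NYT example
topics = {
    "Arnold Schwarzenegger": {
        "actor":      {"known_for": ["The Terminator"]},
        "politician": {"office": "Governor of California"},
        "athlete":    {"sport": "bodybuilding"},
    },
}

def query(topics, name, view, attribute):
    """Return one structured answer instead of a list of documents."""
    return topics.get(name, {}).get(view, {}).get(attribute)

print(query(topics, "Arnold Schwarzenegger", "politician", "office"))
# → Governor of California
```

The point of the title: the search engine has the algorithm but no structure, the database the structure but no algorithm – the semantic web wants both.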

Valleywag – the famous tech gossip site – also has something about semantic webs.

Fail better

I truly liked the recent Sjoblom study, but a new Science letter now raises heavy criticism:

… put into stark reality the challenges facing the Human Cancer Genome Project (HCGP). One wonders about the merits of such high-cost, low-efficiency, and ultimately descriptive-type “brute force” studies. Although previously unknown mutated genes were unearthed, the functional consequences of most of these and their actual role in tumorigenesis are unknown, and even with that knowledge we are a long way from identifying new therapeutic targets.

This seems to be the open wound of modern biology: all these high-throughput genotyping / expression profiling / metabolome scanning approaches are mainly money-, impact- and activity-driven – “parameter-free” or “hypothesis-free” has become a fashionable buzz phrase, while only a few years ago it would have been an affront to every serious researcher.

Funny to see also the new Nature initiative opentextmining.org – as nobody wants to read the results of these studies, at least computers should be able to do that. Fail better.

Addendum

Similar criticism of the Neanderthal studies but a different argument

However, although such comparisons are of interest, it is not the static genome but rather the dynamic proteome that determines the phenotype of an organism. Salient examples include the caterpillar and the tadpole, which share genomes with the butterfly and frog, respectively, but which have very different proteomes making them into very different organisms. Thus, rather than performing untargeted comparisons of sizable genomes, we suggest that it might be more useful to address this question using a standard hypothesis-driven approach.

Don’t become a scientist?

A quick link to an open letter – I do not endorse the opinions expressed there…

Now you spend your time writing proposals rather than doing research. Worse, because your proposals are judged by your competitors you cannot follow your curiosity, but must spend your effort and talents on anticipating and deflecting criticism rather than on solving the important scientific problems. They’re not the same thing: you cannot put your past successes in a proposal, because they are finished work, and your new ideas, however original and clever, are still unproven. It is proverbial that original ideas are the kiss of death for a proposal; because they have not yet been proved to work (after all, that is what you are proposing to do) they can be, and will be, rated poorly. Having achieved the promised land, you find that it is not what you wanted after all.

Looks like ‘Research 2.0’ needs to be installed there.

Delusion or poor scientific hypothesis

Delusion is a common symptom of paranoid schizophrenia (ICD-10 F20.0), usually combined with hallucinations (auditory – noises or voices – visual, or other perceptions of smell or taste). The most common paranoid symptoms are delusions of persecution, reference, exalted birth, special mission, bodily change, or jealousy. It was most impressive (and harrowing) to see these patients as a medical student in Vienna at Baumgartner Höhe. Of course I visited Berggasse 19, but there have been more pioneers in Vienna, like Krafft-Ebing.

We are arriving now at my main question: What is the difference between delusion and a scientific hypothesis? This question stems from a recent appraisal of the “TH17 revision” of the TH1/TH2 hypothesis by Lawrence Steinman

A historical perspective on the TH1/TH2 hypothesis is illuminating, both for its insights into important immunological phenomena and for its revelations about how groups of highly trained intellectuals, in this case immunologists, can adhere to an idea for so many years, even in the face of its obvious flaws.

He refers mainly to predictions of EAE outcomes – as an allergologist I could add more examples where the simple TH1/TH2 paradigm did not work. What is the difference between delusion and a scientific hypothesis? In my opinion the answer is context-dependent, as there is not so much difference – a delusion will not be so persistent over time (although it is nearly impossible to convince somebody that he is captured by a delusion), while a poor hypothesis is usually more persistent (but there is a good chance to convince somebody that a hypothesis is wrong). Steinman also has some advice

We should not become fixated on the hypothesis, as if it were a ‘Law’, which in any case may fall in the face of new data that such a Law cannot explain. Most importantly, we should not ignore aberrant data that cannot be explained by a concept, whether it is deemed a Law or, more modestly, a Hypothesis. We should always be careful to explain those quirky aberrant points in the data and those annoying blemishes and flaws in the scientific theory. They may be hiding a tremendous new insight.

Meeting abstract versus full paper

JACI has an interesting letter comparing the number of meeting abstracts with the subsequent publication of a full paper. It seems that there is a large variation, from 11% to 78% – does this really mean that at some meetings half of the talks are not worth writing up? This would explain why at some conferences nobody is taking notes – I am usually playing sudoku, backgammon or go, but I also have interesting podcasts with me.

Poll: Are most published research findings wrong?

–Day 5 of Just Science Week–

John Ioannidis has published a rather influential paper (one that will be read more often than it is cited): “Why Most Published Research Findings Are False”. In principle his arguments are (numbering by me):

1. a research finding is less likely to be true when the studies conducted in a field are smaller
2. when effect sizes are smaller
3. when there is a greater number and lesser preselection of tested relationships
4. where there is greater flexibility in designs, definitions, outcomes, and analytical modes
5. when there is greater financial and other interest and prejudice
6. and when more teams are involved in a scientific field in the chase for statistical significance

According to good scientific practice, this could be tested – the only problem is recognizing whether the result of a single study is wrong. To be continued in 20 years…
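Ioannidis actually quantifies the argument: the post-study probability that a “significant” finding is true depends on the prior odds R of a true relationship in the field, the power, and alpha. A sketch of his formula (the bias-free version), with an illustrative exploratory field:

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a significant finding
    (Ioannidis 2005, no-bias case): PPV = (1-b)R / (R - bR + alpha),
    where b = 1 - power and R = prior odds of a true relationship."""
    beta = 1.0 - power
    return power * R / (R - beta * R + alpha)

# exploratory field: 1 true relationship per 100 tested, low power
print(round(ppv(R=0.01, power=0.2), 2))
```

With these inputs most “positive” findings are indeed false – no empirical 20-year wait needed for the arithmetic itself.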


Less is more

–Day 3 of Just Science Week–

Peer review certainly plays a major role in assuring the quality of science. There are many positive aspects of peer review (plus a few disadvantages, like promoting the mainstream). Systematic research on peer review, however, was largely absent until two decades ago; after five international conferences on peer review there is now also the WAME association of journal editors. Over the years, I have experienced the “cumulative wisdom” thrown at my own papers and of course developed my own style when doing reviews. Last week PLoS Medicine published an interesting study on who makes a good peer reviewer:

These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal’s editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.

The first finding may be unimportant for non-medics, but the second may apply to a larger audience. What I fear – and this is usually not mentioned in the current discussion – is that the peer review system is slowly suffocating. The willingness to do this (unpaid & extra) work is going down as papers (at least in my field) are being produced more and more at an industrial mass-production level. I am getting a review request nearly every second day, while I need between 30 minutes and 3 hours for a paper. So, less is more.

Addendum

For a follow-up, go to sciencesque – a scenario of how science in the post-review phase will work.

Anything better than impact factors?

Here is a nice inside view from the BMC journals – you can watch how often your own papers are being downloaded.

bmc.png

Hopefully these hits are not only generated by search engine spiders, yea, yea.