Tag Archives: Science + Philosophie

Journals under Threat

Under the headline “Journals under Threat: A Joint Response from HSTM Editors”, the editors of some of the leading international journals for history and philosophy of science and social studies of science have issued a joint declaration, which I received by email and am reprinting here to give it a larger audience.

We live in an age of metrics. All around us, things are being standardized, quantified, measured. Scholars concerned with the work of science and technology must regard this as a fascinating and crucial practical …

Our lives are unrepeatable experiments lacking a control

a prosaic quotation from the recent Nature correspondence section that highlights why genetics has been leading us nowhere.

Our lives are unrepeatable experiments lacking a control. Myriad external factors interact with genetic and epigenetic factors and with chance to determine whether we are well or ill, smart or dull, successes or failures.

Yea, yea.

Antedisciplinary Science

… another thoughtful essay by Sean Eddy in PLOS Computational Biology cites the NIH Roadmap Initiative

The scale and complexity of today’s biomedical research problems demand that scientists move beyond the confines of their individual disciplines and explore new organizational models for team science. Advances in molecular imaging, for example, require collaborations among diverse groups—radiologists, cell biologists, physicists, and computer programmers.

which sounds great, like all interdisciplinary science, but also has all the drawbacks (“to temper the wind to the shorn lamb” is perhaps the closest English rendering of the German idiom about the “weakest link of the chain”).

Progress is driven by new scientific questions, which demand new ways of thinking. You want to go where a question takes you, not where your training left you.

Sure, the game is more about interdisciplinary people than about interdisciplinary teams. It is

A motley crew of misfits

that drives progress, and not EU accountants.

How much do you think a scientific blog post is worth

(in US dollars) asks Pimm, the partial immortalization blog. A first answer to this question, based on Google AdSense revenues, is about $0.47 per post. I think the price depends on context: anything is possible, from a negative balance (wasted time) to a new research direction (tenure plus $100,000).
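The $0.47 figure is just ad revenue divided by output. A minimal sketch, with invented placeholder numbers rather than Pimm’s actual AdSense data:

```python
# Back-of-the-envelope AdSense estimate.
# Both numbers are invented placeholders, not Pimm's actual figures.
monthly_ad_revenue_usd = 14.10  # assumed monthly Google AdSense payout
posts_per_month = 30            # assumed posting frequency

value_per_post = monthly_ad_revenue_usd / posts_per_month
print(f"ad revenue per post: ${value_per_post:.2f}")  # -> ad revenue per post: $0.47
```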

James Joyce and fair use

The New Yorker has the background details

Stephen is Joyce’s only living descendant, and since the mid-nineteen-eighties he has effectively controlled the Joyce estate. Scholars must ask his permission to quote sizable passages or to reproduce manuscript pages from those works of Joyce’s that remain under copyright—including “Ulysses” and “Finnegans Wake”—as well as from more than three thousand letters and several dozen unpublished manuscript fragments…
Over the years, the relationship between Stephen Joyce and the Joyceans has gone from awkwardly symbiotic to plainly dysfunctional…

and the Lessig blog has the outcome of the current controversy

As reported at the Stanford Center for Internet and Society, Shloss v. Estate of James Joyce has settled. As you can read in the settlement agreement, we got everything we were asking for, and more (the rights to republish the book). This is an important victory for a very strong soul, Carol Shloss, and for others in her field.

Addendum

Public Rambling on copyright problems in science blogs

The epigenetic landscape

What I always feared but couldn’t believe is now confirmed by renowned experts in a new Cell editorial

Historically, the word “epigenetics” was used to describe events that could not be explained by genetic principles.

It goes back to Conrad Waddington and now describes such bizarre and inexplicable features as paramutation in maize, position-effect variegation in Drosophila and methylation in humans. There is a nice analogy to Waddington’s classical 1957 epigenetic landscape figure, in which hills and valleys determine the course of the ball and where it finally arrives: the pinball arcade game

known factors that may regulate epigenetic phenomena are shown directing the complex movements of pinballs (cells) across the elegant landscape … no specific order of molecular events is implied, as such a sequence remains unknown. Effector proteins recognize specific histone modifications…


About replication validity of genetic association studies and illogical journal policies

Also outside the genetics community, many people wonder why Popper’s account of falsifiability has so readily been abandoned. Karl Popper used falsification in “The Logic of Scientific Discovery” as a criterion of demarcation between what is and what is not genuinely scientific.

Paul K. Feyerabend, one of Popper’s many famous students at the London School of Economics, defended in “Against Method” (Feyerabend 1993) the view that there are no methodological rules which can always be used by scientists. He objected to any single prescriptive scientific method (like falsification), as any such method would limit the activities of scientists and restrict scientific progress. Progress instead occurs where new theories are not consistent with older theories; moreover, a new theory can never be consistent with all relevant facts: this makes falsification attempts useless. Feyerabend advocated the rather anarchistic view that it is scientific pluralism that improves the critical power of science, not schematic rules like “profile population x with SNP panel y and describe all p less than z to finally develop new treatment t”.

Many reasons why genetic association studies fail have already been identified (see Buchanan et al. 2006). Usually, high-impact journals get spectacular claims first; halfway between Popper and Feyerabend, the editorial board then looks for falsifiability by demanding additional populations.

As expected, effect sizes will not be exactly the same in different populations; often only neighbouring SNPs “rescue” the initial claim. It has never been decided by any formal process what it means if a third or fourth population does not show the same result. Nor has it ever been clarified whether falsification requires that exactly the same SNP be associated in all population studies, or just a haplotype (or just a microsatellite allele) somewhere in that genomic region.
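To make the undecided criterion concrete, here is a minimal sketch of the two possible readings of “replication”; the SNP names, regions and p-values are invented for illustration and reflect no real study:

```python
# Hypothetical association results per population: (SNP, region, p-value).
results = {
    "population_1": [("rs111", "17q21", 1e-7)],
    "population_2": [("rs222", "17q21", 2e-5)],
    "population_3": [("rs333", "17q21", 1e-4)],
}

ALPHA = 5e-4  # assumed significance threshold

def strict_replication(snp):
    """Strict reading: exactly the same SNP must be significant everywhere."""
    return all(
        any(s == snp and p < ALPHA for s, _, p in hits)
        for hits in results.values()
    )

def relaxed_replication(region):
    """Relaxed reading: any significant marker in the same genomic region counts."""
    return all(
        any(r == region and p < ALPHA for _, r, p in hits)
        for hits in results.values()
    )

print(strict_replication("rs111"))   # False: the exact SNP differs between populations
print(relaxed_replication("17q21"))  # True: each population has some 17q21 hit
```

Depending on which reading an editorial board silently applies, the very same data count as a successful replication or as a failed one.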

Nevertheless, replication validity – in the sense of generalization – is permanently used to prove a “true” or “meaningful” association that ultimately deserves a high impact factor. Humans look different, and they are different in genetic terms: the high individual variability in expressing a disease trait may reflect not only a highly variable environment but also highly individual genetic pathways. We are willing to accept a causal mutation found in just one family with a monogenic trait, yet there often seems to be no way to convince an editorial board that a strong association found in just one study sample is an important discovery that may severely affect exactly this population (given additional functional proof of an otherwise static gene variant).

The absence of large linkage signals and the absence of reproducible genetic associations with nearly all complex diseases may indicate merely individual risk gene combinations. It seems that we need to listen to another of Popper’s students, Thomas Kuhn, to change the current paradigm.

Addendum

14-6-07: Finally, Nature published some guidelines for the interpretation of association studies.

Sir Francis Bacon: Knowledge is power

which is even true for negative knowledge, e.g. the knowledge that there is no association between factor x and factor y under condition z. As we all know, this is difficult to publish; Technology Review offers some relief:

“Journal of Negative Results – Ecology and Evolutionary Biology” (JNR-EEB)
“Journal of Negative Observations in Genetic Oncology”
“Journal of Interesting Negative Results in Natural Language Processing and Machine Learning”
“Journal of Articles in Support of the Null Hypothesis”
“Journal of Negative Results in Biomedicine”
“Forum for Negative Results” (FNR) within the “Journal of Universal Computer Science”

Science crowd-sourced

I have recently read about a round-table discussion on “so-called experts”, a frequent topic in environmental circles. I have to say that I do not fear half-baked knowledge that much; even renowned experts occasionally slip into a closely related field where they are no experts at all. Or do you believe that a Nobel prize winner in physics has any primacy in ethics?

In the same vein, there is a comment in Nature Medicine about Wikipedia, complaining that a 4th-year medical student (“who is barely old enough to buy beer”) has such a large influence on medical writing at Wikipedia. As no details of his major errors or misunderstandings follow, I conclude that this comment is more about the beer-drinking habits of its author, Brandon Keim.

Anyway, there are quite interesting new sites by medical doctors, like Gantyd (get a note from your doctor; 3,000 topic pages, 200 editors from 6 countries) or Ask Dr. Wiki (4 editors; clinical notes, pearls, ECGs, X-ray images and coronary angiograms), all worth a look.

It’s a small world

Sometimes erroneously described as the global village phenomenon, the notion of a small world goes back to an experiment by Stanley Milgram (who became famous with the “obedience to authority” experiment; I did not know until last week that the punishment experiments had been repeated here in Munich, where 85 percent of the subjects continued to the very end!).

The small world theory says that everyone in the world can be reached through a short chain of social acquaintances. The concept gave rise to the famous phrase “six degrees of separation”; I believe that a scientist may even reach another scientist in 4–5 steps.

My first PubNet example here is to reach F. Sanger by joint co-authors. This doesn’t work – my estimate would be 3 intermediary steps.

[Figure: PubNet co-author network around F. Sanger]

My second PubNet example is to reach N. Morton (the foreword of his anniversary book says that the qualification of a genetic epidemiologist can be counted in “Newton” points: the number of joint publications with Professor Morton).

[Figure: PubNet co-author network around N. Morton]
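In graph terms, both examples ask for the shortest path in a co-authorship network. A minimal breadth-first-search sketch over an invented toy graph (real PubNet data would be far denser):

```python
from collections import deque

# Toy co-authorship graph: author -> co-authors. Names and links are invented.
coauthors = {
    "me": ["A", "B"],
    "A": ["me", "C"],
    "B": ["me"],
    "C": ["A", "F. Sanger"],
    "F. Sanger": ["C"],
}

def shortest_chain(start, target):
    """Breadth-first search for the shortest co-author chain."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in coauthors.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain found, as in the Sanger example above

print(shortest_chain("me", "F. Sanger"))  # ['me', 'A', 'C', 'F. Sanger']: 2 intermediaries
```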

Addendum 8/7/08

arXiv.org has the largest study so far: 6.6 steps in 30 billion messenger conversations among 240 million people.

Search engines are about algorithms w/o structure, while databases are about structure w/o algorithms

The NYT today has an interesting article about Freebase (no, nothing about cocaine here), a forthcoming semantic web approach.

On the Web, there are few rules governing how information should be organized. But in the Metaweb database, to be named Freebase, information will be structured to make it possible for software programs to discern relationships and even meaning.
For example, an entry for California’s governor, Arnold Schwarzenegger, would be entered as a topic that would include a variety of attributes or “views” describing him as an actor, athlete and politician — listing them in a highly structured way in the database.
That would make it possible for programmers and Web developers to write programs allowing Internet users to pose queries that might produce a simple, useful answer rather than a long list of documents.
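As a rough sketch of the idea (the schema and field names below are invented, not Freebase’s actual data model): a single topic carries several typed “views”, so a program can return a direct answer instead of a list of documents:

```python
# Invented, Freebase-like structured topic: one entity, several typed "views".
topic = {
    "id": "/en/arnold_schwarzenegger",
    "name": "Arnold Schwarzenegger",
    "views": {
        "politician": {"office": "Governor of California", "since": 2003},
        "actor": {"known_for": ["The Terminator", "Total Recall"]},
        "athlete": {"sport": "bodybuilding", "titles": ["Mr. Olympia"]},
    },
}

def answer(topic, view, attribute):
    """Look up a single attribute: a direct answer, not a document list."""
    return topic["views"].get(view, {}).get(attribute)

print(answer(topic, "politician", "office"))  # -> Governor of California
```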

Valleywag – the famous tech gossip blog – also has something about the semantic web.

Fail better

I truly liked the recent Sjoblom study, but a new Science letter now raises heavy criticism:

… put into stark reality the challenges facing the Human Cancer Genome Project (HCGP). One wonders about the merits of such high-cost, low-efficiency, and ultimately descriptive-type “brute force” studies. Although previously unknown mutated genes were unearthed, the functional consequences of most of these and their actual role in tumorigenesis are unknown, and even with that knowledge we are a long way from identifying new therapeutic targets.

This seems to be the open wound of modern biology: all these high-throughput genotyping / expression profiling / metabolome scanning approaches are mainly driven by money & impact & activity. “Parameter-free” or “hypothesis-free” has become a fashionable buzz phrase, while only a few years ago it would have been an affront to every serious researcher.

It is also funny to see the new Nature initiative opentextmining.org: as nobody wants to read the results of these studies, at least computers should be able to do so. Fail better.

Addendum

Similar criticism has been raised against the Neanderthal studies, but with a different argument:

However, although such comparisons are of interest, it is not the static genome but rather the dynamic proteome that determines the phenotype of an organism. Salient examples include the caterpillar and the tadpole, which share genomes with the butterfly and frog, respectively, but which have very different proteomes making them into very different organisms. Thus, rather than performing untargeted comparisons of sizable genomes, we suggest that it might be more useful to address this question using a standard hypothesis-driven approach.

Don’t become a scientist?

A quick link to an open letter – I do not endorse the opinions expressed there…

Now you spend your time writing proposals rather than doing research. Worse, because your proposals are judged by your competitors you cannot follow your curiosity, but must spend your effort and talents on anticipating and deflecting criticism rather than on solving the important scientific problems. They’re not the same thing: you cannot put your past successes in a proposal, because they are finished work, and your new ideas, however original and clever, are still unproven. It is proverbial that original ideas are the kiss of death for a proposal; because they have not yet been proved to work (after all, that is what you are proposing to do) they can be, and will be, rated poorly. Having achieved the promised land, you find that it is not what you wanted after all.

Looks like ‘Research 2.0’ needs to be installed there.