[open-science] Let us denounce the pseudo-open Public Library of Science

Paola Di Maio paola.dimaio at gmail.com
Wed Feb 15 04:03:41 UTC 2017


Big topic

Maybe we should change the thread subject to
reproducibility of science

For me, 'hard' science is all about reproducibility - can the same result
be obtained consistently? If a result is not reproducible, it means
the phenomenon needs to be researched/understood further, either by adjusting
methods or samples, etc.

At the same time, I am sure there are a lot of singular events, occurring
only once, that cannot be reproduced - such as astronomical phenomena - and yet
they are studied/researched using a scientific method; that's still good stuff.

A lot also depends on the scale (space, time): some objects of study are of
a scale too large or too small to be reproduced, yet they are observable.

Suddenly I feel this conversation is becoming deep

"-)

Thomas:
> I believe that most researchers would never fabricate data,


I disagree - most researchers will fabricate data if it brings in funds and if
they can get away with it (an argument for replicability).

Heather: replicability is one of many key arguments for open data, purely
to enable results to be verified - as Popper says, not me :-)


P

Paola Di Maio
https://about.me/paoladimaio


On Tue, Feb 14, 2017 at 9:38 PM, Thomas Kluyver <takowl at gmail.com> wrote:

> On 14 February 2017 at 15:54, Heather Morrison <
> Heather.Morrison at uottawa.ca> wrote:
>
>> My argument (so far) is that replicability is a poor argument for open
>> data. Open data does not facilitate replication, nor is it necessary for
>> replication.
>
>
> I think we may be debating the meaning of the word 'replication', but as I
> see it, open data facilitates *partial* replication. We shouldn't pretend
> that rerunning the computation totally verifies a result, but it's not
> pointless either.
>
> I believe that most researchers would never fabricate data, but that it's
> quite common to tweak the analysis a bit to get an interesting result that
> you can publish. If we have the raw data and the computational steps done
> on them, that's the starting point for evaluating: Does it hold up if I use
> this kind of analysis? What if I change this assumption? That point looks
> like an outlier, what happens if I exclude it?
>
> That doesn't mean we should have 100% confidence in the data going into
> the analysis. But I think it would be a big step forward for reproducible
> research if it were normal for the analysis steps to be reproduced.
>
> Thomas
>