No subject


Sun Dec 12 18:29:16 UTC 2010


Hi,

I think that Peter has raised several good points about human validation
and how the validators should not be treated like "monkeys" working for
the project :-)

At CCC we like to say that, instead of crowdsourcing a project, what we
want is "crowdcrafting": all the people participating in the project
will build it because it is important to them.

In any case, you may have to involve people in the later stages of your
project to validate some data, and if you want, you could use PyBossa
for that. If you need more information about the project, let me know.

Cheers,

Daniel

On Sat, Aug 18, 2012 at 11:38 PM, Peter Murray-Rust <pm286 at cam.ac.uk> wrote:

> Thanks, Nathan.
> No one is going to put pressure on you to do anything - but the
> opportunity for jointly coalescing projects is there.
>
> On Sat, Aug 18, 2012 at 10:14 PM, Nathan Rice <
> nathan.alexander.rice at gmail.com> wrote:
>
>>
>> Wow, I'm surprised this has made its way around as much as it has.  I
>> suppose if a project makes a jaded bioinformatics guy like me excited,
>> it shouldn't surprise me that others would find it interesting too.
>>
> There is huge potential in automation, which is why it's exciting. I
> realise I didn't answer your original question - see later.
>
>> I'm still a little bit intimidated by the amount of work that will be
>> involved in getting a really solid, fully automated pipeline.
>
>
> Then take things at a pace that can be managed. No-one has to be a hero
> by themselves.
>
>
>> I'm
>> trying to take it a step at a time.  I've almost finished a curated
>> list of plants, experimental molecules and compounds, and I'm
>> fine-tuning the PubMed search code to reduce the initial
>> signal-to-noise ratio.
>>
>
> It may be that when you expose those you will find overlap with other
> people.
>
>>
>> I'm still not sure exactly how I want to go about selecting articles
>> to use as the training data set for article filtration.  A manually
>> curated list would probably work best, but given the number of
>> features that are available, I expect that the training set would need
>> to be at least 1,000 articles in size to get decent results.  This
>> might just be one of those cases where I need to bite the bullet, put
>> a large pot of coffee on, and get to work.
>>
> To do content mining properly requires a considerable annotated corpus.
> Generally it's split 3 ways - training, testing and validation. But such a
> corpus is very valuable. Unfortunately copyright normally means it can't be
> redistributed (I've had this fight with publishers). However that will
> change as they realise that alienating the world won't work, as they aren't
> very competent totalitarians.
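
As an illustration of the three-way split Peter describes, here is a
minimal sketch (the 70/15/15 ratios and function name are my own
assumptions, not something from this thread):

```python
import random

def split_corpus(articles, train_frac=0.70, test_frac=0.15, seed=0):
    """Shuffle an annotated corpus and split it three ways:
    training, testing and validation (the remainder)."""
    rng = random.Random(seed)
    shuffled = list(articles)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    return (shuffled[:n_train],                  # training
            shuffled[n_train:n_train + n_test],  # testing
            shuffled[n_train + n_test:])         # validation
```

Fixing the seed keeps the split reproducible, which matters when the
corpus itself cannot be redistributed.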
>
>
>> >   - PDF hacking (I have done a lot of this, but we need more: open font
>> > info, Postscript reconstruction)
>>
>> I have played with this a bit. One issue that is frustrating is that many
>> PDF analysis tools will randomly insert spaces due to font kerning, and
>> will order text based on vertical position on the page rather than
>> preserving column order.  If there is a PDF text extraction tool that
>> doesn't do these things, I would love to know.
>>
>
> I work with PDFBox and pull this out character by character. I throw away
> all sequential information and only use coordinates and font-size.  This
> works pretty well for me. I can see some excessive kernings and ligatures
> may defeat it, but at present I suspect I get less than 1 spurious space per
> 1000 chars. And remember we also have vocabularies to help tune this.
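
The coordinate-based reassembly Peter describes could be sketched like
this (PDFBox itself is Java; the per-character tuples and the
`column_edges` parameter are hypothetical illustrations of the ordering
idea, not PDFBox API):

```python
def reorder_characters(chars, column_edges):
    """Reassemble text from per-character (x, y, glyph) tuples, assuming
    the characters were already extracted with page coordinates.
    Sorting by column first, then line (y), then x makes a multi-column
    page read column by column instead of straight down the page."""
    def column(x):
        # Index of the rightmost column whose left edge is <= x.
        col = 0
        for i, edge in enumerate(column_edges):
            if x >= edge:
                col = i
        return col

    ordered = sorted(chars, key=lambda c: (column(c[0]), c[1], c[0]))
    return "".join(glyph for _, _, glyph in ordered)
```

A real implementation would also cluster y values into lines and use
font-size and inter-character gaps to decide where to insert spaces.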
>
>>
>>
>> >   - shallow natural language processing and NLP resources (e.g. vocabs,
>> > character sets)
>> >   - classification techniques (e.g. Lucene/Solr) for text and diagrams
>> >
>> > I think if we harness all these we will have a large step change in the
>> > automation of extraction of scientific information from "the
>> > literature".
>> >
>> > And one-by-one the publishers will come to us because they will need us.
>>
>> It is really a shame that metadata isn't more standardized for journal
>> articles.
>
>
> We have been addressing this in the Open biblio project(s). BibJSON acts
> as an unofficial normalization of article metadata. If you mean
> domain-specific metadata then we have to do this ourselves - and I am
> confident we can - it will be better than keywords (I have little faith
> in them).
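
For reference, a minimal BibJSON-style record might look like the
following (the field names follow common BibJSON usage, but treat the
exact shape and values as an illustrative assumption, not a normative
schema):

```python
# A minimal BibJSON-style bibliographic record (illustrative only).
record = {
    "title": "An example article",
    "author": [{"name": "A. Author"}],
    "year": "2012",
    "journal": {"name": "An Example Journal"},
    "identifier": [{"type": "doi", "id": "10.1234/example"}],
}
```

Because BibJSON is plain JSON, records like this can be exchanged and
indexed without any vendor-specific tooling.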
>
>
>> PubMed MeSH terms and chemical lists are OK, but there is so
>> much more that could be annotated for the article.
>>
>> I am very interested in generic classifiers at this level.
>
>
>
>> > Timescale - about 1 year to have something major to report - about 5
>> > years to change the way scientific information is managed.
>>
>> Scientific articles seem like the perfect place for semantic metadata.
>> In particular, clinical trial articles should have a nice, standard
>> set of metadata artifacts for computer analysis, since they are so
>> cookie-cutter.
>>
>
> I looked at this 2-3 years ago - for clinical trials on nutrition. IIRC
> the abstracts were very useful metadata - they were structured and used
> standard-ish terms. I think they could be NLP'ed quite well.
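
Because structured abstracts use standard-ish section labels, a first
pass can be as simple as a regex split; a sketch (the label set below is
a common convention for clinical abstracts, assumed here for
illustration):

```python
import re

def split_structured_abstract(text):
    """Split a structured abstract into labelled sections.
    The label set is a typical convention, not exhaustive."""
    parts = re.split(
        r"\b(BACKGROUND|OBJECTIVES?|METHODS|RESULTS|CONCLUSIONS?):\s*",
        text)
    # re.split with a capturing group yields:
    # [preamble, label1, body1, label2, body2, ...]
    return {label: body.strip()
            for label, body in zip(parts[1::2], parts[2::2])}
```

Downstream NLP can then target the METHODS and RESULTS sections
directly instead of the whole abstract.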
>
>>
>>
>> I have actually already invested some of my (unfortunately scant)
>> resources into having people go through mined PubMed articles and
>> create metadata annotations.  Unfortunately, without a lot of machine
>> learning input set filtration, this is going to cost at least
>> $10,000-20,000 USD to finish for my purposes, and more every time the
>> list is updated.  It would be much better to get really solid
>> algorithms together so that nobody has to incur costs of this
>> magnitude :)
>>
>
> Ultimately humans have to validate the metadata. You need to measure
> inter-annotator agreement. In chemistry we found that the maximum
> agreement between expert human chemists was 93% for whether a phrase was
> a chemical or not. Machines by definition cannot do better than this.
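
Percent agreement of the kind behind that 93% figure is straightforward
to compute; a minimal sketch (the function name and example labels are
made up for illustration):

```python
def percent_agreement(labels_a, labels_b):
    """Fraction of items on which two annotators assigned the
    same label (simple percent agreement, not chance-corrected)."""
    if len(labels_a) != len(labels_b):
        raise ValueError("annotations must cover the same items")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)
```

A chance-corrected statistic such as Cohen's kappa is usually reported
alongside raw agreement, since two annotators can agree by accident.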
>
> It's tempting to develop crowdsourcing for annotation but it's important
> that the crowd is part of the project, not just passive slaves.
>
>>
>>
> --
> Peter Murray-Rust
> Reader in Molecular Informatics
> Unilever Centre, Dept. of Chemistry
> University of Cambridge
> CB2 1EW, UK
> +44-1223-763069
>



-- 
····································
http://github.com/teleyinex
http://www.flickr.com/photos/teleyinex
····································
Please do NOT use proprietary file formats, such as DOC and XLS, for
exchanging documents; use PDF, HTML, RTF, TXT, CSV or any other format
that does not force the use of a particular vendor's program to read
the information contained in them.
····································




More information about the open-bibliography mailing list