[openbiblio-dev] JISC openbib update
Rufus Pollock
rufus.pollock at okfn.org
Wed May 4 15:25:05 UTC 2011
On 4 May 2011 15:58, William Waites <ww at styx.org> wrote:
> * [2011-05-04 15:23:26 +0100] Rufus Pollock <rufus.pollock at okfn.org> writes:
>
> ] Why do we need to write a general purpose JSON-LD parser? (Not trying
> ] to be difficult but to understand!). I had imagined JSON-LD being used
> ] in the context of an API where there would be a limited amount of
> ] material to serialize and deserialize (i.e. we are not trying to do
> ] huge RDF documents ...). Does this restriction help?
>
> Because we want JSON-LD (or some JSON variant, this is far from
> settled, but putting our implementation into the wild will help to
> build consensus) to be used in other projects, ideally by other
> people. We need a parser and a serialiser, and we don't want to have
> to make new ones every time we want to do this.
Understood, but this is a significantly different aim from the original
reason JSON-LD was raised: as a slightly better and slightly more
standard JSON serialization for the bibliographica API than our
home-cooked version ...
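
To make that concrete, here is the sort of thing I had in mind -- a
smallish Entry rendered as JSON-LD, written out as a Python dict. (The
field names, URIs and context below are made up for illustration, not
the actual bibliographica schema, and the exact keywords vary between
JSON-LD drafts.)

    # Hypothetical Entry as JSON-LD (illustrative names only; exact
    # keywords depend on which JSON-LD draft is followed).
    entry = {
        "@context": {
            "dc": "http://purl.org/dc/terms/",
            "bibo": "http://purl.org/ontology/bibo/",
        },
        "@id": "http://bibliographica.org/entry/e123",
        "@type": "bibo:Book",
        "dc:title": "On the Origin of Species",
        "dc:contributor": {"@id": "http://bibliographica.org/entity/darwin"},
        "dc:issued": "1859",
    }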
> In order to be reusable, the right way to do this is at the RDFLib
> level, not higher up in the stack.
Fine but we may have a situation where the best is becoming the enemy
of the good.
> ] But standard RDF/JSON completely defeats the point of having a JSON
> ] (REST) API -- low entry cost for the average web coder (after all, we
> ] could just use SPARQL for a lot of this)
>
> So what has been discovered is that RDF/JSON is easier to do. Now that
> we have actually tried implementing both, the tradeoff can be more
> sensibly evaluated.
But we're not interested in RDF/JSON (since no-one who uses JSON and
isn't already involved in the semantic web will be interested in it).
If RDF/JSON is all we can do, why not just use our big SPARQL endpoint
and be done with it? (Exaggerating a bit, but not that much ...)
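
For comparison, here is a sketch of roughly the same statements in
RDF/JSON (the subject -> predicate -> list-of-value-objects layout).
Again the URIs are made up, but it shows the entry cost for someone
who just wants a title out of the API:

    # The same (hypothetical) entry in RDF/JSON: every object is
    # wrapped in a {"value": ..., "type": ...} dict keyed by the full
    # predicate URI.
    entry_rdf_json = {
        "http://bibliographica.org/entry/e123": {
            "http://purl.org/dc/terms/title": [
                {"type": "literal", "value": "On the Origin of Species"}
            ],
            "http://purl.org/dc/terms/contributor": [
                {"type": "uri",
                 "value": "http://bibliographica.org/entity/darwin"}
            ],
            "http://purl.org/dc/terms/issued": [
                {"type": "literal", "value": "1859",
                 "datatype": "http://www.w3.org/2001/XMLSchema#gYear"}
            ],
        }
    }

To pull out the title you end up writing something like
entry_rdf_json[subject]["http://purl.org/dc/terms/title"][0]["value"],
versus entry["dc:title"] in the JSON-LD sketch above.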
> There is a significant disadvantage with JSON-LD in that it is
> ambiguous in terms of serialisation, and it is also lossy in terms of
> datatypes. We could merge JSON-LD and JSON-LD-CURIE to fix the latter
> problem. To fix the former problem, you will basically need a
> JavaScript library to sanely work with the data.
How badly do these problems affect us in, say, bibliographica.org (as
opposed to the general case)?
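
For concreteness, the ambiguity William describes looks roughly like
this (just a sketch, contexts omitted and URIs made up): the same
statements can come back nested or flattened, and a consumer has to
cope with both.

    # Two JSON-LD renderings of the same statements (sketch); handling
    # both shapes is the ambiguity problem.
    nested = {
        "@id": "http://example.org/entry/e123",
        "dc:contributor": {
            "@id": "http://example.org/entity/darwin",
            "foaf:name": "Charles Darwin",
        },
    }
    flat = [
        {"@id": "http://example.org/entry/e123",
         "dc:contributor": {"@id": "http://example.org/entity/darwin"}},
        {"@id": "http://example.org/entity/darwin",
         "foaf:name": "Charles Darwin"},
    ]
    # And the datatype problem: a bare "1859" is just a string unless
    # the @context (or a typed value) says it is an xsd:gYear.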
> ] OK but it seems we are now doing something different from the
> ] originally anticipated usage of JSON-LD (which was to use it in an
> ] API for transmission of "smallish" objects such as Entry, Collection,
> ] Entity etc).
>
> There was never an anticipated usage of JSON-LD. It was a
> promising-looking serialisation that we would try out. Now that we've
> tried it out, we know more about what it actually means to do.
But our current dictization / serialization is certainly no better
than JSON-LD and arguably worse. That is what JSON-LD was proposed for
(not some general "let's serialize arbitrary RDF" ...)
> ] Also: do we need to do whole records for what is needed in
> ] bibliographica? The original discussion of JSON-LD was about improving
> ] on our current 'ad-hoc' dictization / json serialization. In that
> ] context is JSON-LD not better?
>
> This question is mis-posed because you're thinking too high in the
> stack. The answer is that we need to serialise whatever statements
> we have. This can be an entire graph (record) or a few triples or
> anything in between.
OK, but then we seem to have a problem:
a) RDF/JSON seems to be the only thing readily serializable at that
level (not surprising, since it basically is RDF) -- see the sketch
below.
b) But RDF/JSON is no good for general consumers (the very reason we
are creating the JSON APIs).
rufus