[open-government] [openhouseproject] The Four "A"s of Open Government Data
Josh Tauberer
tauberer at govtrack.us
Sun Feb 12 23:01:37 UTC 2012
(Some replies of course only went to one list or the other --- apologies
if I'm replying to something you didn't see.)
On 02/11/2012 08:58 PM, David Robinson wrote:
> Adaptability.
>
> That captures the spirit of innovation that infuses so much of this
> work. And if data is adaptable, it is also capable of being analyzed
> -- or so I would think?
I like that this makes the focus broader than just analysis, closer to
the meaning of transformation.
On 02/12/2012 12:57 AM, Justin Grimes wrote:
> In comparison to open source, we only ask that code be licensed to
> be open source. We don't ask that code compiles, is well documented,
> works well or as intended, etc. Those are things that might be
> expected or desired but certainly not required of it to be "open".
Even in the open source world, there are dozens of popular licenses. The
minimal requirements for 'open source' aren't necessarily natural ---
they no doubt came out of balancing different views and the pragmatic
need for interoperability of licenses.
The pragmatic needs for data, and especially government data, are
different. If data is meant to serve transparency, then being able to
know what the bits mean matters more than interoperability (for
instance).
On 02/12/2012 04:12 AM, innovation institute wrote:
> There is no accuracy in absolute terms.
That's exactly what I was saying. But in my experience, many agencies
that produce or want to produce data do not have a well-defined sense of
accuracy, or their definition is out of date with respect to the data
they actually publish.
On 02/12/2012 01:21 PM, Gregory Slater wrote:
> What about 'API' for the fourth 'A' ?
On 02/12/2012 04:52 PM, Javier Muniz wrote:
> "queryable"
The fear some of us have with those sorts of recommendations is that
agencies will then skip the bulk data part, and we'll all have to start
getting API keys and bending over backwards to get large slices of the
underlying data for any large-scale analysis.
On 02/12/2012 04:52 PM, Javier Muniz wrote:
> The nice thing about these definitions is that they have real
> (already defined) meaning, and can be tested or measured. Datasets
> could be tagged with their level of normalization, for example "1NF"
"1NF" (or even 3NF) can be a useful definition and recommendation, but
it is very narrow in the types of data it would make sense for (e.g. not
documents).
- Josh Tauberer (@JoshData)
- GovTrack.us | POPVOX.com
http://razor.occams.info | www.govtrack.us | www.popvox.com