[wdmmg-dev] openspending.org back up
Carsten Senger
senger at rehfisch.de
Fri Mar 18 23:30:49 UTC 2011
Today openspending.org was down most of the time. The machine was mostly
idle, but almost all connections stalled shortly after every apache
restart.
Stefan struggled to find the cause. Later Rufus found out that the
culprit was enormous mongodb response times for some of the queries.
A brief review of the mongodb logs showed several queries that took an
hour or more.
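A quick way to spot such queries is to grep the mongod log for long
operation times; the log path and the duration format are from memory,
so adjust as needed:

$ grep -E '[0-9]{7,}ms' /var/log/mongodb/mongodb.log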
We suspected it was the amount of data in mongodb. The mongo stats showed
1.4 million objects in the db, 8 gigs of stored data and 12 gigs of
allocated disk space. So we decided to drop data and see what happens.
We dropped eu, us, departments and fts (and barnet, but don't do that ;).
Now we have 330,000 objects, 1 gig of data and 4 gigs of db files on
disk (mongo preallocates big files). The site works well again so far.
Here are the sugarcoated versions of how to remove the datasets and
entries from mongodb and solr. Both could be turned into emergency
scripts; a rough sketch follows each session below.
If you don't want to install mongodb from your OS packages, you can
download a statically linked version from the mongodb site and use it to
connect to another host.
For mongodb the sequence is:
$ ./mongo --host <our db host> <database name>
> db.stats()
...
> db.dataset.distinct('name')
[
"cra",
"barnet",
"israel",
"uganda",
"bund",
"eu",
"us",
"departments",
"fts"
]
> db.entry.find({"dataset.name": "departments"}).count()
178470
> db.entry.remove({"dataset.name": "departments"})
> db.dataset.remove({name: "departments"})
and at the end of the session, a call to repairDatabase() compacts the
files on disk:
> db.repairDatabase()
{ "ok" : 1 }
For solr:
(pyenv)okfn at ip-10-48-54-52:~/var/srvc/openspending.org$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import solr
>>> con = solr.SolrConnection('http://....org:8080/solr/openspending.org')
>>> from wdmmg.lib.solrhelp import drop_index
>>> drop_index(dataset_name="fts", solr=con)
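If solrhelp isn't at hand, roughly the same effect can be had with solrpy
directly; this assumes the entries are indexed with a "dataset" field,
which I haven't checked against our schema:

>>> con.delete_query('dataset:fts')  # delete every document of the dataset
>>> con.commit()                     # commit so the deletion becomes visible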
We also found that we get many requests from search engine bots. When
they start to crawl the site they download a lot of pages and hit the
/api, so we constantly have to compute new aggregates and run big
queries on mongodb.
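If the bots behave, a robots.txt that keeps them off the expensive
endpoints should take some pressure off mongodb; the path and the delay
below are my guesses, and Crawl-delay is only honored by some crawlers:

User-agent: *
Disallow: /api
Crawl-delay: 10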
Nice weekend,
..Carsten