[ckan-dev] test failures

James Gardner james at 3aims.com
Fri Feb 4 11:25:18 UTC 2011


I don't see the case for spending energy reverting anything. I'm fine 
with deleting and recreating the tables with every test class as long as 
the tests aren't slow, which thanks to all the refactoring work they aren't.

James




On 04/02/11 11:21, David Read wrote:
> On 4 February 2011 10:40, Seb Bacon <seb.bacon at gmail.com> wrote:
>
>     Hi,
>
>     On 4 February 2011 09:04, David Read <david.read at okfn.org> wrote:
>     >
>     >
>     > On 3 February 2011 22:35, David Raznick <kindly at gmail.com> wrote:
>     >>
>     >>
>     >> On Thu, Feb 3, 2011 at 9:50 PM, David Read <david.read at okfn.org> wrote:
>     >>>
>     >>> Hi all,
>     >>>
>     >>> I've changed buildbot to run ckan tests twice now: firstly with
>     >>> sqlite and then again with postgres in the same buildbot script (I
>     >>> should split it into two tasks, but that is for another day). I've
>     >>> fixed a few problems with the tests under sqlite and they pass now,
>     >>> but there are still problems with tests in postgres.
>     >>> Seb changed lots of 'rebuild_db' statements to 'clean_db', which
>     >>> then requires 'init_db' at the start of the next test. So there are
>     >>> now 'init_db' statements at the start of a great many tests. Was
>     >>> there any reason for this?
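>     >>> (i.e. a great many test classes now need something like the
>     >>> following at the top of their setup -- a sketch assuming the usual
>     >>> ckan.tests helpers, not a quote of any particular test:)
>     >>>
>     >>>     from ckan import model
>     >>>     from ckan.tests import TestController, CreateTestData
>     >>>
>     >>>     class TestSomething(TestController):
>     >>>         @classmethod
>     >>>         def setup_class(cls):
>     >>>             model.repo.init_db()     # now required: the previous
>     >>>                                      # class's clean_db() dropped
>     >>>                                      # all the tables
>     >>>             CreateTestData.create()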
>
>     Yes, see below.  My intent was for them to be at the start of *all*
>     tests.  Clearly the fact that they weren't was, for whatever reason,
>     being hidden by another bug.
>
>     >> This may not be necessary any more with the fix we did the other day
>     >> on migrate.  However, the in-memory sqlite database gets lost
>     >> mysteriously sometimes and it's hard to figure out where.  That is
>     >> why I thought the init_dbs were there.
>     >
>     > These problems are with postgres in this case, and there is no
>     > mystery about losing the db - we call clean_db!
>     > Note to Seb: Things were probably working in the past because the
>     > 'faster_db_hacks' config option wasn't being read properly, which
>     > David and I fixed last Friday, so the full init_db was happening even
>     > with postgres, whereas it's not now.
>
>     Ah, I wondered about that, seeing as the tests all passed before.
>     Apologies -- at least now I know about the asbool converter :)  As
>     below, my intent was always that the database *should* be recreated
>     with each test.
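>
>     (For the record, the asbool thing in a nutshell -- a minimal sketch;
>     config values arrive as strings, so a plain truth test is misleading:)
>
>         from paste.deploy.converters import asbool
>
>         config = {'faster_db_hacks': 'false'}    # as read from test.ini
>         bool(config['faster_db_hacks'])          # True -- the bug
>         asbool(config['faster_db_hacks'])        # False -- what we want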
>
>     > Also the init_db parameter 'conditional' (BTW what a terrible name
>     > for a parameter...!) is not being used.
>
>     It's not the best, no, but let's not start picking on each other's
>     individual variable name errors :)  It's an accidental artefact of an
>     earlier refactoring, just get rid of it...
>
>
> Ok, but it's used a few times and because the name gives no idea of 
> the intention it's hard to know whether the init_db calls that use 
> this param can be removed too. Hence my suggestion for a tidy up.
>
>     > I think basically this all needs a bit of a review / tidy up to get
>     > it to make sense and work too.
>
>     Either there's something I don't fundamentally get, or we are talking
>     at cross purposes.  Let me try to untangle at least my own intent:
>
>
> Yes, that's what I'm after - thanks.
>
>     >>> It seems a right faff to remember this - is there no better way?
>     >>> Indeed I believe the current test failures are caused by a test not
>     >>> having an init_db.
>
>     When refactoring the tests, I noticed that some called "clean_db" and
>     "init_db" and others didn't.  This was inconsistent. 
>
>
> Looking at cset:c892749f74f0 (before you made changes) there were
> three calls to clean_db in the tests. On each occasion it is
> immediately followed by a create_db or a rebuild_db. So the tests
> *were* consistently expecting the tables to be created.
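>
> (i.e. those call sites looked roughly like this -- a sketch, not an
> exact quote of any of the three:)
>
>     model.repo.clean_db()     # drop everything...
>     model.repo.create_db()    # ...and immediately create the tables again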
>
>
>     Most of the tests assumed test fixtures had been set up,
>
>
> I don't believe that is true. All test classes assume the tables are 
> empty. All over the place you see setup methods creating test data and 
> teardowns deleting it. When the teardown is lazy, they call 
> rebuild_db, which in the past was slower, but sure.
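>
> (The usual shape, sketched with the helpers from ckan.tests -- treat the
> details as an assumption rather than a quote of real code:)
>
>     @classmethod
>     def setup_class(cls):
>         CreateTestData.create()     # tables assumed empty; add fixtures
>
>     @classmethod
>     def teardown_class(cls):
>         CreateTestData.delete()     # tidy up the data we added
>         # or, when the teardown is lazy:
>         # model.repo.rebuild_db()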
>
>     but some had to assume they'd been set up but not reset.
>
>     This state of affairs is much
>     *more* of a faff when trying to debug problems, than knowing that each
>     test is responsible for setting up the fixtures it needs to run.
>     Previously, some (but not all) of the tests had a dependency on some
>     of the others.
>
>
> I agree there shouldn't be dependencies between tests. Certainly there 
> have been a few test methods which relied on state left by a previous 
> method in the class. The ones I've come across I've either decided are 
> a reasonable compromise and labelled very clearly, or I've gone to the 
> trouble to correct them. Yes, there are a few tests which remain 'at 
> large', and we should focus on stamping them out rather than throwing 
> the baby out with the bathwater by deleting and recreating the tables 
> with every test class.
>
>     For example there were loads of tests that called "rebuild_db()" in
>     their teardown methods (which just called "clean_db" and then
>     "init_db").  More explicit (and standalone) for them to call init_db
>     in their setup and clean_db in their teardown.  Less faff when trying
>     to debug problems.
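>
>     (Concretely, the move is roughly from this:
>
>         @classmethod
>         def teardown_class(cls):
>             model.repo.rebuild_db()   # clean_db() + init_db() in one go
>
>     to this -- again a sketch, not a quote of any particular test:)
>
>         @classmethod
>         def setup_class(cls):
>             model.repo.init_db()      # build the tables this class needs
>
>         @classmethod
>         def teardown_class(cls):
>             model.repo.clean_db()     # and drop them again afterwards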
>
>
> So you replaced about 60 rebuild_db calls with clean_db and mandated 
> that every single test starts with init_db. Dropping the tables, in 
> addition to deleting the data, is no help to debugging. I maintain that 
> replacing rebuild_db added faff, and for the sake of ease of writing 
> tests and reducing the code we have to maintain we should revert this.
>
> However, I think this shows that someone new to our tests needs to be 
> able to find our test conventions and intentions documented, so that 
> when there are issues (and it is true there are probably some 
> non-independent tests lurking) we can all pull in the same direction 
> to solve them.
>
> David
>
>     https://bitbucket.org/okfn/ckan/changeset/e60cf442ad5b#chg-ckan/tests/functional/test_authorization_group.py_oldline27
>
>     For me, a principle of good unit testing is that it should be possible
>     to run each test independently of the others.  The previous setup
>     required you to run all the tests at once (or a hard-to-discover
>     subset -- some tests ran stand-alone, others didn't).
>
>     So, instead of setting up one test fixture for all the tests, and then
>     requiring each test to reset the data (sometimes by deleting
>     individual rows, sometimes by recreating the entire fixture set), it
>     felt much more explicit to move to the pattern of every test being
>     responsible for setting up the data it needs.  (Ideally this would
>     mean we'd actually factor out many of the bits of init_db so that
>     individual tests wouldn't have to load the whole lot.)
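>
>     (Purely as a sketch of what that factoring might look like -- the
>     create_minimal_fixtures helper below is hypothetical, not existing
>     code:)
>
>         from ckan import model
>
>         def create_minimal_fixtures():
>             # hypothetical: build the schema, then load only the rows this
>             # test actually needs rather than the full fixture set
>             model.repo.init_db()
>             pkg = model.Package(name=u'test-pkg')
>             model.Session.add(pkg)
>             model.repo.commit_and_remove()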
>
>     That's the thinking!
>
>     >>> Note for everyone: Seb, James and I have agreed to set test.ini to
>     >>> use sqlite.
>     >>
>     >> I disagree with this.  You should specify what database you want in
>     >> your own development.ini.  The template development.ini could have
>     >> sqlite as the default instead.  This is the only sane way to do it in
>     >> my opinion.
>     >
>     > Yes this might be better. Anyone else?
>
>     Ideally I'd go for neither, and just expect people to read the
>     documentation and choose to run tests against the DB of their choice
>     as they desire.  Really, it's not sane either to develop against a
>     database we never use in production; sqlite was always a hack to
>     encourage people to run the tests *at all*.
>
>     However, if we are optimising for
>     making-sure-at-least-a-subset-of-the-tests-are-being-run-at-all,
>     putting it in test.ini is preferable (if not entirely "sane";),
>     because people never read the documentation, and getting the tests to
>     run fast in postgres does require people to read the documentation.
>
>
>     >>> We think it is a good idea to default tests to being run quickly.
>     >>> However, I can't see a good way to then run tests under postgres -
>     >>> if you comment out the sqlite in test.ini (as the buildbot currently
>     >>> does), 9/10 times you end up checking it in by mistake. Maybe
>     >>> someone has a better idea, since passing parameters to nose is a
>     >>> no-no.
>     >>
>     >> Why is this a no-no?
>     >
>     > From what I read, nose doesn't believe in passing parameters onto
>     > tests - they should be stateless. You can fudge it by hacking argv,
>     > but then you lose the flexibility of the many useful nose
>     > command-line params.
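>     >
>     > (One alternative we could consider, sketched only -- the CKAN_TEST_DB
>     > environment variable below is made up for illustration, not something
>     > that exists today:)
>     >
>     >     # e.g. where the test app is configured, before the DB is set up:
>     >     import os
>     >     db_url = os.environ.get('CKAN_TEST_DB')   # hypothetical variable
>     >     if db_url:
>     >         # override whatever test.ini says, without editing the file
>     >         config['sqlalchemy.url'] = db_url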
>
>     Have a separate buildbot.ini?
>
>     Seb
>
