[Cosmo-dev] testing data migration for cosmo 0.6
jared at wordzoo.com
Mon Dec 4 12:10:15 PST 2006
On Fri, 2006-12-01 at 14:48 -0800, Aparna Kadakia wrote:
> At a minimum, we will require taking a snapshot of production data
> into, say, a staging instance, running the migration script on it, and
> then running tests against this staging instance.
That seems accurate. An additional requirement is that we may need to
run this snapshot/migrate/stage process more than once.
> The real question is
> what tests do we run on this to certify the upgrade. As we already
> know, automated Cosmo tests create their own data and test that, so
> those aren't going to be adequate for testing the migrated data.
I talked with Mikeal extensively last Thursday about possible approaches
to test automation better suited to migrated data. Two I'll describe
here:
1) Testing scripts which keep track of the expected state of a Cosmo
instance. This sort of test would start with an empty instance, run a
batch of operations, and at the end of the tests, have an understanding
of what state the server is in.
The threaded performance test I use to validate Cosmo 0.5 and measure
performance works this way to some extent. As it does each PUT, it
notes that ICS in a data structure for that user. Later, when it does a
random GET, it compares (IIRC) what it gets back with what it put in.
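
To make that concrete, here's a minimal sketch of approach 1 in Python
(using the third-party requests library). The URL layout, credentials,
and helper names are illustrative assumptions, not our actual harness:

    # Track every PUT in a dict, then spot-check random GETs against it.
    import random
    import requests

    BASE = "http://localhost:8080/cosmo/home"  # assumed server layout
    AUTH = ("testuser", "testpass")            # placeholder credentials

    expected = {}  # user -> {url: ics_body}, the expected server state

    def put_item(user, uid, ics_body):
        """PUT an ICS file and record what the server should now hold."""
        url = "%s/%s/%s.ics" % (BASE, user, uid)
        resp = requests.put(url, data=ics_body, auth=AUTH,
                            headers={"Content-Type": "text/calendar"})
        resp.raise_for_status()
        expected.setdefault(user, {})[url] = ics_body

    def check_random_item(user):
        """GET a random previously-PUT item; it must match what we sent."""
        url, body = random.choice(list(expected[user].items()))
        resp = requests.get(url, auth=AUTH)
        resp.raise_for_status()
        assert resp.text == body, "server state diverged at %s" % url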
2) Testing scripts which pair "actions" with "responses", but separate
those in time. For instance, for a given PUT, you can run a GET later
to confirm the operation succeeded. So you could run a PUT and its GET
against server A, run the migration script (to create server B), then
run the GET again against server B. If GET/A == GET/B, then you've
confirmed that item survived the migration intact.
Neither of these actually tests production data already in the osaf.us
instance, though. For that, I envision a combination of new automated
and defined-test-plan manual testing.
> Also how do developers plan to do unit testing of the migration script?
Randy indicated he wasn't previously planning unit testing, but surely
there's some process he goes through to gain confidence that his code
works before release. By understanding his process for "knowing that my
script works", perhaps we can translate it into a test plan or something
automatable.
> What QA test scripts do we need to test the upgrade?
There are some automated tests possible, but they will require new code.
I don't think the following would be too hard to code (a rough sketch
follows the lists below):
* are there the same number of users before and after
* does all the CMP info for each user match before and after
* do users X,Y,Z have the same collections before and after
* do collections A,B,C have the same number of items before and after
Slightly harder but still straightforward are:
* do ICS files Q,R,S match before and after
* do PROPFINDs F,G,H return the same results
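
As a rough sketch of the first two checks (I'm assuming CMP publishes a
user listing as XML; the /cmp/users path and the element handling are
guesses about the protocol, not confirmed details):

    # Compare the CMP user listing of the old and migrated instances.
    import requests
    import xml.etree.ElementTree as ET

    ADMIN = ("root", "changeme")  # placeholder admin credentials

    def user_snapshot(base_url):
        """Return the serialized user records from a CMP user listing."""
        resp = requests.get(base_url + "/cmp/users", auth=ADMIN)
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        return sorted(ET.tostring(el) for el in root)

    before = user_snapshot("http://server-a:8080")  # production snapshot
    after = user_snapshot("http://server-b:8080")   # migrated staging box

    assert len(before) == len(after), "user count changed in migration"
    for a, b in zip(before, after):
        assert a == b, "CMP user record differs between instances"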
> What manual tests will be involved?
This is the tricky one. I'd like to test boundary cases. Tricky bits
seem to include recurrence, non-event items, and dashboard status.
> Should we get the dogfooders to test against this instance?
I think this would be helpful. Mikeal suggested an IRC hour for
coordination, which seemed to work well last time. Mikeal asked how
long the migration would take, to see whether the migration could be
re-done during interactive testing. I suspect it will be very fast,
though I'm not yet sure whether re-running it mid-session would be
helpful.
I have some questions though, about how exactly dogfood testing would
work. I don't think it's necessarily wise to have people update their
existing Chandlers to point to the staging instance. Changes intended
to be permanent would be lost when, right before switchover, the
migration is run for the final time.
Perhaps we can have a good set of people who are comfortable running
side-by-side Chandler instances (pointed at different local
repositories). They would use this second instance to sync their
calendars, then look at 20+ events/items to manually confirm that all
fields are the same. We would ask people to look for specific issues
that are not already tested by any automated testing we develop.
Another question is whether we test any production-data interop other
than Chandler. I think we probably don't need to, as Chandler testing
will be substantial, though it might be helpful to see what people are
using on osaf.us besides Chandler for regular usage.
Overall, what we're really trying to do is raise confidence that the
migration script works. I doubt we'll get to 100% confidence, but the
more testing we do, the higher the confidence. We want to test "real
data", as it's probable that apps have created data a bit differently
than any automated tool will. I'll be happy migrating the service,
osaf.us, to Cosmo 0.6 using the migration tool if we've got a written
test plan and we've executed it well.
Jared Rhine <jared at wordzoo.com>