[Cosmo] preserving data during cosmo 0.3 upgrade
bcm at osafoundation.org
Thu Feb 9 12:10:23 PST 2006
On 2/9/06, Jared Rhine <jared at wordzoo.com> wrote:
> If we're talking about whole $OLD/data to a $NEW/data (everything ends
> up in a drop-in directory), I'm a happy camper.
> What config files are needed? Some of those may have been mussed with,
> so I thought I should ask and coordinate.
$OLD/etc/repository.xml and $NEW/etc/repository.xml.
> Thus a pre-dependency feature is the ability to detect what version a
> repo is at. This could be starting a Java process, or as simple as
> reading a flat file with a version number.
Yeah, the current plan is to add a node to the repository with
properties for the schema version and the timestamp of the last update.
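The flat-file option mentioned above can be sketched in a few lines of shell. The `schema.version` filename and layout here are made up for illustration, not the real repository format:

```shell
#!/bin/sh
# Hypothetical version-detection sketch: read a flat version file out
# of a repository directory. "schema.version" is an assumed name.
repo_schema_version() {
    repo_dir=$1
    if [ -f "$repo_dir/schema.version" ]; then
        cat "$repo_dir/schema.version"
    else
        echo "unknown"
    fi
}

# Demo against a throwaway directory standing in for the repo.
demo=$(mktemp -d)
echo "0.3" > "$demo/schema.version"
repo_schema_version "$demo"    # prints 0.3
rm -rf "$demo"
```

An upgrade script could branch on the returned version and refuse to run (or no-op) when the repo is already current.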
> To update production, I see basically:
> - Announce downtime
> - Create new 0.3-based production instance
> - Take down old production service
> - rm -rf $NEW/data
> - run-migration-script --old $OLD/data --new $NEW/data
> - Bring up new production service
You won't rm -rf $NEW/data, since we'll be copying data into it (the
out-of-the-box repository has no user data, but it does have an
initialized schema and seed data); otherwise, yes.
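To make the merge-don't-wipe point concrete, here is a minimal sketch. It just copies files, whereas the real run-migration-script will presumably transform the old data on the way in, so treat the paths and the plain cp as placeholders:

```shell
#!/bin/sh
# Sketch only: merge old user data into the already-seeded $NEW/data
# instead of wiping it. A plain cp stands in for the real migration.
set -e
copy_user_data() {
    old=$1
    new=$2
    # $new/data already holds the initialized schema and seed data,
    # so copy the old user data in alongside it.
    cp -R "$old/data/." "$new/data/"
}

# Demo with throwaway directories standing in for $OLD and $NEW.
OLD=$(mktemp -d); NEW=$(mktemp -d)
mkdir -p "$OLD/data" "$NEW/data"
echo "user data" > "$OLD/data/users.db"
echo "seed data" > "$NEW/data/seed.db"
copy_user_data "$OLD" "$NEW"
ls "$NEW/data"    # seed.db and users.db both present
rm -rf "$OLD" "$NEW"
```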
> How are you going to go about finding those bugs? (Which relates to how
> I find bugs in an update) I'm thinking maybe a small script that just
> dumps the CMP users structure out, and do a before-and-after diff.
> That, plus doing a Chandler sync for my accounts, will probably confirm
> overall data preservation. Beyond that, bugs found are more likely to
> be semantic or protocol changes.
Yep, the CMP script, and possibly a similar one for DAV.
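A sketch of the before-and-after diff plumbing; the two input files here stand in for whatever the (yet-to-be-written) CMP dump script emits, and sorting first keeps record order from causing spurious diffs:

```shell
#!/bin/sh
# Compare two user-dump files, ignoring record order. The dump files
# themselves come from a hypothetical CMP dump script, not shown here.
check_preserved() {
    before=$1
    after=$2
    sort "$before" > "$before.sorted"
    sort "$after"  > "$after.sorted"
    if diff "$before.sorted" "$after.sorted" >/dev/null; then
        echo "preserved"
    else
        echo "DIFFERS"
    fi
}

# Demo: same users, different order, should still count as preserved.
b=$(mktemp); a=$(mktemp)
printf 'alice\nbob\n' > "$b"
printf 'bob\nalice\n' > "$a"
check_preserved "$b" "$a"    # prints preserved
rm -f "$b" "$a" "$b.sorted" "$a.sorted"
```

Run it once against the 0.2 instance before migrating and once against the 0.3 instance after, then diff the two dumps.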
> Did you have anything particular in mind for testing?
Clicking around to verify manually ;) Your script idea is a good one.