[Cosmo] preserving data during cosmo 0.3 upgrade

Jared Rhine jared at wordzoo.com
Thu Feb 9 03:50:27 PST 2006


On Tue, 2006-02-07 at 14:39 -0800, Brian Moseley wrote:
> such a tool could be run while the server is offline...

Offline +1

>  * it will be packaged together with all dependencies and run via shell script

Ok.  The details aren't important; a raw java command line with -D
properties or command-line arguments would be fine too.  No need for
this to be pretty, though it does set a fine precedent for future data
migrations.

>  * it will not attempt to convert a 0.2 repository "in place" but
> rather will copy the data out the 0.2 repository and user db into a
> new, blank 0.3 repository

If we're talking about whole $OLD/data to a $NEW/data (everything ends
up in a drop-in directory), I'm a happy camper.

>  * it will accept as input (via command line options) the locations of
> the data directories and config files for the source (0.2) repository
> and destination (0.3) repository

What config files are needed?  Some of those may have been mussed with,
so I thought I should ask and coordinate.

>  1) ... 2) ... 3) ...

Should be fine if the semantics haven't changed :)

> since passwords are stored in an encrypted format in the 0.2 user db,
> we'll have to extend the 0.3 internal apis...

Good.

> in the long term i think we'll want an upgrade tool that can convert
> from an arbitrary older version, so that one could upgrade a 0.2
> server to 0.6 with the same tool rather than requiring 4 separate
> tools.

+1 on punting.  A reasonably robust framework is just a wrapper script
that runs the various steps, as separate scripts, in the right order.
If you write independent data-migration scripts, then you have a
customizable framework that can run detection and migration steps in
any sequence you want.
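A minimal sketch of such a wrapper, assuming each step is an
independent script (modeled here as a shell function; the step names
are hypothetical):

```shell
#!/bin/sh
# Hypothetical migration wrapper: each step is an independent
# script (stubbed as a function here); the wrapper's only job is
# to run them in order and stop on the first failure.
set -e

migrate_0_2_to_0_3() { echo "migrating 0.2 -> 0.3"; }
migrate_0_3_to_0_4() { echo "migrating 0.3 -> 0.4"; }

for step in migrate_0_2_to_0_3 migrate_0_3_to_0_4; do
    "$step"
done
```

Adding a 0.4-to-0.5 step later means dropping in one more script and
appending it to the list.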

Thus a prerequisite feature is the ability to detect what version a
repo is at.  That could mean starting a Java process, or be as simple
as reading a flat file with a version number.
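For the flat-file approach, detection could be as little as this (the
VERSION filename and the 0.2 fallback are both assumptions):

```shell
#!/bin/sh
# Sketch: read a repository's version from a flat file.  Repos that
# predate version stamping are assumed to be 0.2 (both the filename
# and the fallback are hypothetical).
detect_version() {
    if [ -f "$1/VERSION" ]; then
        cat "$1/VERSION"
    else
        echo "0.2"
    fi
}

mkdir -p /tmp/demo-repo && echo "0.3" > /tmp/demo-repo/VERSION
detect_version /tmp/demo-repo    # prints 0.3
```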

To update production, I see basically:

- Announce downtime
- Create new 0.3-based production instance
- Take down old production service
- rm -rf $NEW/data
- run-migration-script --old $OLD/data --new $NEW/data
- Bring up new production service
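The sequence above, sketched as one script against throwaway
directories (the paths, the service start/stop, and the migration
command itself are all placeholders -- run-migration-script doesn't
exist yet):

```shell
#!/bin/sh
# Cutover sketch: paths and the migration step are stand-ins.
set -e

OLD=/tmp/cosmo-old
NEW=/tmp/cosmo-new
mkdir -p "$OLD/data" "$NEW/data"
touch "$OLD/data/repository.dat"   # stand-in for the 0.2 repository

# (announce downtime, take down the old production service)
rm -rf "$NEW/data"                 # start from a blank 0.3 data dir
mkdir -p "$NEW/data"
cp -r "$OLD/data/." "$NEW/data/"   # stand-in for run-migration-script
# (bring up the new production service)
```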

> i estimate that building the upgrade tool as proposed, testing on
> copies of the foxcloud and/or cosmo-demo repositories, and fixing
> found bugs will take 3-4 days.

How are you going to go about finding those bugs?  (Which relates to
how I'd find bugs in an update.)  I'm thinking maybe a small script
that just dumps the CMP users structure out, so we can do a
before-and-after diff.  That, plus doing a Chandler sync for my
accounts, would probably confirm overall data preservation.  Beyond
that, any bugs found are more likely to be semantic or protocol
changes.
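The before-and-after check could be as simple as this sketch (the
one-user-per-line dump format is an assumption; the real dump would
come from a small program reading the repository, stubbed here as
dump_users):

```shell
#!/bin/sh
# Sketch: dump user records before and after migration, then diff.
# dump_users is a placeholder for whatever actually reads the repo;
# sorting makes the comparison order-independent.
set -e

dump_users() { sort "$1/users.txt"; }

mkdir -p /tmp/repo-before /tmp/repo-after
printf 'alice\nbob\n' > /tmp/repo-before/users.txt
printf 'bob\nalice\n' > /tmp/repo-after/users.txt  # same users, new order

dump_users /tmp/repo-before > /tmp/users.before
dump_users /tmp/repo-after  > /tmp/users.after
diff -u /tmp/users.before /tmp/users.after && echo "users preserved"
```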

Did you have anything particular in mind for testing?

> thoughts on the proposal?

Certainly seems reasonable and in line with what I'd expect for a
good-enough solution.

-- 
Jared Rhine <jared at wordzoo.com>