[Dev] Keeping Users' Data

Phillip J. Eby pje at telecommunity.com
Fri Feb 10 17:28:21 PST 2006

At 03:01 PM 2/10/2006 -0800, Katie Capps Parlante wrote:
>(3) In the 0.7 timeframe, we're going to have midpoints (maybe these are 
>the same as milestones, full proposal coming) that the user can download 
>and use. This implies that we need to support a path for users to migrate 
>data from 0.6.1 --> 0.7m1, from 0.7m1 --> 0.7m2, etc.

And will .ics be a sufficiently high-fidelity format for these transfers?

>(8) There should be some path for extension parcels to have data migrated 
>along with the ootb schemas, but it can require extra work from the parcel 
>writers to make it happen.

I presume this means that it's okay for the migration to take place via the 
same export/import process, given a suitable format?

I'm noticing a slight mismatch here between app and platform needs; 
export/import makes sense for slow-moving app development, in that there 
are milestones at which it's reasonable to dump and reload your 
repository.  However, this won't be particularly practical on "internet 
time" for extension parcels being rapidly and iteratively developed.  For 
that matter, it's not necessarily all that practical for us while *we're* 
developing, except that we routinely create new repositories.  Smooth 
upgrades would help our development as well, so it's not as though the 
goals are in conflict; the platform just has more stringent technical 
requirements (i.e. supporting piecemeal upgrades and rollbacks) than the 
app as a whole.
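To make the "piecemeal upgrades" distinction concrete, what I have in mind is roughly a per-parcel chain of migration steps, so one parcel can move forward without dumping the whole repository. All names below are hypothetical, just to illustrate the shape, not anything that exists in Chandler today:

```python
# Hypothetical sketch of piecemeal, per-parcel schema migrations.
# Each parcel registers individual upgrade steps, and we walk the chain
# from the data's current version to the target version.

MIGRATIONS = {}  # (parcel, from_version) -> (to_version, upgrade function)

def migration(parcel, from_version, to_version):
    """Decorator registering one upgrade step for one parcel."""
    def register(func):
        MIGRATIONS[(parcel, from_version)] = (to_version, func)
        return func
    return register

def upgrade(parcel, data, current, target):
    """Apply registered steps until `target` is reached."""
    while current != target:
        try:
            current, step = MIGRATIONS[(parcel, current)]
        except KeyError:
            raise RuntimeError(
                "no migration path for %s from %s" % (parcel, current))
        data = step(data)
    return data, current

@migration("osaf.example", "0.6.1", "0.7m1")
def rename_field(data):
    # e.g. an attribute renamed between versions
    data["displayName"] = data.pop("title", "")
    return data

data, version = upgrade("osaf.example", {"title": "Meeting"},
                        "0.6.1", "0.7m1")
```

The point of the registry keyed by (parcel, version) is that each extension can ship its own steps and be upgraded independently, which a whole-repository dump-and-reload can't do.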

>We have some open questions around sharing and schema evolution.
>- Do we need to support multiple chandler versions sharing the same data?
>One sort of ugly short-term proposal that Morgen and I discussed: users 
>would have to migrate their data locally and then publish a new share to 
>the server (new tickets, new subscriptions for sharees, etc.)
>Morgen and I agreed that in the long term, we probably need to identify a 
>sharing format (the well defined externalization of the data that you 
>mention). This would allow the client (and the server) to be able to 
>change internal formats, as long as they could migrate the data to the 
>well known externalization.

Makes sense.

>>* How will we ensure (procedurally or otherwise) that each version of 
>>Chandler will successfully upgrade from older versions?
>I'm not sure I fully understand this question. Is this question about testing?

Yes.  What isn't tested, probably doesn't work.  :)

>>* What support will we provide for parcel developers to ensure that 
>>*their* schemas upgrade safely, as well as the Chandler "out of the box" 
>I think that we need to provide hooks so that parcel developers can hook 
>into the same ui. If we ask the user to do a manual step to export/import, 
>the parcel developer should be able to get data into that same exported 
>file (or directory or whatever). If we have an automated step at startup, 
>the parcel developer should be able to take advantage of that same 
>automated step.
>I think it reasonable to ask the parcel developer to do some work, define 
>an external format, etc.

Me too, but unfortunately it's not that simple.  If upgrading schema means 
dumping and reloading, then it means losing anything that *doesn't* have an 
external format, as well as being time-consuming if you've got a lot of 
data.  That means that for extension parcels, a dump-and-reload approach 
doesn't seem practical.  But, if it's okay for it to be like this in 0.7, 
then I guess we're okay.
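For what it's worth, the "same exported file" hook Katie describes could be as simple as letting each parcel contribute a named section to one export document. This is an entirely hypothetical sketch (the names and the JSON container are my invention), but it shows both the hook and the failure mode I'm worried about: anything without a registered exporter simply doesn't survive the round trip.

```python
# Hypothetical sketch: parcels contribute named sections to a single
# export document.  Data from parcels that never registered an exporter
# is lost on dump-and-reload -- which is exactly the concern above.
import json

EXPORTERS = {}   # parcel name -> function returning serializable data
IMPORTERS = {}   # parcel name -> function accepting that data back

def register(parcel, exporter, importer):
    EXPORTERS[parcel] = exporter
    IMPORTERS[parcel] = importer

def dump_all():
    """Build the single export document all parcels share."""
    return json.dumps(dict((name, export())
                           for name, export in EXPORTERS.items()))

def load_all(text):
    """Hand each parcel back its own section; unknown sections are dropped."""
    for name, section in json.loads(text).items():
        if name in IMPORTERS:
            IMPORTERS[name](section)
```

Whether the container is JSON, XML, or a directory of files doesn't matter much; the per-parcel registration is the part that lets extensions ride along with the out-of-the-box export.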

>>* When an upgrade has to be reverted, what guarantees should we give the 
>>user for being able to revert safely without losing data?
>Is it reasonable to assume that a backup externalization is good enough?

Only if that process as a whole is good enough for 0.7.  If so, then yes, 
we can attempt to make upgrades an all-or-nothing process.  What we 
aren't necessarily going to be able to do is support seamless 
download-and-run extension parcel upgrades that involve schema 
changes.  But I can live with that, it just means that the parcel upgrade 
process is going to be more complex.

What I'm thinking I'll do at this point is add support for the easy stuff 
now, and then we'll have to come up with some kind of transactional wrapper 
to do code and data upgrades at once.  How that will work (and even how we 
want it to) is kind of underspecified at the moment.  :(
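At its crudest, the transactional wrapper might amount to no more than backup, attempt, restore-on-failure. A minimal sketch, assuming a full filesystem-level backup and restore of the repository is feasible (function and path names are hypothetical):

```python
# Hypothetical all-or-nothing upgrade wrapper: snapshot the repository,
# run the code+data upgrade steps, and roll back to the snapshot if any
# step fails.
import os
import shutil
import tempfile

def transactional_upgrade(repo_path, upgrade_steps):
    """Run upgrade steps all-or-nothing against a repository directory."""
    backup = tempfile.mkdtemp(prefix="repo-backup-")
    shutil.copytree(repo_path, os.path.join(backup, "repo"))
    try:
        for step in upgrade_steps:
            step(repo_path)
    except Exception:
        # Any failure: discard the half-upgraded state and restore.
        shutil.rmtree(repo_path)
        shutil.copytree(os.path.join(backup, "repo"), repo_path)
        raise
    finally:
        shutil.rmtree(backup)
```

The hard part this sketch glosses over is the "code and data at once" requirement: rolling back the data is easy, but the running code has already imported the new parcel versions, which is why how we want this to work is still underspecified.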
