[Chandler-dev] No more perf regression bugs?

Heikki Toivonen heikki at osafoundation.org
Mon Apr 16 18:05:01 PDT 2007


Andi Vajda wrote:
> It depends on what is a regression. If the perceived regression happened
> because the test was bogus then the regression is bogus too. If the

True. And I try to avoid filing bugs in such cases.

> perceived regression happened because a new feature was added then the
> regression is also bogus. If the perceived regression is because a bug
> was fixed then the regression is still bogus.

I disagree. It is certainly possible to introduce a performance
regression as part of adding a new feature or fixing another bug. You
just can't know without checking. Hence, I have filed perf regression
bugs as reminders that these changes should be checked.

I am also somewhat puzzled as to what kinds of checkins there are other
than bug fixes and feature additions (I consider refactoring, perf
improvements, etc. to be bug fixes).

> What can you do about your feeling of wasting your time ?
> Making damn sure that the regression you're calling really is one.

My current process is to check the performance results at least once a
day. I won't file a bug for a temporary spike in the results. I won't
file a bug if there is additional information about an expected change
in performance and the results roughly match. I scan the checkins to see
if there was a change that implies the perf test code should also be
changed. I am extra careful about changes that happen on just one
platform, sometimes waiting for days to see if the change goes away or
not. I typically also run manual tests to confirm the change happened.
Once I have confirmed the change in manual testing and/or it persists
on Tinderbox, I file a bug.
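To make the "no bugs for temporary spikes" rule concrete, here is a
minimal sketch of that kind of filter. This is my illustration, not
code from Chandler or the Tinderbox scripts; the function name,
threshold, and persistence window are all made up for the example:

```python
# Hypothetical sketch of the triage rule described above: only treat a
# slowdown as a regression when it exceeds the baseline by more than
# `threshold` and stays there for `persist` consecutive runs.
def is_persistent_regression(times, baseline, threshold=0.10, persist=3):
    """Return True if the last `persist` measurements in `times` all
    exceed baseline * (1 + threshold)."""
    if len(times) < persist:
        return False
    limit = baseline * (1 + threshold)
    return all(t > limit for t in times[-persist:])

# A one-day spike is ignored; a sustained slowdown would get a bug filed.
print(is_persistent_regression([1.0, 1.6, 1.0, 1.0], baseline=1.0))  # False
print(is_persistent_regression([1.0, 1.3, 1.3, 1.3], baseline=1.0))  # True
```

The actual judgment involves more than numbers (scanning checkins,
comparing platforms, manual re-runs), but the spike-versus-permanent
distinction is the core of it.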

If you are asking me to understand each checkin well enough to know if a
change was due to a bug, I think you are asking too much. For example,
if you check in a fix to some repository bug and I notice performance
got slower, there is basically no way I would be able to tell if you
made a mistake or not. Of course, if somebody, including me, changes
code I do know, I will be able to do an effective code review looking
for performance problems.

Generally speaking, I would be looking at days or weeks or even more
per checkin to ramp up my understanding of the changed code enough to
say with any certainty whether the checkin was buggy. I just don't
think that would be a good use of my time, considering that the person
who made the checkin would be done in a few hours at most.

If you do think we need someone other than the person who made the
checkin to understand all the code before filing perf regression bugs,
then I think we need someone more knowledgeable than me to review all
the checkins. Peer reviews? Chandler benevolent dictator?

But this is still moot if nobody is going to be looking at the
regression bugs, no matter how real they are.

-- 
  Heikki Toivonen


