[Chandler-dev] Comments requested on new test output format

Mikeal Rogers mikeal at osafoundation.org
Wed Jul 12 16:21:32 PDT 2006

On Jul 12, 2006, at 3:54 PM, Brian Moseley wrote:

> On 7/12/06, Mikeal Rogers <mikeal at osafoundation.org> wrote:
>> If you're suggesting that the report just say "I ran 5 tests and
>> 'TestSwitchTimeZone' failed", and that anything more is too much
>> output then I'll have to disagree. The purpose of the automated
>> testing system is to tell you, as granularly as possible _what_
>> failed, not just that _something_ failed.
> that's what i'm saying. there's a difference between detailed trace
> info and a simple summary of test results. 99% of the time i want to
> only see the simple results. when there's a failure, i'm happy to go

This output:
> Test Report;
> *Suite ""ChandlerTestSuite"" Failed :: Total Time  
> ""0:04:55.434000"" ::
> Comment ""None""
> **Test ""TestSwitchTimezone"" Failed :: Total Time  
> ""0:00:00.931000"" ::
> Comment ""None
> None""
> ***Action ""CheckBlockVisibility"" Failed :: Total Time ""0:00:00"" ::
> Comment ""(On EditTimeZone Visibility)  || detail view = Fa
> lse ; expected value = True""
> ****Report ""(On EditTimeZone Visibility)  || detail view = False ;
> expected value = True"" Failed :: Comment ""None""

is only displayed when there is a failure; it's just displayed before  
the final summary. It is, however, "spewed" to your screen at the end  
of the suite every time failures are recorded.

> look in a log file for the trace info. i definitely don't want it all
> spewed to my screen for every test run.

I'm 50% done writing the new framework (OAF). With OAF you can  
specify totally different output for the file than for stdout.  
Unfortunately, what we wrote for CATS-0.2 prints the same thing to  
stdout as it does to the file. The next version of 'broadsword' (the  
test tools suite that includes HTTPTest), after we shove 0.1 out the  
door in a few days, will use OAF, as will the next version of CATS.
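
Until then, the file/stdout separation OAF gives you can be sketched  
with Python's standard logging module (a hypothetical illustration,  
not the OAF API): one logger, two handlers, different thresholds.

```python
import logging
import sys

# Hypothetical sketch (not the OAF API): one logger, two handlers with
# different thresholds, so the file keeps the full trace while stdout
# only shows failures.
log = logging.getLogger("cats")
log.setLevel(logging.DEBUG)

file_handler = logging.FileHandler("test_run.log", mode="w")
file_handler.setLevel(logging.DEBUG)     # everything goes to the file

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.ERROR)          # only failures reach the screen

log.addHandler(file_handler)
log.addHandler(console)

log.debug('Action "CheckBlockVisibility" passed')   # file only
log.error('Test "TestSwitchTimezone" Failed')       # file and screen
```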

>> A given test performs between 5 and 400 individual actions. Just
>> telling you that it "failed somewhere" is nearly useless, all that
> what i meant to communicate was that, along with failure, the test
> system would report a meaningful message (provided by the test author)
> describing what went wrong and providing a starting point for
> investigation. detailed trace info in a log file would help narrow the
> search.

The problem with that is that many of the same actions are repeated  
throughout a test, and only fail because of something that preceded  
them. I totally agree with a meaningful message being added by the  
test author, which is why we included the ability to attach a  
'comment' string to every report. Unfortunately, we don't have many  
detailed comments in CATS or in the tests yet, because the feature is  
very new and we've been trying to get the framework live first.

The 'trace' is what we are trying to show with the nesting indicated  
by the '*' characters. Maybe we're doing a bad job of it and should  
consider a different output format (something that looks more like  
Python tracebacks?), but the content we are showing is all meaningful  
to the failure.
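
As a rough illustration of what those '*' levels encode (hypothetical  
class and field names, not the actual CATS internals), each report  
simply nests one level deeper than its parent:

```python
# Sketch only: a failure report rendered with one more '*' per nesting
# level, mirroring the Suite -> Test -> Action structure shown above.
class Report:
    def __init__(self, kind, name, passed, comment="None", children=None):
        self.kind, self.name, self.passed = kind, name, passed
        self.comment = comment
        self.children = children or []

    def render(self, depth=1):
        status = "Passed" if self.passed else "Failed"
        lines = ['%s%s "%s" %s :: Comment "%s"'
                 % ("*" * depth, self.kind, self.name, status, self.comment)]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

suite = Report("Suite", "ChandlerTestSuite", False, children=[
    Report("Test", "TestSwitchTimezone", False, children=[
        Report("Action", "CheckBlockVisibility", False,
               comment="detail view = False ; expected value = True"),
    ]),
])
print("\n".join(suite.render()))
```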

The summary you describe is printed at the end (number of tests run,  
number of test failures, etc.); we can easily add a few lines that  
say "these are the tests that failed". In that case you could just  
ignore any output until the last few lines.

Just to refresh: we have debug and masking settings, and the purpose  
of this thread is to find the right defaults. We can hide any of the  
output, show only failures, or show nothing at all during the test  
run. The only problem is that until OAF is completed, the file and  
stdout contain the same output, so if we hide too much you would need  
to rerun the test suite with some debug options when you want to  
debug a failure.

>> says is that you need to go and track some issue, somewhere. Instead
>> this tells you what failed, and where, and in some of the test tools
>> you could even run that single test again with an increased debug
>> level and get more feedback (like in HTTPTest it will print the
>> requests and responses from the server if debug > 4).
> in the normal case i want as little feedback as possible, and i want
> it to be supremely easy to see when something failed and what specific
> thing(s) failed. that is just not the case with HTTPTest or with the
> suggested output from earlier in this thread.

So are you saying that the output is too much, or that it's not  
readable enough, or both? The content in the failure report is all  
necessary to find the point of failure.

HTTPTest's output will change dramatically in the next version, so  
let's not use it as an example here, but the encapsulations you see  
above are also in the new framework that HTTPTest will eventually  
use. In OAF you can customize the output to your needs; every  
developer could literally have their own output configuration that  
suits them specifically and formats everything exactly the way they  
like it.

But, within the current constraints of CATS-0.2's output, would it be  
better if we:

Concealed all output during the test run except:

Starting TestBlah
Ending TestBlah
Starting Test2Blah
----Something failed
Ending Test2Blah

Then printed the failure report, which is more detailed and possibly  
even a bit too verbose.
Then the summary.
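
That proposed default could look roughly like this (a sketch with  
made-up helper names, not CATS code): suppress per-action output  
during the run, flag failures inline, then dump the detailed failure  
report and the summary afterwards.

```python
# Sketch of the proposed default output (hypothetical names, not CATS):
# each test returns (ok, detail); only Start/End lines and a failure
# marker are shown during the run, details come at the end.
def run_suite(tests):
    failures = []
    for name, test in tests:
        print("Starting %s" % name)
        ok, detail = test()
        if not ok:
            print("----Something failed")
            failures.append((name, detail))
        print("Ending %s" % name)
    for name, detail in failures:                 # detailed failure report
        print('**Test "%s" Failed :: Comment "%s"' % (name, detail))
    print("Ran %d tests, %d failures" % (len(tests), len(failures)))
    return failures

failures = run_suite([
    ("TestBlah", lambda: (True, None)),
    ("Test2Blah", lambda: (False, "detail view = False ; expected value = True")),
])
```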


> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Open Source Applications Foundation "chandler-dev" mailing list
> http://lists.osafoundation.org/mailman/listinfo/chandler-dev
