I am very impressed, especially as it sounds as though
a lot more tests exist (he talks of only pushing small
amounts of data to kernel.org) and a lot more are
going to be added.
It seems to me that there are a lot of disparate test
suites out there - some test the APIs, some benchmark
the performance, some validate the state at the end,
some verify that the source obeys expected rules.
What I have not (yet) seen is any work on relating the
results. Is the bug in the design? The implementation?
Some combination thereof? Is something correctly
written but not functioning because something it
depends on isn't working correctly?
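One way to picture what "relating the results" could mean is a small join across suites, keyed by kernel and component, so a single problem shows up from several angles at once. This is only a sketch; the suite names, record layout, and results below are all invented for illustration.

```python
# Hypothetical sketch: correlating results from separate test suites
# (API tests, benchmarks, end-state validation) keyed by
# (kernel, component), so one failure can be seen from several angles.
# All suite names and records are invented.

from collections import defaultdict

results = [
    # (suite, kernel, component, outcome)
    ("api",       "2.6.0-mm1", "scheduler", "pass"),
    ("benchmark", "2.6.0-mm1", "scheduler", "regressed"),
    ("state",     "2.6.0-mm1", "scheduler", "fail"),
    ("api",       "2.6.0-mm1", "vm",        "pass"),
]

# Group every suite's verdict for the same kernel/component pair.
by_component = defaultdict(dict)
for suite, kernel, component, outcome in results:
    by_component[(kernel, component)][suite] = outcome

# A component flagged by multiple suites at once is a strong hint the
# problem is real, and the mix of suites hints at where it lives.
for key, outcomes in by_component.items():
    bad = [s for s, o in outcomes.items() if o != "pass"]
    if bad:
        print(key, "flagged by suites:", ", ".join(sorted(bad)))
```

Here a scheduler regression flagged by both the benchmark and the state-validation suite, while the API tests still pass, suggests an implementation problem rather than an interface one.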
It would even be useful if we could cross-reference
some of the benchmarks with the Linux graphing
project, so that we could see how the complexity of
the tested component differs between versions and
variants. (A small degradation in performance, if
related to a large increase in necessary
sophistication, is not necessarily that bad. The same
performance drop, if related to a massive
simplification of the design, is an indication of a
problem.)
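That trade-off could be made concrete with something like the following. This is only a sketch; the function, its sign conventions, and the sample deltas are all invented, and "complexity" stands in for whatever metric a graphing project might supply.

```python
# Hypothetical sketch: weighing a benchmark delta against a change in
# design complexity between two kernel versions. Sign convention
# (invented): perf_delta < 0 means slower, complexity_delta > 0 means
# the design became more sophisticated.

def assess(perf_delta, complexity_delta):
    """Crude verdict on a performance change in light of complexity."""
    if perf_delta < 0 and complexity_delta > 0:
        # Slower, but the code had to grow more sophisticated: tolerable.
        return "acceptable trade-off"
    if perf_delta < 0 and complexity_delta < 0:
        # Slower despite a massive simplification: something is wrong.
        return "investigate"
    return "fine"

print(assess(-0.05, 0.40))   # 5% slower, much more sophisticated
print(assess(-0.05, -0.50))  # 5% slower, much simpler design
```

The first call mirrors the "not necessarily that bad" case above; the second mirrors the case that signals a problem.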
Test suites are necessary. Test suites are great.
Anyone working on a test suite deserves many kudos and
much praise. Test suites that are relatable enough
that you can see the same problem from different
angles -- those are worth their printout weight in gold.
--- Nivedita Singhvi <niv@xxxxxxxxxx> wrote:
> For those who don't read lkml, I thought I'd point out
> Martin Bligh's post regarding his automated testing
> setup, since some people on this list were interested.
> Networking tests are in plan...
> OK, I've finally got this to the point where I can
> publish it.
> Currently it builds and boots any mainline, -mjb, or
> -mm kernel within about 15 minutes of release, and
> runs dbench, tbench, kernbench, reaim and fsx.
> Currently I'm using a 4x AMD64 box, a 16x NUMA-Q, a 4x
> NUMA-Q, a 32x x440, a PPC64 Power 5 LPAR, a PPC64
> Power 4 LPAR, and PPC64 Power 4 bare metal.
> The config files it uses are linked by the machine
> names in the column
> Thanks to all the other IBM people who've worked on
> the ABAT test system
> that this stuff relies on - too many to list, but
> especially Andy, Adam,
> and Enrique, who have fixed endless bugs, and put up
> with my incessant
> bitching about it all not working as it should ;-)
> Clicking on the failed ones' error codes should take
> you to somewhere
> vaguely helpful to diagnose it. Clicking on the job
> number just below
> that takes you to the info I'm publishing right now,
> which should
> include perf results and profiles, etc. I'll add
> graphs, etc later,
> comparing performance across kernels (I have them
> ... just not automated).