Seems handy enough. The tests could work a little harder to make sure
that something useful actually happens; right now, they only check the
exit status.
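For instance, something along these lines (a sketch only; `mytool` and
the invocation are made up) would catch a run that exits 0 but produces
nothing:

    import subprocess

    def test_does_something_useful():
        result = subprocess.run(["mytool", "--version"],
                                capture_output=True, text=True)
        assert result.returncode == 0         # all we check today
        assert "mytool" in result.stdout      # also check the output is sane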
It would be nice to think of a way of doing this systematically. Not
full automation, mind you, but maybe a way to get a sense of the gap
between what we claim to support and what we actually do.
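One cheap way to get at that, again just a sketch (the CLAIMED list and
the `mytool` commands are invented): run every invocation the docs
mention and print a little support matrix:

    import subprocess

    # Made-up list of invocations the docs claim to support.
    CLAIMED = [
        ["mytool", "--help"],
        ["mytool", "--version"],
        ["mytool", "convert", "sample.txt"],
    ]

    for cmd in CLAIMED:
        try:
            result = subprocess.run(cmd, capture_output=True)
            ok = result.returncode == 0 and result.stdout
        except FileNotFoundError:
            ok = False
        print("{:40} {}".format(" ".join(cmd), "ok" if ok else "BROKEN"))

Eyeballing that output now and then would at least tell us where the
claims and the reality have drifted apart.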
Andreas, I assume you're OK with the MIT-style copyright, as I've stuck
a header on the file on your behalf. Please let us know if this is not
the case.