I'd recommend taking a similar approach to the one I took in my framework:
Looks promising!

A very important usability feature I see here is that all the functionality is easily accessible and intuitive from a `UBench` instance. One can easily explore the available features in an IDE using auto-completion on method names and hints on parameter types. This is in contrast with an annotation-driven approach that forces users to remember multiple things: the annotation names, and how to trigger the annotation processor that will run the benchmarks.

The reporting features are also great, and something I will definitely borrow to improve my alternative framework.
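To make the discoverability point concrete, here is a minimal sketch of that kind of fluent, instance-based API. The class and method names below are mine, chosen purely for illustration, and not UBench's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a fluent, instance-based benchmark API (illustrative only,
// not UBench's actual implementation). Everything hangs off one instance, so
// typing "bench." in an IDE lists every available operation.
public class FluentBenchSketch {

    private final String name;
    private final Map<String, Runnable> tasks = new LinkedHashMap<>();

    public FluentBenchSketch(String name) {
        this.name = name;
    }

    // Returning 'this' lets calls chain, keeping discovery in one place.
    public FluentBenchSketch addTask(String taskName, Runnable task) {
        tasks.put(taskName, task);
        return this;
    }

    public void run(int iterations) {
        for (Map.Entry<String, Runnable> entry : tasks.entrySet()) {
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                entry.getValue().run();
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("%s / %s: %d ns total%n", name, entry.getKey(), elapsed);
        }
    }

    public static void main(String[] args) {
        new FluentBenchSketch("string concatenation")
                .addTask("StringBuilder", () -> new StringBuilder("a").append("b").toString())
                .addTask("String.concat", () -> "a".concat("b"))
                .run(100_000);
    }
}
```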
It doesn't really make sense to have to repeat this: when comparing a number of alternative implementations, normally you will use the same input for all of them. Of course, you'll probably want to re-run the same methods with different input/output pairs, but do so one at a time.

To clarify further, I don't see a use case for comparing the result of `methodA` on `inputA` with the result of `methodB` on `inputB`. Maybe there is such a use case, but I don't think it would be the typical one. Normally you would want to run `methodA`, `methodB`, `methodC`, ... on `inputA`, then run the same methods again on `inputB`, then again on `inputC`, and so on.

One way to avoid repeatedly specifying the same inputs and outputs for each of the tasks could be to store them inside the benchmark instance: add `.setInput` and `.setExpectedOutput` methods and let the tasks share that data. To run the same tasks against several input/output pairs, these methods could take varargs. The `run` method could then validate that the input/output pairs are sane.
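A rough sketch of what that could look like. This is purely illustrative: the class name, the generics, and `addTask` are placeholders I made up, while `setInput`, `setExpectedOutput`, and `run` just follow the names suggested above, none of this is UBench's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.Function;

// Illustrative sketch of the suggested setInput/setExpectedOutput idea;
// class and method names are hypothetical, not part of UBench.
public class SharedInputBenchSketch<I, O> {

    private final List<String> taskNames = new ArrayList<>();
    private final List<Function<I, O>> tasks = new ArrayList<>();
    private I[] inputs;
    private O[] expectedOutputs;

    public SharedInputBenchSketch<I, O> addTask(String name, Function<I, O> task) {
        taskNames.add(name);
        tasks.add(task);
        return this;
    }

    // Varargs: several input/output pairs can be registered in one call.
    @SafeVarargs
    public final SharedInputBenchSketch<I, O> setInput(I... inputs) {
        this.inputs = inputs;
        return this;
    }

    @SafeVarargs
    public final SharedInputBenchSketch<I, O> setExpectedOutput(O... outputs) {
        this.expectedOutputs = outputs;
        return this;
    }

    public void run(int iterations) {
        // Sanity check: every input needs a matching expected output.
        if (inputs == null || expectedOutputs == null || inputs.length != expectedOutputs.length) {
            throw new IllegalStateException("inputs and expected outputs must be set and have the same length");
        }
        for (int i = 0; i < inputs.length; i++) {        // one input/output pair at a time
            for (int t = 0; t < tasks.size(); t++) {     // all tasks share that pair
                long start = System.nanoTime();
                O result = null;
                for (int it = 0; it < iterations; it++) {
                    result = tasks.get(t).apply(inputs[i]);
                }
                long elapsed = System.nanoTime() - start;
                if (!Objects.equals(result, expectedOutputs[i])) {
                    throw new AssertionError(taskNames.get(t) + " produced a wrong result for input " + inputs[i]);
                }
                System.out.printf("%s on input %d: %d ns total%n", taskNames.get(t), i, elapsed);
            }
        }
    }

    public static void main(String[] args) {
        new SharedInputBenchSketch<String, Integer>()
                .addTask("length()", String::length)
                .addTask("toCharArray().length", s -> s.toCharArray().length)
                .setInput("abc", "hello world")
                .setExpectedOutput(3, 11)
                .run(100_000);
    }
}
```

Storing the pairs on the instance keeps the individual tasks free of setup noise, and the varargs methods make running the whole matrix of tasks against inputs a single `run` call.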