## Summary
We should add a benchmark test project that runs various common scenarios for Heft, leveraging V8 code coverage to collect information on:
| Item          | Unit | Source                 |
| ------------- | ---- | ---------------------- |
| Executed Code | KiB  | V8 Code Coverage       |
| Unused Code   | KiB  | V8 Code Coverage       |
| Loaded Code   | KiB  | V8 Code Coverage       |
| Loaded Files  | #    | V8 Code Coverage       |
| Duration      | ms   | `time` / `.cpuprofile` |
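The byte-level metrics above can be derived from the JSON that V8 emits when Node.js is launched with `NODE_V8_COVERAGE=<dir>`. Below is a minimal sketch of that aggregation; the `aggregate` helper and interface names are hypothetical (not part of Heft), and it assumes ranges are listed outer-to-inner so that later, more specific ranges overwrite earlier ones — a real implementation would need to merge overlapping ranges more carefully.

```typescript
// Shapes follow the V8 Profiler "ScriptCoverage" JSON produced by
// NODE_V8_COVERAGE; names are illustrative.
interface ICoverageRange {
  startOffset: number;
  endOffset: number;
  count: number;
}
interface ICoverageFunction {
  ranges: ICoverageRange[];
}
interface IScriptCoverage {
  url: string;
  functions: ICoverageFunction[];
}

interface ICoverageTotals {
  executedKiB: number;
  unusedKiB: number;
  loadedKiB: number;
  loadedFiles: number;
}

// Hypothetical helper: turn raw script coverage into the table's metrics.
function aggregate(scripts: IScriptCoverage[]): ICoverageTotals {
  let executedBytes: number = 0;
  let loadedBytes: number = 0;
  for (const script of scripts) {
    // The first range of the first function spans the entire script.
    const whole: ICoverageRange | undefined = script.functions[0]?.ranges[0];
    if (!whole) {
      continue;
    }
    const scriptLength: number = whole.endOffset;
    loadedBytes += scriptLength;
    // Mark each byte as covered/uncovered; later (nested) ranges overwrite
    // the outer range they refine.
    const covered: Uint8Array = new Uint8Array(scriptLength);
    for (const fn of script.functions) {
      for (const range of fn.ranges) {
        covered.fill(range.count > 0 ? 1 : 0, range.startOffset, range.endOffset);
      }
    }
    for (const byte of covered) {
      executedBytes += byte;
    }
  }
  return {
    executedKiB: executedBytes / 1024,
    unusedKiB: (loadedBytes - executedBytes) / 1024,
    loadedKiB: loadedBytes / 1024,
    loadedFiles: scripts.length
  };
}
```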
Once we have this data being generated, we should set up a baseline CI pipeline that calculates new reference values for every commit into `main`, and then have the PR CI job compare against those references to look for regressions/improvements. This will help us catch performance regressions and give us a standard for evaluating the impact of optimizations.
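The PR-side comparison could be as simple as the sketch below: diff each measured metric against the baseline from `main` and flag anything that worsened beyond a tolerance. The metric names and the 5% threshold are illustrative, not a proposed final design.

```typescript
// Hypothetical regression check for the PR CI job. Assumes all metrics are
// "lower is better" (bytes, file counts, milliseconds), matching the table.
interface IMetrics {
  [metricName: string]: number;
}

function findRegressions(
  baseline: IMetrics,
  current: IMetrics,
  tolerance: number = 0.05 // flag anything more than 5% worse than main
): string[] {
  const regressions: string[] = [];
  for (const [name, baseValue] of Object.entries(baseline)) {
    const currentValue: number | undefined = current[name];
    if (currentValue === undefined || baseValue <= 0) {
      continue; // metric missing or baseline unusable; skip in this sketch
    }
    const relativeChange: number = (currentValue - baseValue) / baseValue;
    if (relativeChange > tolerance) {
      regressions.push(`${name}: ${baseValue} -> ${currentValue}`);
    }
  }
  return regressions;
}
```

A real pipeline would also want to report improvements and account for run-to-run noise (e.g. by averaging several runs), but the core comparison looks like this.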
This is the same concept as #5691, but targeting `@rushstack/heft`.
## Candidates for benchmarking
- `heft --help`
- `heft run --only build` in demo projects using our published rigs.
- `heft run --only test` in demo projects using our published rigs.