Fix metric results alignment in end of test summary #4976
base: master
Conversation
cc @dgzlopes this should be tackling your report 🙇🏻
I have some mild questions. We should fix the tests.
LGTM!
I have left some nitpicks and golfing changes to reduce the size and arguably make the code more functional, and hopefully also easier to read.
 */
_collectMetricsFromGroups(groups, allMetrics) {
	Object.values(groups).forEach((group) => {
		if (group.metrics) {
Pretty sure all of those ifs are unnecessary, as it will just skip the operations.
Maybe we can remove them and see how it goes, instead of having additional lines and indentation?
I think we would need something like Object.entries(group.metrics ?? {}), as it would throw a TypeError if we pass an undefined to Object.entries 🤔
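The `?? {}` guard described above can be sketched as follows (the `collectMetricEntries` helper name is hypothetical, not from the k6 codebase):

```javascript
// Hypothetical helper illustrating the guard discussed above:
// Object.entries throws a TypeError for undefined or null input, so the
// nullish coalescing fallback (?? {}) substitutes an empty object and
// simply yields zero entries for groups that have no metrics.
function collectMetricEntries(group) {
  return Object.entries(group.metrics ?? {});
}

console.log(collectMetricEntries({ metrics: { http_reqs: { count: 10 } } }));
console.log(collectMetricEntries({})); // [] -- no TypeError
```

This keeps the call site flat, which is the point of dropping the `if (group.metrics)` checks.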
 * @returns {void}
 */
_computeGlobalMaxNameWidth(report) {
	const allMetrics = {};
Suggested change: const allMetrics = {}; → const metricNames = {};
It is only the metric names after all.
"only the metric names"
Are you suggesting to do it? Because, unless I'm wrong, I think the current code is collecting the entire metric objects, with their values, etc.
But yeah, I do agree that collecting names here is likely enough, as the name is the only metric attribute used after all.
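A minimal sketch of that idea, under assumed names (`collectMetricNames` and the `groups` shape are illustrative, not the actual k6 code):

```javascript
// Hypothetical sketch: collect only metric names, not the whole metric
// objects, recursing into nested groups. Using a Set also deduplicates
// names that appear at several nesting levels.
function collectMetricNames(groups, names = new Set()) {
  Object.values(groups ?? {}).forEach((group) => {
    Object.keys(group.metrics ?? {}).forEach((name) => names.add(name));
    collectMetricNames(group.groups, names); // nested groups, if any
  });
  return names;
}
```

Since only the name's length matters for the width computation, nothing else needs to be carried around.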
internal/js/summary.js (outdated)
// Compute the global max name width
let maxNameWidth = 0;
Object.keys(allMetrics).forEach((name) => {
	const displayName = renderContext.indent(renderMetricDisplayName(name));
Do we really need to use renderContext here? In my opinion it's confusing, because this code treats all the metrics (total results, groups, scenarios, etc.) the same, but each level will later be indented differently.
As the purpose is to calculate the longest metric name, I suspect we could just omit the indentation for this calculation.
I did a quick test, and apparently one of the metrics from total results (and a couple of tags) doesn't get properly aligned. So, feel free to leave it. But anyway, I'm still confused by what I mentioned above.
Indeed, after a few more tests: if you nest enough groups, the alignment breaks.
In practical terms, the current code is likely "good enough", but if you find an easy way to adequately take into account the indentation of each level, it would be great.
I guess you would need something like slightly modifying the current heuristics: collect the metric names from groups and scenarios already indented at "collection time", so that when you calculate the max you don't need to care about what level of nesting a metric name comes from.
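The indent-at-collection-time heuristic could look roughly like this (hypothetical names; the two-space indent per level is an assumed value, not necessarily what k6 uses):

```javascript
// Hypothetical sketch: indent each name at "collection time", so the
// later max-width computation needs no knowledge of nesting depth.
function collectIndentedNames(groups, level = 1, out = []) {
  Object.values(groups ?? {}).forEach((group) => {
    Object.keys(group.metrics ?? {}).forEach((name) => {
      out.push('  '.repeat(level) + name); // pre-indented at collection time
    });
    collectIndentedNames(group.groups, level + 1, out);
  });
  return out;
}

// The global max is then a plain reduction over already-indented names.
const maxNameWidth = (names) => Math.max(0, ...names.map((n) => n.length));
```

With this, a deeply nested metric naturally contributes a longer string to the max, so the padding accounts for its indentation.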
Another valid approach, likely much simpler, would be to align each section (total results, scenario and group) individually. That would solve the issue described above as well. I think the key difference here is that you're using the longest metric name plus three dots as the reference length, which is what really prevents "too many dots" and content overflowing the length of a single line.
Indeed, if you have multiple nested groups, the current approach adds unnecessary dots, and aligns total results further to the right only because there are nested groups below it.
I think it's a tradeoff, and I'm happy with both, as long as the implementation is correct and looks good. Up to you!
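The per-section alternative can be sketched in a few lines (the `renderSection` name and the plain `metrics` map are illustrative assumptions):

```javascript
// Hypothetical sketch of per-section alignment: each section derives its
// own width from its own metric names, with a minimum of three dots
// before the ": {results}" part.
function renderSection(metrics) {
  const names = Object.keys(metrics);
  const width = Math.max(0, ...names.map((n) => n.length));
  return names.map(
    (name) => name + '.'.repeat(width - name.length + 3) + ': ' + metrics[name]
  );
}
```

Because the width is local, a long metric name inside one deeply nested group never stretches the padding of the total-results section.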
Approving, because despite the minor suggestions I left, and what I pointed out in https://github.com/grafana/k6/pull/4976/files#r2239337215, I think this changeset is almost always correct in practical terms 👍🏻
Marking as request changes as the tests are failing :)
What?
This Pull Request makes sure the metric results of the end-of-test summary follow a global, shortest-possible alignment, to reduce the horizontal space consumed and make reading the results easier.
This was done by computing the alignment value (how many dots should be printed before the `: {results}`) globally, and cascading it down to each specific renderer. The minimum number of dot separators has been set to 3, which by my testing "feels" like the best and most reliable option.
Importantly, the human-readable end-of-test summary is explicitly excluded from the versioning policy, so this is not considered a breaking change. Tentatively putting this in the v1.2.0 roadmap, as it is a rather small change, but happy to drop it if reviews uncover more work 🙇🏻
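The "global width plus minimum of three dots" rule described above can be sketched as follows (names are illustrative, not k6's actual API):

```javascript
// Hypothetical sketch: one global width, derived from the longest metric
// name plus a minimum of three dot separators, cascaded to every renderer.
const MIN_DOTS = 3;

function globalWidth(allNames) {
  return Math.max(0, ...allNames.map((n) => n.length)) + MIN_DOTS;
}

function renderLine(name, value, width) {
  // Every line pads to the same global width, so all sections align.
  return name + '.'.repeat(width - name.length) + ': ' + value;
}

const width = globalWidth(['http_reqs', 'http_req_duration', 'checks']);
console.log(renderLine('checks', '100.00% 50 out of 50', width));
```

Cascading a single precomputed width means each renderer only pads; none of them recompute alignment locally.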
Why?
It was reported to us recently that the end-of-test summary would sometimes not keep the metric results properly aligned, and would use more horizontal spacing than necessary.
We noticed that in certain more advanced cases, including when using `summary-mode=full`, this would lead to unexpected alignment patterns when using groups and scenarios, most of them caused by alignment being computed locally to each section rather than globally.
Checklist
(make check) and all pass.
Related PR(s)/Issue(s)