Fix metric results alignment in end of test summary #4976

Open

wants to merge 6 commits into master

Conversation

oleiade
Contributor

@oleiade oleiade commented Jul 28, 2025

What?

This pull request ensures the metric results in the end-of-test summary follow a global, shortest-possible alignment, reducing the horizontal space consumed and making the results easier to read.

[Screenshot: aligned end-of-test summary (CleanShot 2025-07-28 at 11 45 54)]

This was done by computing the alignment value (how many dots should be printed before the ": {results}" part) globally, and cascading it down to each specific renderer. The minimum number of dot separators has been set to 3, which in my testing "feels" like the best and most reliable option.
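The idea can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the PR's actual renderer code:

```javascript
// Sketch: compute one global alignment width, then pad every metric
// line with dots so the values all start in the same column.
// MIN_DOTS and both helper names are illustrative assumptions.
const MIN_DOTS = 3;

function computeAlignmentWidth(metricNames) {
  // The widest name plus the minimum number of dot separators.
  return Math.max(...metricNames.map((n) => n.length)) + MIN_DOTS;
}

function renderLine(name, value, width) {
  const dots = '.'.repeat(width - name.length);
  return `${name}${dots}: ${value}`;
}

const names = ['http_req_duration', 'iterations', 'vus'];
const width = computeAlignmentWidth(names);
console.log(renderLine('iterations', '1000', width));
// every line's ':' lands in the same column
```

The widest name always ends up with exactly MIN_DOTS dots, which is what keeps the shared column as far left as possible.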

Importantly, the human-readable end-of-test summary is explicitly excluded from the versioning policy, so this is not considered a breaking change. Tentatively putting this in the v1.2.0 roadmap, as it is a rather small change, but happy to drop it if reviews uncover more work 🙇🏻

Why?

It was recently reported to us that the end-of-test summary would sometimes not keep the metric results properly aligned, and would use more horizontal space than necessary.

[Screenshot: unaligned end-of-test summary]

We noticed that in certain more advanced cases, including with summary-mode=full, this would lead to unexpected alignment patterns when using groups and scenarios. Most of these were caused by the alignment being computed locally for each section rather than globally.

Checklist

  • I have performed a self-review of my code.
  • I have commented on my code, particularly in hard-to-understand areas.
  • I have added tests for my changes.
  • I have run linter and tests locally (make check) and all pass.
  • I have added the correct milestone and labels to the PR.
  • I have updated the release notes: link
  • I have updated or added an issue to the k6-documentation: grafana/k6-docs#NUMBER if applicable
  • I have updated or added an issue to the TypeScript definitions: grafana/k6-DefinitelyTyped#NUMBER if applicable

Related PR(s)/Issue(s)

@oleiade oleiade added this to the v1.2.0 milestone Jul 28, 2025
@oleiade oleiade self-assigned this Jul 28, 2025
@oleiade oleiade requested a review from a team as a code owner July 28, 2025 09:50
@oleiade oleiade requested review from mstoykov and inancgumus and removed request for a team July 28, 2025 09:50
@oleiade
Contributor Author

oleiade commented Jul 28, 2025

cc @dgzlopes this should be tackling your report 🙇🏻
cc @joanlopez as we initially worked on this together.

Contributor

@inancgumus inancgumus left a comment

I have some mild questions. We should fix the tests.

Contributor

@mstoykov mstoykov left a comment

LGTM!

I have left some nitpicks and golfing changes to reduce the size and arguably make it more functional... hopefully also easier to read.

 */
_collectMetricsFromGroups(groups, allMetrics) {
  Object.values(groups).forEach((group) => {
    if (group.metrics) {
Contributor

Pretty sure all of those ifs are unnecessary, as it will just skip the operations.

Maybe we can remove them and see how it goes, instead of having additional lines and indentation?

Contributor

I think we would need something like Object.entries(group.metrics ?? {}), as Object.entries would throw a TypeError if we passed it undefined 🤔
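Sketched in isolation (with an illustrative group shape, not k6's real one), the difference is:

```javascript
// Object.entries throws a TypeError for undefined/null, so the
// nullish-coalescing fallback keeps the iteration safe when a
// group has no metrics at all.
const groupWithMetrics = { metrics: { checks: { type: 'rate' } } };
const groupWithout = {};

function metricEntries(group) {
  return Object.entries(group.metrics ?? {});
}

console.log(metricEntries(groupWithMetrics).length); // 1
console.log(metricEntries(groupWithout).length);     // 0
// whereas Object.entries(undefined) would throw:
// TypeError: Cannot convert undefined or null to object
```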

 * @returns {void}
 */
_computeGlobalMaxNameWidth(report) {
  const allMetrics = {};
Contributor

Suggested change:
- const allMetrics = {};
+ const metricNames = {};

It is only the metric names after all

Contributor

"only the metric names"

Are you suggesting we do that? Because, unless I'm wrong, I think the current code is collecting the entire metric objects, with their values, etc.

But yeah, I do agree that collecting names here is likely enough, as the name is the only metric attribute used after all.

// Compute the global max name width
let maxNameWidth = 0;
Object.keys(allMetrics).forEach((name) => {
  const displayName = renderContext.indent(renderMetricDisplayName(name));
Contributor

Do we really need to use renderContext here? In my opinion it's confusing, because this code treats all the metrics (total results, groups, scenarios, etc.) the same, but each level will later be indented differently.

As the purpose is to calculate the longest metric name, I suspect we could just omit the indentation for this calculation.

Contributor

I did a quick test, and apparently one of the metrics from total results (and a couple of tags) doesn't get properly aligned. So, feel free to leave it. But anyway, I'm confused by what I mentioned above.

Contributor

Indeed, after a few more tests: if you nest enough groups, the alignment breaks.

In practical terms, the current code is likely "good enough", but if you find an easy way to adequately take the indentation of each level into account, it would be great.

I guess you would need something like slightly modifying the current heuristic: collect the metric names from groups and scenarios already indented at "collection time", so that when you calculate the max you don't need to care about what level of nesting a metric name comes from.
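That heuristic could look something like this. The function name, the INDENT width, and the group shape are all illustrative assumptions, not the PR's actual code:

```javascript
// Collect metric names with their nesting indentation baked in, so
// a single Math.max over the collected strings accounts for depth
// without any per-level bookkeeping.
const INDENT = '  ';

function collectIndentedNames(groups, depth = 0, out = []) {
  for (const group of Object.values(groups)) {
    for (const name of Object.keys(group.metrics ?? {})) {
      out.push(INDENT.repeat(depth) + name);
    }
    // Recurse into nested groups one indentation level deeper.
    collectIndentedNames(group.groups ?? {}, depth + 1, out);
  }
  return out;
}

const groups = {
  outer: {
    metrics: { checks: {} },
    groups: { inner: { metrics: { http_req_duration: {} } } },
  },
};
const names = collectIndentedNames(groups);
const maxWidth = Math.max(...names.map((n) => n.length));
```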

Contributor

Another valid approach, likely much simpler, would be to align each section (total results, scenario, and group) individually. That would solve the issue described above as well. I think the key difference is that you're using the longest metric name plus three dots as the reference length, which is what really prevents "too many dots" and stuff overflowing the length of a single line.

Indeed, if you have multiple nested groups, the current approach adds unnecessary dots, and aligns total results further to the right only because there are nested groups below.

I think it's a tradeoff, and I'm happy with both, as long as the implementation is correct and looks good. Up to you!
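The per-section alternative could be sketched like this (again with hypothetical names): each section computes its own width instead of sharing a global one, so nesting depth elsewhere never widens a section's dots.

```javascript
// Sketch of per-section alignment: each section aligns only its own
// metrics, so deeply nested groups can't push the top-level dots out.
const MIN_DOTS = 3;

function renderSection(metrics) {
  const names = Object.keys(metrics);
  const width = Math.max(...names.map((n) => n.length)) + MIN_DOTS;
  return names.map((name) => {
    const dots = '.'.repeat(width - name.length);
    return `${name}${dots}: ${metrics[name]}`;
  });
}

// Total results and a nested group each get their own, shorter width.
const totalLines = renderSection({ iterations: '1000', vus: '10' });
const groupLines = renderSection({ http_req_duration: '120ms' });
```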

Contributor

@joanlopez joanlopez left a comment

Approving, because despite the minor suggestions I left, and what I pointed out in https://github.com/grafana/k6/pull/4976/files#r2239337215, I think this changeset is almost always correct in practical terms 👍🏻

joanlopez
joanlopez previously approved these changes Jul 29, 2025

Contributor

@mstoykov mstoykov left a comment

Marking as request changes as the tests are failing :)
