Increase range for expected VPA CPU recommendations in e2e #8386
base: master
Conversation
These tests can get flaky because the resource consumer consumes 1800m CPU, which can be unevenly distributed across the 3 pods and lead to failures. Also, the tests don't need to append recommendations, since vpa-recommender is running in this suite.
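For illustration, here is a minimal, self-contained sketch (not the actual e2e code) of the kind of widened bounds check discussed in this PR, assuming a per-pod CPU recommendation is accepted anywhere in [600m, 1800m]; the 800m value is made up:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// withinRange reports whether got lies in the inclusive interval [lower, upper].
func withinRange(got, lower, upper resource.Quantity) bool {
	return got.Cmp(lower) >= 0 && got.Cmp(upper) <= 0
}

func main() {
	// Bounds discussed in this PR: a per-pod CPU recommendation anywhere
	// between 600m and 1800m should be accepted.
	lower := resource.MustParse("600m")
	upper := resource.MustParse("1800m")

	// With the 1800m total load split unevenly, a single pod's recommendation
	// can land well above 600m (this value is hypothetical).
	got := resource.MustParse("800m")

	fmt.Println(withinRange(got, lower, upper)) // true
}
```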
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: kamarabbas99. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Hi @kamarabbas99. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
Can you explain a bit more how this is supposed to solve the flake?
So we have
Which means 1800m in total. But if this load is unevenly distributed across those 3 pods, how does changing the min/max of a pod help? How does ParseQuantityOrDie("600m"), ParseQuantityOrDie("1800m") solve this flake? Couldn't one pod get something like 400m and another get 800m, for example?
That's a good point. I am actually not sure about that, but a similar thing is done for memory.
Maybe it will be a minimum of 600m? I am not sure how else to reproduce this, but I am encountering this flake when I add CPU boost logic to the updater (you can maybe add a sleep in RunOnce and it will still flake).
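To make the uneven-split arithmetic concrete, here is a small illustrative snippet; the 400m/600m/800m split is hypothetical, taken from the example in the comment above:

```go
package main

import "fmt"

func main() {
	// One hypothetical uneven split of the 1800m load across the 3 pods.
	perPodMilli := []int64{400, 600, 800}

	var total int64
	for _, m := range perPodMilli {
		total += m
	}

	// Prints: total=1800m avg=600m
	// The total and the average are unchanged, but individual pods sit above
	// or below 600m, which is why a tight per-pod expectation can flake.
	fmt.Printf("total=%dm avg=%dm\n", total, total/int64(len(perPodMilli)))
}
```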
/cc @omerap12
Is this change fixing a current flake, or a new upcoming flake?
To be honest, I haven't seen this flake happen recently, and not in my current PR either.
What type of PR is this?
/kind flake
What this PR does / why we need it:
These tests can get flaky because the resource consumer consumes 1800m CPU, which can be unevenly distributed across the 3 pods and lead to failures.
Also, the tests don't need to append recommendations, since vpa-recommender is running in this suite.
Similar PR before: #4469