What version of descheduler are you using?
descheduler version: 0.32.1
Does this issue reproduce with the latest release? Yes
Which descheduler CLI options are you using?
command:
cmdOptions:
  v: 4
kind: CronJob
cronJobApiVersion: "batch/v1"
schedule: "*/2 * * * *"
Please provide a copy of your descheduler policy config file
deschedulerPolicy:
  profiles:
    - name: SpotNodeTaints
      pluginConfig:
        - args:
            evictLocalStoragePods: true
            ignorePvcPods: true
          name: DefaultEvictor
        - name: RemovePodsViolatingNodeTaints
          args:
            includedTaints:
              - kubernetes.azure.com/scalesetpriority=spot
      plugins:
        deschedule:
          enabled:
            - RemovePodsViolatingNodeTaints
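For reference, the includedTaints entry above is matched against the taint that AKS applies to spot node pool nodes. A sketch of what the spot node's spec is expected to carry (the effect is shown for completeness; the includedTaints entry matches on key=value):

spec:
  taints:
    - key: kubernetes.azure.com/scalesetpriority
      value: spot
      effect: NoSchedule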
What k8s version are you using (kubectl version)? v1.29.5
What did you do?
I am using Descheduler with two node pools: one spot (preemptible) and one non-spot. When Azure reclaims the spot instances, the application is correctly moved to the non-spot node pool. However, after a new spot node is provisioned, the application does not migrate back to the spot node pool as expected.
The application already has the required tolerations configured, but the migration does not occur (a sketch of the assumed workload spec follows). Additionally, even with verbose logging enabled (--v=5), no relevant log entries are produced that would explain why.
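A minimal sketch of the workload's scheduling configuration assumed here. The toleration is what the report describes; the preferred node affinity is an assumption added for illustration, since a toleration alone only permits scheduling onto the spot pool and gives the scheduler no reason to prefer it:

# Sketch of the affected Deployment's pod template. The toleration is from
# the report; the preferredDuringScheduling affinity is an assumption —
# without some preference for spot nodes, an evicted pod may simply be
# rescheduled onto the non-spot pool again.
spec:
  template:
    spec:
      tolerations:
        - key: kubernetes.azure.com/scalesetpriority
          operator: Equal
          value: spot
          effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: kubernetes.azure.com/scalesetpriority
                    operator: In
                    values: ["spot"]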
What did you expect to see?
When a new spot node becomes available, the application should migrate back to the spot node pool.
What did you see instead?
The application remains on the non-spot node pool and does not move back to the spot node pool.
Verbose logs (log-level v5) do not provide any details or errors related to the migration.