joblib-spark only uses the executor nodes that are available when the cluster starts; it never triggers Databricks autoscaling to bring up another node, even when the node it is running on is saturated on all cores.
I found some comments from 2020 saying that autoscaling driven by Spark workloads is generally problematic compared to autoscaling managed by Azure itself, so perhaps autoscaling is simply not expected to work here.
Should it work?
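For context, here is a minimal sketch of the usage pattern that exhibits the behavior, assuming a Databricks cluster with autoscaling enabled. The task function, job count, and sleep duration are illustrative, not from a real workload:

```python
# Sketch of a joblib-spark workload that stays on the initial nodes.
# Assumes pyspark and joblibspark are installed and a Spark session
# exists (Databricks provides one automatically).
import time

from joblib import Parallel, delayed, parallel_backend
from joblibspark import register_spark

register_spark()  # register the "spark" backend with joblib


def work(i):
    time.sleep(60)  # keep every granted slot busy
    return i


# Requesting more parallel jobs than the currently running nodes have
# cores does not appear to make Databricks scale up; the extra tasks
# just queue on the existing executors.
with parallel_backend("spark", n_jobs=64):
    results = Parallel()(delayed(work)(i) for i in range(64))
```

With an autoscaling cluster I would have expected the queued tasks to register as pending demand and trigger a scale-up, but the cluster stays at its initial size.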