
LeaderElector.canBecomeLeader should read the leaseDuration from the Kube Lease object and not from local configuration #7160


Description

@zoliszel

Describe the bug

Hi there,

To decide whether the current process can become leader, io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector infers the expiry of a lease from its local lease duration configuration:

  protected final boolean canBecomeLeader(LeaderElectionRecord leaderElectionRecord) {
    return Utils.isNullOrEmpty(leaderElectionRecord.getHolderIdentity())
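        // leaseDuration below comes from this process's local LeaderElectionConfig,
        // not from the Lease record fetched from the cluster: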
        || now().isAfter(leaderElectionRecord.getRenewTime().plus(leaderElectionConfig.getLeaseDuration()));
  }

This is incorrect: applications within the same cluster can (and will) use different Kubernetes API clients, and even with the same client each application can choose its own lease duration when acquiring the lease.

The duration of the lease is maintained in the lease.spec.leaseDurationSeconds field, which is carried in the LeaderElectionRecord object; that value should be used to determine how long the lease is held by the current owner.

This can lead to premature loss of the lease if two clients using different lease durations are deployed into the same cluster.
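A minimal sketch of the expected check, assuming the LeaderElectionRecord built from the Lease exposes the lease.spec.leaseDurationSeconds value via a getLeaseDuration() accessor:

  protected final boolean canBecomeLeader(LeaderElectionRecord leaderElectionRecord) {
    return Utils.isNullOrEmpty(leaderElectionRecord.getHolderIdentity())
        // compare against the duration recorded in the Lease itself,
        // not against this process's local configuration
        || now().isAfter(leaderElectionRecord.getRenewTime().plus(leaderElectionRecord.getLeaseDuration()));
  }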

Fabric8 Kubernetes Client version

SNAPSHOT

Steps to reproduce

  • Prepare two LeaderElector instances:
    • LE1: lease duration of 10s, retry period of 1s
    • LE2: lease duration of 30s, retry period of 20s
  • Start LE2 first, let it acquire the lease, then start LE1 (a sketch of such a setup follows this list)
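For illustration, a rough sketch of such a setup against the fabric8 leader election API; the namespace, lease name, identities, and timings are placeholders, and the builder methods used are the ones I'd expect on LeaderElectionConfigBuilder:

  import java.time.Duration;

  import io.fabric8.kubernetes.client.KubernetesClient;
  import io.fabric8.kubernetes.client.KubernetesClientBuilder;
  import io.fabric8.kubernetes.client.extended.leaderelection.LeaderCallbacks;
  import io.fabric8.kubernetes.client.extended.leaderelection.LeaderElectionConfigBuilder;
  import io.fabric8.kubernetes.client.extended.leaderelection.resourcelock.LeaseLock;

  public class LeaseDurationRepro {

    // Starts an elector on its own thread; run() blocks while the elector is active.
    static void startElector(KubernetesClient client, String id, Duration lease, Duration retry) {
      Thread t = new Thread(() -> client.leaderElector()
          .withConfig(new LeaderElectionConfigBuilder()
              .withLock(new LeaseLock("default", "repro-lease", id)) // both electors share one Lease
              .withLeaseDuration(lease)
              .withRenewDeadline(lease.minusSeconds(2))
              .withRetryPeriod(retry)
              .withLeaderCallbacks(new LeaderCallbacks(
                  () -> System.out.println(id + ": became leader"),
                  () -> System.out.println(id + ": lost leadership"),
                  leader -> System.out.println(id + ": current leader is " + leader)))
              .build())
          .build()
          .run());
      t.start();
    }

    public static void main(String[] args) throws InterruptedException {
      KubernetesClient client = new KubernetesClientBuilder().build();
      startElector(client, "LE2", Duration.ofSeconds(30), Duration.ofSeconds(20));
      Thread.sleep(5_000); // give LE2 time to acquire the lease
      startElector(client, "LE1", Duration.ofSeconds(10), Duration.ofSeconds(1));
      // With the current check, LE1 considers the lease expired once LE2's last
      // renewTime is more than LE1's own 10s leaseDuration old, even though the
      // Lease object says 30s, so LE1 takes over while LE2 is still renewing.
    }
  }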

Expected behavior

LE2 should keep hold of the lease until it is shut down, but instead LE1 takes over.

Runtime

Kubernetes (vanilla)

Kubernetes API Server version

1.33

Environment

Linux

Fabric8 Kubernetes Client Logs

Additional context

No response
