LogWatch hangs forever when a connection is interrupted #7163

Open
@19Serhii99

Description
Describe the bug

I am testing the behavior of the Fabric8 LogWatch under network instability. I am trying to determine whether it fails or recovers when a connection is lost, so that I can handle this scenario as needed. I ran kubectl proxy to control connectivity, so my master URL is http://127.0.0.1:8001.

The following snippet is the minimum needed to reproduce the issue:

try (LogWatch watch = client.pods().inNamespace("$NAMESPACE").withName("$POD_NAME").watchLog();
     BufferedReader reader = new BufferedReader(new InputStreamReader(watch.getOutput()))) {
  String line;
  while ((line = reader.readLine()) != null) {
    // Handle each log line
  }
}

While the program is running, I terminate the proxy to break the connection and then start kubectl proxy again. At that point reader.readLine() blocks forever and no exception reaches the user. The only visible error is the following red log line, printed twice to the console:
[vert.x-eventloop-thread-1] ERROR io.vertx.core.net.impl.ConnectionBase - Connection reset

So it never recovers, never stops, and never notifies the user of any problem, and there is no way to intercept the failure on the client/user side. In debug mode I can see that a SocketException is thrown, but it is swallowed somewhere at a higher level (I am not sure where).
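Until the failure is surfaced by the client, one user-side workaround is to read each line with a timeout so a silently stalled stream can at least be detected. Below is a minimal, self-contained sketch of that idea; the PipedInputStream that yields one line and then blocks stands in for watch.getOutput(), and none of the code below is Fabric8 API:

```java
import java.io.*;
import java.util.concurrent.*;

public class TimedLogReader {
  // Reads one line, giving up after the timeout so a silently
  // stalled stream does not block the caller forever.
  static String readLineWithTimeout(BufferedReader reader, ExecutorService pool,
                                    long timeout, TimeUnit unit)
      throws IOException, TimeoutException, InterruptedException {
    Future<String> next = pool.submit(reader::readLine);
    try {
      return next.get(timeout, unit);
    } catch (ExecutionException e) {
      throw new IOException("log stream failed", e.getCause());
    } catch (TimeoutException e) {
      next.cancel(true); // interrupt the blocked read
      throw e;
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    // A stream that yields one line and then blocks forever,
    // mimicking the stalled LogWatch output described above.
    PipedOutputStream out = new PipedOutputStream();
    PipedInputStream in = new PipedInputStream(out);
    out.write("first line\n".getBytes());
    BufferedReader reader = new BufferedReader(new InputStreamReader(in));

    System.out.println(readLineWithTimeout(reader, pool, 2, TimeUnit.SECONDS));
    try {
      readLineWithTimeout(reader, pool, 1, TimeUnit.SECONDS);
    } catch (TimeoutException stalled) {
      System.out.println("stalled");
    }
    pool.shutdownNow();
  }
}
```

This only detects the stall; it cannot distinguish "connection silently dead" from "pod produced no logs for a while", which is why a real fix has to come from the client itself.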

For comparison, both kubectl logs -n $NAMESPACE -f $POD_NAME and a plain curl -X GET "http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods/$POD_NAME/log?follow=true" error out on connection interruption as expected (e.g., curl: (56) Recv failure: Connection was reset).

Fabric8 Kubernetes Client version

7.3.1

Steps to reproduce

  1. Run kubectl proxy
  2. Start Fabric8 LogWatch
  3. Terminate the proxy (Ctrl + C in the terminal)
  4. Run kubectl proxy
  5. Observe that no further logs are received and no exception is thrown; only the Connection reset error logs appear

Expected behavior

As a user, I would like the library to automatically retry the watch using the backoff options passed to Config when creating the KubernetesClient, and to throw an exception / invoke an error handler once retries are exhausted. As far as I understand, a log-watch connection cannot technically be resumed, so at the very least an exception should be thrown. The user could then stream logs from the beginning again or specify sinceTime() to continue from the point of failure.
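The retry-with-backoff behavior requested above can also be layered on by the caller once failures are actually surfaced. A minimal, generic sketch of exhaustible exponential backoff (the Callable here is a stand-in for re-opening a log watch; no Fabric8 types are used):

```java
import java.util.concurrent.Callable;

public class Backoff {
  // Retries an action with exponential backoff, rethrowing the last
  // failure once the attempts are exhausted -- the behavior this
  // report asks the client to provide for log watches.
  static <T> T retryWithBackoff(Callable<T> action, int maxAttempts,
                                long initialDelayMs) throws Exception {
    long delay = initialDelayMs;
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return action.call();
      } catch (Exception e) {
        last = e;
        if (attempt < maxAttempts) {
          Thread.sleep(delay);
          delay *= 2; // double the wait between attempts
        }
      }
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    int[] calls = {0};
    // Fails twice, then succeeds -- stands in for re-opening a watch
    // that only comes back once the proxy is running again.
    String result = retryWithBackoff(() -> {
      if (++calls[0] < 3) throw new java.io.IOException("connection reset");
      return "reconnected on attempt " + calls[0];
    }, 5, 10);
    System.out.println(result);
  }
}
```

In a real retry loop the caller would record the timestamp of the last line seen and pass it to sinceTime() when re-opening the watch, as suggested above.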

Runtime

Kubernetes (vanilla)

Kubernetes API Server version

other (please specify in additional context)

Environment

Azure, Windows

Fabric8 Kubernetes Client Logs

[vert.x-eventloop-thread-1] ERROR io.vertx.core.net.impl.ConnectionBase - Connection reset
[vert.x-eventloop-thread-1] ERROR io.vertx.core.net.impl.ConnectionBase - Connection reset

Additional context

LogWatch was tested on Windows 11. The K8s cluster runs on Azure Kubernetes Service, version 1.31.8.
